THOMAS DIEGO GABRIEL FRANCIS | Data last updated: 2024.03.29

Main Research Themes

Digital humans
Keywords: generative AI; 3D and 4D capture; motion retargeting; gesture
2023.01.
Aerial-based outdoor 3D scene mapping
Keywords: aerial drone; RGB-D SLAM; outdoor scene
2020.04~2022.04.
AI-based avatar animation synthesis
Keywords: deep learning; avatar animation; dense deformation; texture
2021.06~2022.06.
3D shape from a single image
Keywords: deep learning; 3D shape estimation
2019.04~2021.08.
Virtual assistant for early childhood education
Keywords: 3D scene understanding; educational applications; 3D virtual assistant
2018.05~2020.06.
High-frame-rate 3D reconstruction with multiple cameras
Keywords: RGB-D camera; high frame rate; multi-view set-up; real time; distributed system; GPU optimization; volumetric reconstruction; fast and uncontrolled motion
2017.12~2018.02.
3D reconstruction of human bodies in dynamic scenes
Keywords: RGB-D camera; fast motion; skeleton; deforming bounding boxes; volumetric depth fusion; ICP; GPU optimization; large-scale scene
2017.04~2018.02.
3D facial reconstruction and expression tracking
Keywords: RGB-D camera; facial expression; blendshape; template mesh; texturing; 3D modeling; retargeting; deviation mapping; real time
2015.04~2018.02.
3D modeling
Keywords: RGB-D camera; SLAM; 3D modeling
2012.04~2017.04.
Research Projects
NeRF-based multi-view 3D shape reconstruction using Centroidal Voronoi Tessellation
2022.04~2023.06, Principal investigator: Diego Thomas, Kyushu University (Japan)
We investigate the use of Centroidal Voronoi Tessellation (CVT) to jointly optimize 3D shape, appearance, and the discretization of 3D space in order to reconstruct high-resolution 3D meshes from multi-view images.
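As background on the CVT component: a centroidal Voronoi tessellation is one in which every site coincides with the centroid of its Voronoi region, and it is commonly approximated with Lloyd relaxation. The sketch below is a generic 2D illustration of that idea (all names are mine; it is not the project's code):

```python
import numpy as np

def lloyd_cvt(points, n_sites=16, n_iters=50, seed=0):
    """Approximate a CVT of a sample set by Lloyd relaxation:
    assign each sample to its nearest site, then move every site
    to the centroid of its assigned samples, and repeat."""
    rng = np.random.default_rng(seed)
    sites = points[rng.choice(len(points), n_sites, replace=False)]
    for _ in range(n_iters):
        # Nearest-site assignment (discrete Voronoi regions).
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each site to the centroid of its region.
        for k in range(n_sites):
            mask = labels == k
            if mask.any():
                sites[k] = points[mask].mean(axis=0)
    return sites, labels

# Demo on uniform 2D samples: after relaxation the sites spread out evenly.
samples = np.random.default_rng(1).random((2000, 2))
sites, labels = lloyd_cvt(samples, n_sites=8)
```

In the project itself the discretization is optimized jointly with shape and appearance, which goes well beyond this isolated relaxation loop.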
Multi-view 3D pedestrian localisation
2021.03~2023.04, Principal investigator: Joao Paulo Lima, University of Pernambuco, Brazil
The project is about identifying, localizing, and tracking pedestrians in 3D from multi-view videos.
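For context, recovering a 3D position from calibrated multi-view observations is classically done with linear (DLT) triangulation. The sketch below is a generic illustration with hypothetical camera matrices, not the project's actual pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X with
    x1 ~ P1 X and x2 ~ P2 X by solving A X = 0 in the
    least-squares sense via SVD."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector = homogeneous 3D point
    return X[:3] / X[3]   # de-homogenize

# Two hypothetical cameras observing the point (0.5, 0.2, 4.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted on x
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noiseless observations, as here, the recovered point matches the ground truth; with real detections the SVD gives the algebraic least-squares solution.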
Realistic environment rendering with real humans for architecture project visualization
2021.04~2022.05, Principal investigator: Diego Thomas, Kyushu University
This is a joint project with Professor Koga (architectural design) and Professor Ochiai (mathematics for industry) on generating immersive virtual environments of architectural projects to support design and evaluation.
Deep human avatar animation
2021.05~2022.05, Principal investigator: Diego Thomas, Kyushu University, Japan
This is a joint research project with HUAWEI on learning to generate avatar animations from 2D videos in real time.
Dynamic human motion tracking using dual quaternion algebra
2020.07~2022.03, Principal investigator: Stephane Breuil, National Institute of Informatics, Japan
Joint research project with Vincent Nozick from Gustave Eiffel University in France. The project is about reconstructing the non-rigid motion of human bodies captured by RGB-D cameras.
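A unit dual quaternion packs a rotation and a translation into a single algebraic object, which makes rigid motions convenient to compose and interpolate. As a generic illustration of the algebra (helper names are mine, not the project's), the following sketch applies one such transform to a point:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_rt(q_rot, t):
    """Unit dual quaternion (real, dual) for rotation q_rot followed
    by translation t: the dual part is 0.5 * (0, t) * q_rot."""
    q_t = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(q_t, q_rot)

def dq_transform(real, dual, p):
    """Apply the rigid motion encoded by a unit dual quaternion to p."""
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    # Rotation via the sandwich product q p q* (q is unit).
    p_rot = qmul(qmul(real, np.array([0.0, *p])), conj)[1:]
    # Translation recovered from the dual part: t = 2 * dual * conj(real).
    t = 2.0 * qmul(dual, conj)[1:]
    return p_rot + t

# 90-degree rotation about z followed by translation (1, 2, 3).
angle = np.pi / 2
q_rot = np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])
real, dual = dq_from_rt(q_rot, np.array([1.0, 2.0, 3.0]))
p_new = dq_transform(real, dual, np.array([1.0, 0.0, 0.0]))  # -> (1, 3, 3)
```

Non-rigid tracking extends this by blending several such unit dual quaternions per surface point, which is where the algebra's interpolation properties pay off.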
Weakly-supervised human 3D body shape estimation from single images
2020.09~2021.08, Principal investigator: Jane Wu, Stanford, U.S.A.
We are working on a solution to learn to estimate the 3D shape of human bodies from 2D observations in an unsupervised manner.
Personalized avatars with real emotions for next generation holoportation systems
2020.01~2021.01, Principal investigator: Diego Thomas, Kyushu University, Microsoft Research Asia
Personalized avatars are the key to more natural communication in virtual spaces. If you can express yourself not only with your own voice but also with your own body, expressions, and emotions, you can communicate better. This is also a powerful way to avoid being deceived by fake characters, and there is a huge demand for realistic avatars and emotes, with a big business opportunity. When communicating in a virtual space it is important to transmit real expressions and real emotions, but it is also important to keep the possibility of remaining anonymous. While ultra-realistic avatars that reproduce someone's own appearance, skin, and face will surely break anonymity, body motion and gesture can convey a large part of real expressions and emotions without revealing a person's identity. In this project, we aim to capture full-body 3D motion and fine gestures and retarget them into a mixed reality telepresence system (also called holoportation) deployed on the Microsoft HoloLens. To achieve our objective there are three main challenges: (1) detailed 3D motion of the human body must be captured from standard RGB cameras; (2) the human motion must be faithfully retargeted to a virtual avatar, which may have different animation characteristics than the human; (3) the avatar must be displayed in 3D with the HoloLens while accounting for the surrounding illumination conditions. The fundamental findings of this project will provide new insights into human motion estimation, retargeting to bodies with different kinematics, and environment mapping with mixed reality devices.
Unifying multiple RGB and depth cameras for real-time large-scale dynamic 3D modeling with unmanned micro aerial vehicles.
2019.04~2021.04, 代表者:Diego Thomas, Kyushu University, KAKENHI
This project studies real-time 3D reconstruction of large-scale dynamic scenes from unmanned micro aerial vehicles. The objective is to investigate the fusion of RGB and depth images captured by multiple micro aerial vehicles for real-time 3D reconstruction of large-scale dynamic 3D scenes. The fundamental methods developed here will provide the algorithms needed to build large-scale dynamic 3D models and to understand dynamic 3D scenes in real time.
Facial motion capture
2017.10~2018.09, Principal investigator: Sun Fujiang, President of Huawei Japan Research Center, Huawei Technologies Japan K.K. (China)
This project is divided into three stages: the first stage roughly evaluates our base algorithm; the second evaluates the robustness of the overall reconstruction (expression) capability when transferring the facial impression of any person to any 3D avatar; and the third improves facial model quality (to provide a complete facial model, we need to add eyeballs and a mouth).
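The blendshape representation used in the related facial-capture work expresses an animated face as a neutral mesh plus a weighted sum of per-expression displacement bases. A minimal sketch with toy data (the meshes and weights below are made up for illustration):

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Linear blendshape model: neutral mesh (V, 3) plus a weighted
    sum of K expression displacement bases of shape (K, V, 3)."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy mesh with 4 vertices and 2 expression bases (hypothetical data).
neutral = np.zeros((4, 3))
deltas = np.stack([
    np.full((4, 3), 0.1),    # e.g. a "smile" displacement basis
    np.full((4, 3), -0.2),   # e.g. a "frown" displacement basis
])
face = blend(neutral, deltas, np.array([0.5, 0.25]))
```

Expression transfer then amounts to estimating the weight vector on the source face and replaying it on a target avatar that has its own basis meshes.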
Research Achievements
Selected Original Papers
Selected Conference Presentations
Academic Society Activities
Roles at Conferences, Meetings, and Symposia
2024.08.06~2024.08.09, The 27th Meeting on Image Recognition and Understanding (MIRU2024), Area chair.
2022.02.22~2022.03.01, AAAI 2022, Senior Program Committee.
2022.06.19~2022.06.24, CVPR 2022, Program committee.
2021.03~2021.05.19, WACV 2021, Program committee.
2021.08~2021.05.19, 30th International Joint Conference on Artificial Intelligence (IJCAI-21), Senior Program Committee Member.
2021.06~2021.05.19, CVPR 2021, Program committee.
2021.03~2021.05.19, IPSJ 83rd National Convention, Session chair.
2020.11.25~2020.11.28, 3D Vision (3DV 2020), Local chair.
2020.06.14~2020.06.19, CVPR 2020, Program committee.
2020.03.01~2020.03.05, WACV 2020, Program committee.
2019.11.20~2019.11.23, Machine Perception and Robotics (MPR 2019), Program chair.
2019.11.18~2019.11.22, The 9th Pacific-Rim Symposium on Image and Video Technology (PSIVT 2019), Area chair.
2018.05.14~2018.05.15, The 12th International Workshop on Information Search, Integration, and Personalization (ISIP2018), Publicity chair.
2017.09.25~2017.09.26, JFLI-KYUDAI JOINT WORKSHOP ON INFORMATICS, Local arrangement chair.
2016.11.27~2016.12.01, SITIS2016, Program Committee.
2016.08.01~2016.08.04, MIRU2016, Program Committee.
2015.07.27~2015.07.30, MIRU2015, Program Committee.
2014.07.28~2014.07.31, MIRU2014, Program committee.
Review of Academic Papers

| Fiscal year | Foreign-language journals (refereed) | Japanese-language journals (refereed) | International conference proceedings (refereed) | Domestic conference proceedings (refereed) | Total |
|---|---|---|---|---|---|
| FY2023 | 3 | | 25 | 3 | 31 |
| FY2022 | 8 | | 18 | 3 | 29 |
| FY2020 | 4 | 2 | 25 | 4 | 35 |
| FY2019 | 15 | | 25 | | 40 |
| FY2018 | 20 | | 20 | | 40 |
| FY2017 | 10 | | 24 | | 34 |
| FY2016 | 2 | | 19 | 1 | 22 |
| FY2015 | 1 | | 12 | | 13 |
Other Research Activities
Overseas Visits and Research Experience Abroad
Center for Machine Perception (CMP), Czech Republic, 2010.02~2010.02.
INRIA Grenoble, France, 2016.12~2016.12.
Microsoft Research Asia, China, 2011.03~2011.07.
Awards
Best poster presentation award, Machine Perception and Robotics (MPR 2019), 2019.11.
Best paper award, The 9th Pacific-Rim Symposium on Image and Video Technology (PSIVT 2019), 2019.11.
Best poster award, IW-FCV2019, 2019.02.
Outstanding research achievement and contribution to ASPCIT 2019 Annual Meeting Invited Presentation, Asia Pacific Society for Computing and Information Technology, 2019.07.
Outstanding reviewer, MIRU 2015, 2015.07.
Best student award, National Institute of Informatics, 2012.03.
Research Funding
Grants-in-Aid for Scientific Research (MEXT / JSPS)
FY2023~FY2025, Grant-in-Aid for Scientific Research (B), Principal investigator, A new data-driven approach to bring humanity into virtual worlds with computer vision.
FY2019~FY2020, Grant-in-Aid for Early-Career Scientists, Principal investigator, Unifying multiple RGB and depth cameras for real-time large-scale dynamic 3D modeling with unmanned micro aerial vehicles.
FY2015~FY2017, Grant-in-Aid for JSPS Fellows, Principal investigator, Large-scale and dynamic 3D reconstruction using an RGB-D camera.
Other JSPS Programs (excluding Grants-in-Aid)
FY2022~FY2023, JSPS Invitational Fellowship for Research in Japan (short term), Principal investigator, Multi-Camera 3D Pedestrian Detection with Domain Adaptation and Generalization.
Joint and Contract Research (excluding competitive funding)
2020.04~2021.03, Principal investigator, Human body 3D shape estimation, animation, and gesture synthesis.
2021.06~2022.05, Principal investigator, AI-based animation of 3D avatars.
2017.09~2018.08, Collaborator, Facial motion capture system.
Internal University Funding
FY2021~FY2022, QR Tsubasa Project, Principal investigator, A new approach for supporting architectural works with virtual reality environments.
FY2020~FY2022, SENTAN-Q, Principal investigator, Two years of training and international research.
FY2020, QR Wakaba Challenge, Principal investigator, 3D shape estimation and motion retargeting from 2D videos for future holoportation systems.
FY2017~FY2018, Start-up support fund, Co-investigator, Free-form dynamic 3D scene reconstruction at high resolution.