Updated on 2024/10/08

Information

 


 
IWAGUCHI TAKAFUMI
 
Organization
Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology, Assistant Professor
School of Engineering, Department of Electrical Engineering and Computer Science (Concurrent)
Graduate School of Information Science and Electrical Engineering, Department of Information Science and Technology (Concurrent)
Title
Assistant Professor
Profile
My research interests lie in computer vision, especially in light transport analysis and computational photography.

Degree

  • Doctor of Engineering

Research History

  • 2017.8 - 2018.3, Carnegie Mellon University, Visiting Researcher

Research Interests・Research Keywords

  • Research theme: 3D reconstruction of underwater scene

    Keyword: computer vision, underwater 3D reconstruction, robot operation

    Research period: 2022.4

  • Research theme: Scene analysis using light transport

    Keyword: computer vision, light transport analysis

    Research period: 2019.4

Papers

  • Programmable Non-Epipolar Indirect Light Transport: Capture and Analysis Reviewed International journal

    H. Kubo, S. Jayasuriya, T. Iwaguchi, T. Funatomi, Y. Mukaigawa

    IEEE Transactions on Visualization and Computer Graphics   2019.10

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • Deep learning approach using SPECT-to-PET translation for attenuation correction in CT-less myocardial perfusion SPECT imaging

    Kawakubo, M; Nagao, M; Kaimoto, Y; Nakao, R; Yamamoto, A; Kawasaki, H; Iwaguchi, T; Matsuo, Y; Kaneko, K; Sakai, A; Sakai, S

    ANNALS OF NUCLEAR MEDICINE   38 ( 3 )   199 - 209   2024.3   ISSN:0914-7187 eISSN:1864-6433

     More details

    Language:English   Publisher:Annals of Nuclear Medicine  

    Objective: Deep learning approaches have attracted attention for improving the scoring accuracy in computed tomography-less single photon emission computed tomography (SPECT). In this study, we proposed a novel deep learning approach referring to positron emission tomography (PET). The aims of this study were to analyze the agreement of representative voxel values and perfusion scores of SPECT-to-PET translation model-generated SPECT (SPECTSPT) against PET in 17 segments according to the American Heart Association (AHA). Methods: This retrospective study evaluated the patient-to-patient stress, resting SPECT, and PET datasets of 71 patients. The SPECTSPT generation model was trained (stress: 979 image pairs, rest: 987 image pairs) and validated (stress: 421 image pairs, rest: 425 image pairs) using 31 cases of SPECT and PET image pairs using an image-to-image translation network. Forty of 71 cases of left ventricular base-to-apex short-axis images were translated to SPECTSPT in the stress and resting state (stress: 1830 images, rest: 1856 images). Representative voxel values of SPECT and SPECTSPT in the 17 AHA segments against PET were compared. The stress, resting, and difference scores of 40 cases of SPECT and SPECTSPT were also compared in each of the 17 segments. Results: For AHA 17-segment-wise analysis, stressed SPECT but not SPECTSPT voxel values showed significant error from PET at basal anterior regions (segments #1, #6), and at mid inferoseptal regions (segments #8, #9, and #10). SPECT, but not SPECTSPT, voxel values at resting state showed significant error at basal anterior regions (segments #1, #2, and #6), and at mid inferior regions (segments #8, #9, and #11). Significant SPECT overscoring was observed against PET in basal-to-apical inferior regions (segments #4, #10, and #15) during stress. No significant overscoring was observed in SPECTSPT at stress, and only moderate over and underscoring in the basal inferior region (segment #4) was found in the resting and difference states. Conclusions: Our PET-supervised deep learning model is a new approach to correct well-known inferior wall attenuation in SPECT myocardial perfusion imaging. As standalone SPECT systems are used worldwide, the SPECTSPT generation model may be applied as a low-cost and practical clinical tool that provides powerful auxiliary information for the diagnosis of myocardial blood flow.

    DOI: 10.1007/s12149-023-01889-y

    Web of Science

    Scopus

    PubMed

  • Gated SPECT-Derived Myocardial Strain Estimated From Deep-Learning Image Translation Validated From N-13 Ammonia PET

    Kawakubo M., Nagao M., Yamamoto A., Kaimoto Y., Nakao R., Kawasaki H., Iwaguchi T., Inoue A., Kaneko K., Sakai A., Sakai S.

    Academic Radiology   2024   ISSN:10766332

     More details

    Language:English   Publisher:Academic Radiology  

    Rationale and Objectives: This study investigated the use of deep learning-generated virtual positron emission tomography (PET)-like gated single-photon emission tomography (SPECTVP) for assessing myocardial strain, overcoming limitations of conventional SPECT. Materials and Methods: SPECT-to-PET translation models for short-axis, horizontal, and vertical long-axis planes were trained using image pairs from the same patients in stress (720 image pairs from 18 patients) and resting states (920 image pairs from 23 patients). Patients without ejection-fraction changes during SPECT and PET were selected for training. We independently analyzed circumferential strains from short-axis-gated SPECT, PET, and model-generated SPECTVP images using a feature-tracking algorithm. Longitudinal strains were similarly measured from horizontal and vertical long-axis images. Intraclass correlation coefficients (ICCs) were calculated with two-way random single-measure SPECT and SPECTVP (PET). ICCs (95% confidence intervals) were defined as excellent (≥ 0.75), good (0.60–0.74), moderate (0.40–0.59), or poor (≤ 0.39). Results: Moderate ICCs were observed for SPECT-derived stressed circumferential strains (0.56 [0.41–0.69]). Excellent ICCs were observed for SPECTVP-derived stressed circumferential strains (0.78 [0.68–0.85]). Excellent ICCs of stressed longitudinal strains from horizontal and vertical long axes, derived from SPECT and SPECTVP, were observed (0.83 [0.73–0.90], 0.91 [0.85–0.94]). Conclusion: Deep-learning SPECT-to-PET transformation improves circumferential strain measurement accuracy using standard-gated SPECT. Furthermore, the possibility of applying longitudinal strain measurements via both PET and SPECTVP was demonstrated. This study provides preliminary evidence that SPECTVP obtained from standard-gated SPECT with postprocessing potentially adds clinical value through PET-equivalent myocardial strain analysis without increasing the patient burden.

    DOI: 10.1016/j.acra.2024.06.047

    Scopus

    PubMed
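
    The agreement analysis above uses the two-way random, single-measure intraclass correlation (ICC(2,1) in the Shrout-Fleiss convention). As a reference, it can be computed from a simple ANOVA decomposition as in the sketch below; the function and variable names are illustrative assumptions, not code from the paper, and only the grading thresholds are taken from the abstract.

        import numpy as np

        def icc_2_1(data):
            """ICC(2,1): two-way random effects, absolute agreement, single measure.

            data: (n_subjects, k_methods) matrix, e.g. per-case strain values
                  measured by two methods such as SPECT and PET.
            """
            n, k = data.shape
            grand = data.mean()
            row_means = data.mean(axis=1)          # per-subject means
            col_means = data.mean(axis=0)          # per-method means
            ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
            ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
            resid = data - row_means[:, None] - col_means[None, :] + grand
            ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        def grade(icc):
            """Thresholds as stated in the abstract above."""
            if icc >= 0.75:
                return "excellent"
            if icc >= 0.60:
                return "good"
            if icc >= 0.40:
                return "moderate"
            return "poor"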

  • ActiveNeuS: Neural Signed Distance Fields for Active Stereo

    Ichimaru, K; Ikeda, T; Thomas, D; Iwaguchi, T; Kawasaki, H

    2024 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV 2024   539 - 548   2024   ISSN:2378-3826 ISBN:979-8-3503-6246-6 eISSN:2475-7888

     More details

    Publisher:Proceedings - 2024 International Conference on 3D Vision, 3DV 2024  

    3D shape reconstruction in extreme environments, such as low illumination or scattering conditions, has been an open problem and is intensively researched. Active stereo is one potential solution for such environments because of its robustness and high accuracy. However, active stereo systems usually consist of specialized system configurations with complicated algorithms, which narrows their applicability. In this paper, we propose Neural Signed Distance Fields for active stereo systems to enable implicit correspondence search and triangulation in generalized structured light. With our technique, textureless surfaces, or surfaces made effectively textureless by low-light conditions, are successfully reconstructed even from a small number of captured images. Experiments confirm that the proposed method achieves state-of-the-art reconstruction quality under such severe conditions. We also demonstrate that the proposed method works in an underwater scenario.

    DOI: 10.1109/3DV62453.2024.00014

    Web of Science

    Scopus
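
    As background on signed distance fields, the implicit representation used above, the sketch below shows plain sphere tracing of an analytic SDF to locate a surface along a ray. It is a generic illustration under assumed names and parameters, not the paper's neural, volume-rendering-based pipeline.

        import numpy as np

        def sphere_sdf(p, center=np.array([0.0, 0.0, 2.0]), radius=0.5):
            """Analytic signed distance to a sphere (negative inside)."""
            return np.linalg.norm(p - center) - radius

        def sphere_trace(origin, direction, sdf, t_max=10.0, eps=1e-4, max_steps=128):
            """March along the ray by the SDF value until the surface is reached."""
            direction = direction / np.linalg.norm(direction)
            t = 0.0
            for _ in range(max_steps):
                d = sdf(origin + t * direction)
                if d < eps:
                    return origin + t * direction   # surface point
                t += d                              # safe step: the SDF bounds the distance
                if t > t_max:
                    break
            return None                             # ray missed the surface

        # Example: a ray from the origin along +z hits the sphere at z = 1.5.
        hit = sphere_trace(np.zeros(3), np.array([0.0, 0.0, 1.0]), sphere_sdf)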

  • Underwater Image Enhancement by Transformer-based Diffusion Model with Non-uniform Sampling for Skip Strategy

    Tang, Y; Kawasaki, H; Iwaguchi, T

    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023   5419 - 5427   2023   ISBN:979-8-4007-0108-5

     More details

    Publisher:MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia  

    In this paper, we present an approach to image enhancement with a diffusion model for underwater scenes. Our method adapts conditional denoising diffusion probabilistic models to generate the corresponding enhanced images, taking the underwater images and Gaussian noise as inputs. Additionally, to improve the efficiency of the reverse process in the diffusion model, we adopt two strategies. First, we propose a lightweight transformer-based denoising network, which effectively reduces the time of each network forward pass per iteration. Second, we introduce a skip sampling strategy to reduce the number of iterations. Building on the skip sampling strategy, we propose two non-uniform sampling methods for the sequence of time steps, namely piecewise sampling and searching with an evolutionary algorithm. Both are effective and further improve performance over the previous uniform sampling with the same number of steps. Finally, we conduct a comparative evaluation on widely used underwater enhancement datasets between recent state-of-the-art methods and the proposed approach. The experimental results show that our approach achieves both competitive performance and high efficiency. Our code is available at https://github.com/piggy2009/DM-underwater.

    DOI: 10.1145/3581783.3612378

    Web of Science

    Scopus
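
    The skip-sampling idea above, running the reverse diffusion process on a reduced, non-uniform subsequence of time steps, can be illustrated with a small scheduling helper. This is a generic sketch with assumed parameter values, not the authors' released implementation (see the repository linked in the abstract).

        import numpy as np

        def piecewise_timesteps(T=1000, n_steps=25, split=0.2, dense_ratio=0.6):
            """Build a non-uniform (piecewise) subsequence of reverse-diffusion steps.

            Spends `dense_ratio` of the step budget on the last `split` fraction of
            the chain (small t), where fine image detail is formed, and covers the
            remaining large-t range with coarser spacing.
            """
            n_dense = int(n_steps * dense_ratio)
            n_coarse = n_steps - n_dense
            boundary = int(T * split)
            dense = np.linspace(0, boundary, n_dense, endpoint=False)
            coarse = np.linspace(boundary, T - 1, n_coarse)
            steps = np.unique(np.concatenate([dense, coarse]).astype(int))
            return steps[::-1]          # the reverse process runs from large t to small t

        # Example: 25 non-uniform steps instead of the full 1000-step chain.
        print(piecewise_timesteps())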

  • Surface normal estimation from optimized and distributed light sources using DNN-based photometric stereo

    Iwaguchi, T; Kawasaki, H

    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV)   311 - 320   2023   ISSN:2472-6737 ISBN:978-1-6654-9346-8

     More details

    Publisher:Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023  

    Photometric stereo (PS) is a major technique for recovering a surface normal for each pixel. However, since it assumes a Lambertian surface and directional light, a large number of images is usually required to avoid the effects of outliers and noise. In this paper, we propose a technique to reduce the number of images by using distributed light sources whose patterns are optimized by a deep neural network (DNN). In addition, to realize the distributed light efficiently, we use an optical diffuser with a video projector: the diffuser is illuminated by the projector from behind, and the illuminated area on the diffuser acts as an arbitrarily shaped area light. To estimate the surface normal under the distributed light source, we propose a near-light photometric stereo (NLPS) using a DNN. Since optimization of the distributed-light pattern is achieved by a differentiable renderer, it is connected with the NLPS network, achieving end-to-end learning. Experiments show that our method successfully estimates surface normals from a small number of images.

    DOI: 10.1109/WACV56688.2023.00039

    Web of Science

    Scopus
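
    For context on the classical assumptions mentioned above (a Lambertian surface and distant directional lights), a minimal least-squares photometric stereo solve looks like the sketch below. It is a textbook baseline with assumed array shapes, not the paper's DNN-based near-light method.

        import numpy as np

        def lambertian_photometric_stereo(images, light_dirs):
            """Classical least-squares photometric stereo.

            images:     (K, H, W) intensities captured under K distant directional lights
            light_dirs: (K, 3) unit light directions
            Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
            """
            K, H, W = images.shape
            I = images.reshape(K, -1)                                # (K, H*W)
            # Lambertian model: I = L @ (albedo * normal); solve in least squares.
            G, _, _, _ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
            albedo = np.linalg.norm(G, axis=0)
            normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
            return normals, albedo.reshape(H, W)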

  • AutoEnhancer: Transformer on U-Net Architecture Search for Underwater Image Enhancement

    Tang, Y; Iwaguchi, T; Kawasaki, H; Sagawa, R; Furukawa, R

    COMPUTER VISION - ACCV 2022, PT III   13843   120 - 137   2023   ISSN:0302-9743 ISBN:978-3-031-26312-5 eISSN:1611-3349

     More details

    Publisher:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    Deep neural architectures have played an important role in underwater image enhancement in recent years. Although most approaches have successfully introduced different structures (e.g., U-Net, generative adversarial networks (GANs), and attention mechanisms) and designed individual neural networks for this task, these networks usually rely on the designer's knowledge, experience, and intensive trials for validation. In this paper, we employ Neural Architecture Search (NAS) to automatically search for the optimal U-Net architecture for underwater image enhancement, so that we can easily obtain an effective and lightweight deep network. Besides, to enhance the representation capability of the neural network, we propose a new search space including diverse operators, which is not limited to common operators such as convolution or identity but also includes transformers. Further, we apply the NAS mechanism to the transformer and propose a selectable transformer structure. In our transformer, the multi-head self-attention module is regarded as an optional unit, and different self-attention modules can be used to replace the original one, thus deriving different transformer structures. This modification further expands the search space and boosts the learning capability of the deep model. Experiments on widely used underwater datasets show the effectiveness of the proposed method. The code is released at https://github.com/piggy2009/autoEnhancer.

    DOI: 10.1007/978-3-031-26313-2_8

    Web of Science

    Scopus

  • Self-calibration of multiple-line-lasers based on coplanarity and Epipolar constraints for wide area shape scan using moving camera

    Nagamatsu, G; Ikeda, T; Iwaguchi, T; Thomas, D; Takamatsu, J; Kawasaki, H

    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)   2022-August   3959 - 3965   2022   ISSN:1051-4651 ISBN:978-1-6654-9062-7

     More details

    Publisher:Proceedings - International Conference on Pattern Recognition  

    High-precision three-dimensional scanning systems have been intensively researched and developed. Recently, for acquiring large-scale scenes with high density, simultaneous localisation and mapping (SLAM) techniques have been preferred because of their simplicity: a single sensor is moved around freely during 3D scanning. However, to integrate multiple scans, the captured data as well as the position of each sensor must be highly accurate, making these systems difficult to use in environments not accessible by humans, such as underwater, inside the body, or in outer space. In this paper, we propose a new, flexible system with multiple line lasers that reconstructs dense and accurate 3D scenes. The advantages of our proposed system are (1) no need for synchronization or precalibration between the lasers and the camera, and (2) the ability to reconstruct 3D scenes in extreme conditions, such as underwater. We propose a new self-calibration method leveraging coplanarity and epipolar constraints. We also propose a new bundle adjustment (BA) technique tailored to the system for a dense integration of multiple line-laser scans. Experimental evaluation in both air and underwater environments confirms the advantages of the proposed method.

    DOI: 10.1109/ICPR56361.2022.9956128

    Web of Science

    Scopus

  • ROBUST CALIBRATION-MARKER AND LASER-LINE DETECTION FOR UNDERWATER 3D SHAPE RECONSTRUCTION BY DEEP NEURAL NETWORK

    Wang, HB; Iwaguchi, T; Kawasaki, H

    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP   4243 - 4247   2022   ISSN:1522-4880 ISBN:978-1-6654-9620-9

     More details

    Publisher:Proceedings - International Conference on Image Processing, ICIP  

    There are various demands for underwater 3D reconstruction; however, since most active-stereo 3D reconstruction methods focus on in-air environments, it is difficult to apply them directly underwater for several critical reasons, such as refraction, water flow, and severe attenuation. Typically, calibration markers and laser lines are strongly blurred and saturated by attenuation, which makes it difficult to recover shape in the water. Another problem is that it is difficult to keep cameras, projectors, and objects static in the water because of strong water flow, which prevents accurate calibration. In this paper, we propose a method that solves those problems with a novel algorithm using a deep neural network (DNN), an epipolar constraint, and specially designed devices. We also built a real system and tested it in the water, e.g., in a pool and in the sea. Experimental results confirm the effectiveness of the proposed method. We also demonstrate real 3D scanning in the sea.

    DOI: 10.1109/ICIP46576.2022.9897733

    Web of Science

    Scopus

  • Auto-augmentation with Differentiable Renderer for High-frequency Shape Recovery

    Tokieda, K; Iwaguchi, T; Kawasaki, H

    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)   2022-August   3952 - 3958   2022   ISSN:1051-4651 ISBN:978-1-6654-9062-7

     More details

    Publisher:Proceedings - International Conference on Pattern Recognition  

    We propose a technique to estimate a high-resolution depth image from a sparse depth image captured by a depth camera and a high-resolution shading image obtained by an RGB camera using a deep neural network (DNN). In our technique, the network model is pretrained on synthetic images generated by rendering high-frequency shapes created by arithmetic models, such as sinusoidal waves with a wide variation of parameters. Although the preparation of an appropriate synthetic dataset is critical for such tasks, it is not trivial to find a compact and optimal distribution of shape parameters. In this paper, we propose an auto-augmentation technique that optimizes the shape hyperparameters so that a minimal number of samples suffices for training the DNN. The proposed augmentation network directly optimizes the hyperparameters of a 3D scene, including the parameters of procedural shapes and their positions, by a gradient-descent algorithm via a differentiable rendering technique. Unlike previous data augmentation techniques, which only apply basic image processing operations such as affine and color transformations, the proposed method can generate an optimal training dataset by changing the 3D shape and its position using a differentiable renderer. In our experiments, we confirmed that our method improved the accuracy of high-resolution depth estimation as well as the efficiency of training the network.

    DOI: 10.1109/ICPR56361.2022.9956528

    Web of Science

    Scopus

  • Optical tomography based on shortest-path model for diffuse surface object Reviewed

    Takafumi Iwaguchi, Takuya Funatomi, Takahito Aoto, Hiroyuki Kubo, Yasuhiro Mukaigawa

    IPSJ Transactions on Computer Vision and Applications   10 ( 1 )   2018.12

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    We tackle optical measurement of the internal structure of a diffuse-surface object, which we define as an object that has a diffuse surface and a transparent interior, like grapes or hollow plastic bottles. Our approach is based on optical tomography, which reconstructs the interior from observations of the absorption of light rays from various views under projected light. The difficulty lies in the fact that a light ray entering the object changes its direction at the surface, unlike an X-ray, which travels straight through the object. We introduce a model of the light path in the object called the shortest-path model. We acquire the absorption of light rays through the object by measurement under the assumption of this model. Since this measurement provides insufficient observations to reconstruct the interior with conventional reconstruction algorithms, we also introduce a reconstruction method based on numerical optimization in which a physical requirement on the absorption is taken into account. A real-world experiment confirms that our method successfully measures the interior.

    DOI: 10.1186/s41074-018-0051-x
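
    The reconstruction above is posed as a numerical optimization that respects a physical requirement on the absorption; one common requirement of this kind is non-negativity of the absorption coefficients. The sketch below shows a generic projected-gradient solver for that constraint with assumed names and a toy interface, not the paper's exact formulation.

        import numpy as np

        def nonneg_least_squares(A, b, n_iter=500):
            """Minimize 0.5 * ||A x - b||^2 subject to x >= 0 by projected gradient.

            A: (m, n) forward model mapping absorption coefficients to observations
            b: (m,)   observed attenuation along the modeled light paths
            """
            lr = 1.0 / (np.linalg.norm(A, 2) ** 2)      # step size from the spectral norm
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = np.maximum(x - lr * grad, 0.0)      # project onto the constraint x >= 0
            return x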

  • Acquiring short range 4D light transport with synchronized projector camera system

    Takafumi Iwaguchi, Hiroyuki Kubo, Takuya Funatomi, Yasuhiro Mukaigawa, Srinivasa Narasimhan

    Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (VRST 2018)   2018.11

     More details

    Language:English   Publishing type:Research paper (other academic)  

    Light interacts with a scene in various ways. For scene understanding, light transport is useful because it describes the relationship between an incident light ray and the result of its interaction. Our goal is to acquire the 4D light transport between a projector and a camera, focusing on direct and short-range transport, which includes the effects of diffuse reflections, subsurface scattering, and inter-reflections. Acquiring the full 4D light transport is challenging since it requires a large number of measurements. We propose an efficient method to acquire short-range light transport, which is dominant in general scenes, using a synchronized projector-camera system. We show the transport profiles of various materials, including uniform and heterogeneous subsurface scattering.

    DOI: 10.1145/3281505.3283377

  • Acquiring and characterizing plane-to-ray indirect light transport

    Hiroyuki Kubo, Suren Jayasuriya, Takafumi Iwaguchi, Takuya Funatomi, Yasuhiro Mukaigawa, Srinivasa G. Narasimhan

    2018 IEEE International Conference on Computational Photography (ICCP 2018)   1 - 10   2018.5

     More details

    Language:English   Publishing type:Research paper (other academic)  

    Separation of light transport into direct and indirect paths has enabled new visualizations of light in everyday scenes. However, indirect light itself contains a variety of components from subsurface scattering to diffuse and specular interreflections, all of which contribute to complex visual appearance. In this paper, we present a new imaging technique that captures and analyzes these components of indirect light via light transport between epipolar planes of illumination and rays of received light. This plane-to-ray light transport is captured using a rectified projector-camera system where we vary the offset between projector and camera rows (implemented as synchronization delay) as well as the exposure of each camera row. The resulting delay-exposure stack of images can capture live short and long-range indirect light transport, disambiguate subsurface scattering, diffuse and specular interreflections, and distinguish materials according to their subsurface scattering properties.

    DOI: 10.1109/ICCPHOT.2018.8368461
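
    To make the delay-exposure idea above concrete: each image in the stack corresponds to a fixed offset (delay) between the illuminated projector row and the exposed camera rows, so transport components can be grouped by how far from the epipolar configuration the light returns. The sketch below is an illustrative post-processing step with assumed array shapes and an assumed threshold, not the authors' capture pipeline.

        import numpy as np

        def split_transport(stack, delays, short_max=4):
            """Group a delay stack into epipolar (direct-dominated), short-range and
            long-range indirect components by projector-camera row offset.

            stack:  (D, H, W) images, one per row offset
            delays: (D,) signed row offsets; 0 means the epipolar configuration
            """
            off = np.abs(np.asarray(delays))
            epipolar  = stack[off == 0].sum(axis=0)
            short_rng = stack[(off > 0) & (off <= short_max)].sum(axis=0)
            long_rng  = stack[off > short_max].sum(axis=0)
            return epipolar, short_rng, long_rng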

  • Estimating parameters of subsurface scattering using directional dipole model

    Xingji Zeng, Takafumi Iwaguchi, Hiroyuki Kubo, Takuya Funatomi, Yasuhiro Mukaigawa

    Proceedings of the 16th NICOGRAPH International (NICOInt 2017)   41 - 48   2017.9

     More details

    Language:English   Publishing type:Research paper (other academic)  

    Acquisition of parameters for the Bidirectional Scattering Surface Reflectance Distribution Function (BSSRDF) is of significant importance in computer graphics and vision research. In this paper, we present an inverse rendering approach combined with a newly developed BSSRDF model, the directional dipole model, for parameter estimation. To validate our algorithm, we estimate parameters from spheres with a wide range of radii in simulated and real environments, respectively. According to the observations from both simulated and real experiments, we find that surface curvature significantly affects the estimation results.

    DOI: 10.1109/NICOInt.2017.20

  • Light path alignment for computed tomography of scattering material Reviewed

    Takafumi Iwaguchi, Takuya Funatomi, Hiroyuki Kubo, Yasuhiro Mukaigawa

    IPSJ Transactions on Computer Vision and Applications   8 ( 1 )   2016.12

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    We aim to estimate internal slices of a scattering material by computed tomography (CT). In a scattering material, the light path is disturbed and spreads. Conventional CT cannot measure scattering materials because it relies on the assumption that rays are straight and parallel. We propose light path alignment to deal with scattering rays. Each path of the disturbed, scattered light is approximated by a straight line. The light paths in the object corresponding to a single incident ray are then modeled as straight paths spreading from the incident point. These spreading paths are aligned to be parallel so that they can be used directly by a conventional reconstruction algorithm.

    DOI: 10.1186/s41074-016-0003-2
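
    The straight, parallel-ray assumption of conventional CT that light path alignment works around can be made concrete with a naive unfiltered backprojection of a parallel-beam sinogram. The sketch below is a generic illustration with assumed shapes and library defaults, not the paper's pipeline.

        import numpy as np
        from scipy.ndimage import rotate

        def backproject(sinogram, angles_deg):
            """Naive unfiltered backprojection for a parallel-beam sinogram.

            sinogram:   (n_angles, n_detectors) line integrals along straight, parallel rays
            angles_deg: projection angles in degrees
            """
            n_angles, n_det = sinogram.shape
            recon = np.zeros((n_det, n_det))
            for proj, angle in zip(sinogram, angles_deg):
                smear = np.tile(proj, (n_det, 1))                      # smear the 1D projection
                recon += rotate(smear, angle, reshape=False, order=1)  # rotate to its view angle
            return recon / n_angles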


Presentations

  • Underwater Image Enhancement by Transformer-based Diffusion Model with Non-uniform Sampling for Skip Strategy International conference

    @Tang Yi, @Hiroshi Kawasaki, @Takafumi Iwaguchi

    31st ACM International Conference on Multimedia  2023.10 

     More details

    Event date: 2023.10

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Ottawa   Country:Canada  

  • Surface Normal Estimation From Optimized and Distributed Light Sources Using DNN-Based Photometric Stereo International conference

    @Takafumi Iwaguchi, @Hiroshi Kawasaki

    IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)  2023.1 

     More details

    Event date: 2023.1

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:United States  

  • AutoEnhancer: Transformer on U-Net Architecture search for Underwater Image Enhancement International conference

    @Tang Yi, @Takafumi Iwaguchi, @Hiroshi Kawasaki, @Ryusuke Sagawa, @Ryo Furukawa

    Asian Conference on Computer Vision (ACCV)  2022.12 

     More details

    Event date: 2022.12

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:China  

  • Auto-Augmentation with Differentiable Renderer for High-Frequency Shape Recovery International conference

    #Kodai Tokieda, @Takafumi Iwaguchi, @Hiroshi Kawasaki

    26th International Conference on Pattern Recognition (ICPR)  2022.8 

     More details

    Event date: 2022.8

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Canada  

  • Self-Calibrated Dense 3D Sensor Using Multiple Cross Line Lasers Based on Light Sectioning Method and Visual Odometry International conference

    #Genki Nagamatsu,@Jun Takamatsu,#Takafumi Iwaguchi,@Diego Thomas,@Hiroshi Kawasaki

    2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  2021.9 

     More details

    Event date: 2021.9 - 2021.10

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Czech Republic  

  • High-frequency Shape Recovery from Shading by CNN and Domain Adaptation International conference

    #Kodai Tokieda,@Takafumi Iwaguchi,@Hiroshi Kawasaki

    2021 IEEE International Conference on Image Processing (ICIP2021)  2021.9 

     More details

    Event date: 2021.9

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:United States  

  • Specular Object Reconstruction behind Frosted Glass by Differentiable Rendering International conference

    Iwaguchi T., Kubo H., Kawasaki H.

    IEEE Winter Conference on Applications of Computer Vision, WACV 2024 

     More details

    Presentation type:Oral presentation (general)  

  • A pose optimization method for wide-area underwater SLAM combining multiple cross-line lasers and a DVL

    #池田貴希, @木原優輝, @Yi Tang, @岩口尭史, @佐藤 啓宏, @Diego Thomas, @川崎洋

    Meeting on Image Recognition and Understanding (MIRU2023)  2023.7 

     More details

    Event date: 2023.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Hamamatsu   Country:Japan  

  • Selective acquisition of subsurface-scattered light according to the distance of an illumination line parallel to the scan line of a spectral camera

    @矢野海結, @岩口尭史, @川崎洋, @久保尋之

    Meeting on Image Recognition and Understanding (MIRU2023)  2023.7 

     More details

    Event date: 2023.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Hamamatsu   Country:Japan  

  • A collaborative work system based on individual presentation of free-viewpoint stereoscopic images using an autostereoscopic display and head tracking

    #武中広大, @岩口尭史, @川崎洋

    Meeting on Image Recognition and Understanding (MIRU2023)  2023.7 

     More details

    Event date: 2023.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Hamamatsu   Country:Japan  

  • Normal estimation by DNN-based photometric stereo with near-field arbitrary light sources

    @岩口尭史, @川崎洋

    Meeting on Image Recognition and Understanding (MIRU2023)  2023.7 

     More details

    Event date: 2023.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Hamamatsu   Country:Japan  

  • Pseudo Random Modulation on Base-coding of I-ToF to Avoid Multi-ToF Interference

    #Wenbin Luo, @Takafumi Iwaguchi, @Hajime Nagahara, @Ryusuke Sagawa, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2023)  2023.7 

     More details

    Event date: 2023.7

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Hamamatsu   Country:Japan  

  • Random Sequence Modulation of Multiple-Gate of Indirect ToF for Handling Multi-ToF-Camera Interference International conference

    #Luo Wenbin, @Takafumi Iwaguchi, @Hiroshi Kawasaki

    5th International Workshop on Image Sensors and Imaging Systems (IWISS)  2022.12 

     More details

    Event date: 2022.12

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Shizuoka   Country:Japan  

  • Shape reconstruction of specular objects behind frosted glass using differentiable rendering

    @Takafumi Iwaguchi, @Hiroyuki Kubo, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2022)  2022.7 

     More details

    Event date: 2022.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Himeji   Country:Japan  

  • An underwater SLAM system with autonomous navigation using self-localization by acoustic sensors and 3D measurement by acoustic sonar

    #木原優輝, #池田貴希, @Tang Yi, @岩口尭史, @Diego Thomas, @川崎洋, @高松淳

    Meeting on Image Recognition and Understanding (MIRU2022)  2022.7 

     More details

    Event date: 2022.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Himeji   Country:Japan  

  • Robust ToF measurement under high frequency noise by direct sequence spectrum spreading

    #Wenbin Luo, @Takafumi Iwaguchi, @Hajime Nagahara, @Ryusuke Sagawa, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2022)  2022.7 

     More details

    Event date: 2022.7

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Himeji   Country:Japan  

  • Wide-area measurement by moving a 3D measurement system with multiple cross-line lasers and an optimization method for camera poses

    #木原優輝, #池田貴希, @川崎洋, @岩口尭史, @Diego Thomas, @高松淳

    Meeting on Image Recognition and Understanding (MIRU2022)  2022.7 

     More details

    Event date: 2022.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  • The effect of human presence on visual impression evaluation of architectural spaces: an evaluation study using VR technology

    #Kaito Toya, @Diego Thomas, @Yasuko Koga, @Shizuo Kaji, @Hiroyuki Ochiai, @Takafumi Iwaguchi, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2022)  2022.7 

     More details

    Event date: 2022.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Himeji   Country:Japan  

  • Optimization of data augmentation using a differentiable renderer for high-frequency shape recovery

    #時枝康大, @岩口尭史, @川崎洋

    IPSJ SIG Computer Vision and Image Media (CVIM), 229th Meeting  2022.3 

     More details

    Event date: 2022.6

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Multi line-lasers ROV for underwater dense 3D shape reconstruction using marker based calibration by DNN

    #Hanbin Wang, #Genki Nagamatsu, @Takafumi Iwaguchi, @Naoki Shirakura, @Jun Takamatsu, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2021)  2021.7 

     More details

    Event date: 2022.6

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Hybrid technique of light sectioning method using multiple-line-lasers and visual SLAM

    #Genki Nagamatsu, #Hanbin Wang, @Takafumi Iwaguchi, @Naoki Shirakura, @Jun Takamatsu, @Hiroshi Kawasaki

    Meeting on Image Recognition and Understanding (MIRU2021)  2021.7 

     More details

    Event date: 2022.6

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Shape reconstruction of specular objects from caustic images by differentiable rendering

    @岩口尭史, @久保尋之, @川崎洋

    IPSJ SIG Computer Vision and Image Media (CVIM), 229th Meeting  2022.3 

     More details

    Event date: 2022.6

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Efficient light transport acquisition by coded illumination and robust photometric stereo by dual photography using deep neural network International conference

    @Takafumi Iwaguchi,@Hiroshi Kawasaki

    3rd ICCV Workshop on Physics Based Vision meets Deep Learning (PBDL2021)  2021.10 

     More details

    Event date: 2021.10

    Language:English   Presentation type:Symposium, workshop panel (public)  

    Venue:Online   Country:Japan  

  • Projection pattern correction by optical-flow-based deep learning for refractive projection mapping

    @岩口尭史,@川崎洋

    Meeting on Image Recognition and Understanding (MIRU2020)  2020.8 

     More details

    Event date: 2021.8

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • A smooth, boundary-preserving normal estimation method for diffusely reflecting surfaces using photometric stereo and reflection filtering by dual photography

    #木原優輝,#岩口尭史,#川崎洋

    Meeting on Image Recognition and Understanding (MIRU2021)  2021.7 

     More details

    Event date: 2021.7 - 2022.7

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Densification of sparse 3D shapes from sparse structured light by learning shading between patterns

    @岩口尭史, #栗田拓弥, #時枝康大, @古川亮, #川崎洋

    Meeting on Image Recognition and Understanding (MIRU2019)  2019.8 

     More details

    Event date: 2020.6

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Osaka   Country:Japan  

  • A method for simultaneously projecting textual information onto multiple planes by pattern optimization

    #平尾勇人, @岩口尭史, @川崎洋

    IPSJ Interaction 2020  2020.3 

     More details

    Event date: 2020.6

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Online   Country:Japan  

  • Densifying sparse shape from sparse structured light measurement by learning shading International conference

    @Takafumi Iwaguchi, #Kodai Tokieda, @Ryo Furukawa, @Hiroshi Kawasaki

    Joint Workshop on Machine Perception and Robotics (MPR2019)  2019.11 

     More details

    Event date: 2020.6

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Kusatsu   Country:Japan  

  • Development of underwater 3D-reconstruction system using cross-lasers-based scanner attached on underwater ROV International conference

    #Hanbin Wang, @Takafumi Iwaguchi, @Naoki Shirakura, @Jun Takamatsu, and @Hiroshi Kawasaki

    26th International Workshop on Frontiers of Computer Vision (IW-FCV2020)  2020.2 

     More details

    Event date: 2020.2

    Language:English   Presentation type:Symposium, workshop panel (public)  

    Venue:Ibusuki   Country:Japan  


Industrial property rights

Patent   Number of applications: 1   Number of registrations: 0
Utility model   Number of applications: 0   Number of registrations: 0
Design   Number of applications: 0   Number of registrations: 0
Trademark   Number of applications: 0   Number of registrations: 0

Professional Memberships

  • Information Processing Society of Japan

  • The Virtual Reality Society of Japan

Academic Activities

  • Young Researchers Program, Executive Committee Member

    The 23rd Meeting on Image Recognition and Understanding (MIRU)  ( Japan ) 2020.8

     More details

    Type:Competition, symposium, etc. 

Research Projects

  • Appearance measurement and reproduction of underwater objects for virtual reality

    2023 - 2025

    JST Strategic Basic Research Program (Ministry of Education, Culture, Sports, Science and Technology)

      More details

    Authorship:Principal investigator  Grant type:Contract research

  • Analysis of subsurface structure and biological information of living tissue based on spatial light transport

    2020.4 - 2023.3

    Kyushu University 

      More details

    Authorship:Principal investigator 

    Qualitative biological information about living tissue, such as the health of human cell tissue or the freshness of fruits and vegetables, is indispensable for pathological diagnosis, shipping management, and similar applications. In the life sciences and agriculture, such information has been acquired with various techniques, including dark-field microscopy, optical coherence tomography, and spectroscopic sugar-content meters. Light transport, i.e., the way light propagates inside an object, changes greatly depending on the structure and optical properties of biological tissue. Although light transport therefore provides a cue for acquiring this information, it varies due to multiple intertwined factors, which makes it difficult to analyze, and it has not been exploited.
    In recent years, machine learning has been actively studied in information science, and it is becoming possible to extract essential information from complex phenomena. This research aims to acquire the subsurface structure and biological information of living tissue by optical methods through a fusion of information science with the life sciences and agriculture.

  • Development of underwater active 3D scanning techniques and analysis on underwater structures

    Grant number:20H00611  2020 - 2024

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (A)

    川崎 洋, 巻 俊宏, 小池 賢太郎, 佐藤 啓宏, 唐 毅, 古川 亮, 佐川 立昌, THOMAS DIEGO, 岩口 尭史, 高松 淳

      More details

    Authorship:Coinvestigator(s)  Grant type:Scientific research funding

    Interest in underwater shape measurement and analysis is growing. In this research, we develop a method for measuring wide-area underwater 3D shapes with high density and high accuracy. Because the proposed method can reconstruct shape from a single image, it does not need to account for complex refraction relationships or appearance differences between multiple images, making it well suited to underwater measurement. Furthermore, since the viewpoint can be moved, the system can be mounted on underwater robots such as ROVs and AUVs, acquiring underwater 3D shapes while moving and registering them, which enables wide-area shape measurement even underwater where GPS is unavailable. The proposed system is expected to find a wide range of applications and real-world uses, such as creating detailed topographic maps of the seafloor and analyzing the deterioration of underwater structures through fixed-point observation.

    CiNii Research

  • Analysis of subsurface structure of biological tissue using spatial light transport

    Grant number:20K19825  2020 - 2022

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Early-Career Scientists

    Iwaguchi Takafumi

      More details

    Authorship:Principal investigator  Grant type:Scientific research funding

    The purpose of this study is to visualize and reconstruct the internal structure of biological tissue, which has been difficult to analyze due to light scattering, using optical measurement methods based on computer vision and machine learning. Through this study, we proposed a method for measuring the internal structure of scattering media by patterned light projection with a projector, together with an algorithm for estimating the structure from observed images. In addition, we proposed a method for generating an efficient dataset for machine learning, which makes it possible to train models that are robust to real-world measurements while using only synthetic data.

    CiNii Research

  • Empirical study on near-direct imaging based on light-transport reconstruction for next-generation super-resolution morphological observation technology

    2019

    Faculty of Information Science and Electrical Engineering, Start-up Support

      More details

    Authorship:Coinvestigator(s)  Grant type:On-campus funds, funds, etc.

Class subject

  • Programming Exercise (P)

    2024.6 - 2024.8   Summer quarter

  • Laboratory of Electrical Engineering and Computer Science I (CM)

    2024.4 - 2024.9   First semester

  • Laboratory of Electrical Engineering and Computer Science II (C)

    2023.10 - 2024.3   Second semester

  • Laboratory of Electrical Engineering and Computer Science II (CM)

    2023.10 - 2024.3   Second semester

  • Programming Exercise (P)

    2023.6 - 2023.8   Summer quarter

  • Laboratory of Electrical Engineering and Computer Science I (CM)

    2023.4 - 2023.9   First semester

  • Laboratory of Electrical Engineering and Computer Science II (C)

    2022.10 - 2023.3   Second semester

  • Programming Exercise (P)

    2022.6 - 2022.8   Summer quarter

  • Laboratory of Electrical Engineering and Computer Science I (C)

    2022.4 - 2022.9   First semester

  • Laboratory of Electrical Engineering and Computer Science II (C)

    2021.10 - 2022.3   Second semester

  • (IUPE)Lab. of Electrical Eng and Computer Science-II(C)

    2021.10 - 2021.12   Fall quarter

  • Programming Exercise (P)

    2021.6 - 2021.8   Summer quarter

  • Programming Exercise

    2020.4 - 2020.9   First semester

  • Software Laboratory

    2019.10 - 2020.3   Second semester


FD Participation

  • 2024.4   Role:Participation   Title:[ISEE FD] What are top 10% papers and top 10% journals: trends and countermeasures

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2024.4   Role:Participation   Title:[ISEE FD] On the DX education being promoted by the Faculty of Agriculture

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2024.4   Role:Participation   Title:[ISEE FD] Research introductions by young faculty members (8)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2024.3   Role:Participation   Title:[ISEE FD] On the initiatives of the Advanced Data Scientist Training Program

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.11   Role:Participation   Title:[ISEE FD] Toward increasing joint research with companies and other organizations

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.10   Role:Participation   Title:[ISEE FD] On the Center for Value-Creating Semiconductor Human Resource Development

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.4   Role:Participation   Title:[ISEE FD] Research introductions by young faculty members (8)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.3   Role:Participation   Title:[ISEE FD] A glimpse of industry-academia collaboration in Germany, the Netherlands, and Taiwan: Industry 4.0, quantum computing, and advanced semiconductors

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.1   Role:Participation   Title:[ISEE FD] Research introductions by young faculty members (7)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.11   Role:Participation   Title:[Engineering / ISEE] Intellectual property seminar for faculty and staff (FD)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.10   Role:Participation   Title:[ISEE FD] Research introductions by young faculty members (6)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.7   Role:Participation   Title:[ISEE FD] Research introductions by young faculty members (5)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.6   Role:Participation   Title:[ISEE FD] On the future of electronic journals and related resources

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.4   Role:Participation   Title:[ISEE FD] On the 4th mid-term goals and mid-term plan

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2022.1   Role:Participation   Title:[ISEE FD] National policy trends in science and technology related to ISEE fields (personal views)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2021.12   Role:Participation   Title:[ISEE FD] The university as seen by faculty members from industry

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2021.10   Role:Participation   Title:[ISEE FD] Toward exchange and collaboration between Kumamoto KOSEN and Kyushu University ISEE: the reality of KOSEN seen over three and a half years

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2021.9   Role:Participation   Title:Toward improving the enrollment rate of the doctoral program

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2021.6   Role:Participation   Title:Research introductions by young faculty members and tips for obtaining KAKENHI grants (1)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2020.10   Role:Participation   Title:Report on the 2020 University Design Workshop

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2020.9   Role:Participation   Title:On the comprehensive selection (AO entrance examination) of the Department of Electrical Engineering and Computer Science

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2019.10   Role:Participation   Title:Briefing on the current status and future trends of electronic journals

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2019.6   Role:Participation   Title:Report on the meeting of deans of information-related graduate schools of eight universities

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2019.4   Role:Participation   Title:FY2019 first university-wide FD (training for newly appointed faculty members)

    Organizer:University-wide
