Updated on 2025/06/30

Information

 


 
NAKASHIMA KAZUTO
 
Organization
Faculty of Information Science and Electrical Engineering Associate Professor
School of Engineering (Concurrent)
Graduate School of Engineering, Department of Mechanical Engineering (Concurrent)
Title
Associate Professor
Contact information
Email address
Homepage

Research Areas

  • Informatics / Intelligent robotics

  • Informatics / Perceptual information processing

Degree

  • Ph.D.

Research History

  • Kyushu University Faculty of Information Science and Electrical Engineering Associate Professor 

    2024.11 - Present

      More details

    Country:Japan

  • Kyushu University Faculty of Information Science and Electrical Engineering Assistant Professor 

    2023.3 - 2024.10

      More details

    Country:Japan

    researchmap

  • Kyushu University Faculty of Information Science and Electrical Engineering Academic Researcher 

    2021.4 - 2023.2

      More details

Research Interests・Research Keywords

  • Research theme: Restoration and domain adaptation on 3D LiDAR data

    Keyword: 3D LiDAR, Deep Learning

    Research period: 2020.4

  • Research theme: Outdoor scene understanding using 3D LiDAR sensors

    Keyword: 3D LiDAR, Deep Learning

    Research period: 2016.10 - 2020.12

  • Research theme: Visual lifelogging for human-robot symbiosis space

    Keyword: human-robot symbiosis space, visual lifelogging, deep learning

    Research period: 2016.4 - 2020.12

Awards

  • System Integration Award for Outstanding Young Researchers

    2024.11   System Integration Division, The Society of Instrument and Control Engineers (SICE)   LiDAR Completion by Resampling with Diffusion Models (29th Robotics Symposia)

    Kazuto Nakashima, Ryo Kurazume

     More details

  • Oral Contribution Award

    2019.11   Joint Workshop on Machine Perception and Robotics (MPR)  

  • Best Poster Presentation Award

    2018.10   Joint Workshop on Machine Perception and Robotics (MPR)  

  • Student Encouragement Award

    2017.8   Meeting on Image Recognition and Understanding (MIRU)  

  • Best Service Robotics Paper Award Finalist

    2017.5   IEEE International Conference on Robotics and Automation (ICRA)  

Papers

  • Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition Reviewed International journal

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE Access   2023.11

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • Lifelogging Caption Generation via Fourth-Person Vision in a Human-Robot Symbiotic Environment Reviewed International journal

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    ROBOMECH Journal   2020.9

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • Virtual IR Sensing for Planetary Rovers: Improved Terrain Classification and Thermal Inertia Estimation Reviewed International journal

    Yumi Iwashita, Kazuto Nakashima, Joseph Gatto, Shoya Higa, Norris Khoo, Ryo Kurazume, Adrian Stoica

    IEEE Robotics and Automation Letters (RA-L)   2020.8

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • Fukuoka Datasets for Place Categorization Reviewed International journal

    Oscar Martinez Mozos, Kazuto Nakashima, Hojung Jung, Yumi Iwashita, Ryo Kurazume

    International Journal of Robotics Research (IJRR)   2019.3

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • Learning Geometric and Photometric Features from Panoramic LiDAR Scans for Outdoor Place Categorization Reviewed International journal

    Kazuto Nakashima, Hojung Jung, Yuki Oto, Yumi Iwashita, Ryo Kurazume, Oscar Martinez Mozos

    Advanced Robotics (AR)   2018.7

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

  • RGB-Based Gait Recognition With Disentangled Gait Feature Swapping.

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    IEEE Access   12   115515 - 115531   2024   ISSN:2169-3536

     More details

    Publishing type:Research paper (scientific journal)   Publisher:IEEE Access  

    Gait recognition enables the non-contact identification of individuals from a distance based on their walking patterns and body shapes. For vision-based gait recognition, covariates (e.g., clothing, baggage and background) can negatively impact identification. As a result, many existing studies extract gait features from silhouettes or skeletal information obtained through preprocessing, rather than directly from RGB image sequences. In contrast to preprocessing which relies on the fitting accuracy of models trained on different tasks, disentangled representation learning (DRL) is drawing attention as a method for directly extracting gait features from RGB image sequences. However, DRL learns to extract features of the target attribute from the differences among multiple inputs with various attributes, which means its separation performance depends on the variation and amount of the training data. In this study, aiming to enhance the variation and quantity of each subject's videos, we propose a novel data augmentation pipeline by feature swapping for RGB-based gait recognition. To expand the variety of training data, features of posture and covariates separated through DRL are paired with features extracted from different individuals, which enables the generation of images of subjects with new attributes. Dynamic gait features are extracted through temporal modeling from pose features of each frame, not only from real images but also from generated ones. The experiments demonstrate that the proposed pipeline increases both the quality of generated images and the identification accuracy. The proposed method also outperforms the RGB-based state-of-the-art method in most settings.

    DOI: 10.1109/ACCESS.2024.3445415

    Web of Science

    Scopus

    researchmap
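
    The disentanglement-and-swapping idea summarized in the entry above can be illustrated with a short sketch. The following is a minimal, hypothetical example (the encoder/decoder, feature sizes, and frame resolution are placeholders, not the published architecture): pose features from one subject are combined with the appearance ("style") features of another to synthesize new training frames.

    ```python
    # Minimal sketch of feature-swapping data augmentation (illustrative only).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.backbone = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.to_pose = nn.Linear(dim, 16)    # gait-dependent factor
            self.to_style = nn.Linear(dim, 16)   # covariate factor (clothing, etc.)
        def forward(self, frame):
            h = self.backbone(frame)
            return self.to_pose(h), self.to_style(h)

    decoder = nn.Sequential(nn.Linear(32, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))
    encoder = Encoder()

    frame_a = torch.rand(1, 3, 64, 64)           # subject A, e.g. carrying a bag
    frame_b = torch.rand(1, 3, 64, 64)           # subject B, e.g. wearing a coat
    pose_a, _style_a = encoder(frame_a)
    _pose_b, style_b = encoder(frame_b)

    # synthesized frame: subject A's posture rendered with subject B's appearance
    augmented = decoder(torch.cat([pose_a, style_b], dim=1))
    print(augmented.shape)                        # torch.Size([1, 3, 64, 64])
    ```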

  • Development of a Retrofit Backhoe Teleoperation System Using Cat Command.

    Koshi Shibata, Yuki Nishiura, Yusuke Tamaishi, Kohei Matsumoto, Kazuto Nakashima, Ryo Kurazume

    SII   1486 - 1491   2024   ISBN:9798350312072

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Most existing retrofit remote-control systems for backhoes are large, hard-to-install, and expensive. Therefore, we propose an easy-to-install and inexpensive teleoperation system. The proposed system comprised remote-control and sensing systems. The remote-control system retrofits robot arm-based devices to 'Cat Command', a compact embedded teleoperation system with a limited communication range, and controls these devices via a 5G commercial network to realize control from a remote office. Because this system does not require any additional modifications to the embedded control unit in the cockpit, the operator can continue working in the cockpit even if the backhoe is remotely controlled. The system enables the remote control of various devices from an extremely long distance by changing the joint parts between the robot arm and the embedded remote-control device. The sensing system estimates the posture and position of the backhoe by attaching original sensing devices to the backhoe. In addition, a 360 camera was installed in the cockpit to transmit work images from the construction site to a remote office in real time. The sensing device was smaller and lighter than conventional devices. We confirmed that the proposed system can be used to operate a construction site backhoe from a remote office, and that the system can be used to excavate soil using an actual backhoe.

    DOI: 10.1109/SII58957.2024.10417625

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/sii/sii2024.html#ShibataNTMNK24

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models.

    Kazuto Nakashima, Ryo Kurazume

    ICRA   14724 - 14731   2024   ISSN:10504729 ISBN:9798350384574 eISSN:2577-087X

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. While existing approaches have demonstrated the feasibility of image-based LiDAR data generation using deep generative models, they still struggle with fidelity and training stability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks in recent years. To effectively train DDPMs in the LiDAR domain, we first conduct an in-depth analysis of data representation, loss functions, and spatial inductive biases. Leveraging our R2DM model, we also introduce a flexible LiDAR completion pipeline based on the powerful capabilities of DDPMs. We demonstrate that our method surpasses existing methods in generating tasks on the KITTI-360 and KITTI-Raw datasets, as well as in the completion task on the KITTI-360 dataset. Our project page can be found at https://kazuto1011.github.io/r2dm.

    DOI: 10.1109/ICRA57147.2024.10611480

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/icra/icra2024.html#NakashimaK24
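
    As a rough illustration of the denoising diffusion training described in the R2DM entry above, the sketch below shows a generic epsilon-prediction DDPM step on LiDAR scans treated as two-channel (range and reflectance) images. The tiny network, noise schedule, and tensor shapes are assumptions for illustration, not the published model.

    ```python
    # Minimal sketch of DDPM-style training on LiDAR range/reflectance images.
    import torch
    import torch.nn as nn

    T = 1000
    betas = torch.linspace(1e-4, 2e-2, T)                 # toy noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)

    class TinyDenoiser(nn.Module):
        """Stand-in for the U-Net usually used by diffusion models."""
        def __init__(self, ch=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(ch + 1, 32, 3, padding=1), nn.SiLU(),
                nn.Conv2d(32, ch, 3, padding=1),
            )
        def forward(self, x_t, t):
            # broadcast the normalized timestep as an extra channel
            t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
            return self.net(torch.cat([x_t, t_map], dim=1))

    def train_step(model, x0, optimizer):
        """Predict the noise added to x0 at a random timestep (epsilon-prediction loss)."""
        b = x0.shape[0]
        t = torch.randint(0, T, (b,))
        noise = torch.randn_like(x0)
        a = alphas_bar[t].view(-1, 1, 1, 1)
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise       # forward diffusion q(x_t | x_0)
        loss = ((model(x_t, t) - noise) ** 2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()

    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    fake_scans = torch.rand(4, 2, 64, 1024)                # placeholder range+intensity batch
    print(train_step(model, fake_scans, opt))
    ```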

  • Fast LiDAR Data Generation with Rectified Flows.

    Kazuto Nakashima, Xiaowen Liu, Tomoya Miyawaki, Yumi Iwashita, Ryo Kurazume

    CoRR   abs/2412.02241   2024

     More details

    Building LiDAR generative models holds promise as powerful data priors for
    restoration, scene manipulation, and scalable simulation in autonomous mobile
    robots. In recent years, approaches using diffusion models have emerged,
    significantly improving training stability and generation quality. Despite
    their success, diffusion models require numerous iterations of running neural
    networks to generate high-quality samples, making the increasing computational
    cost a potential barrier for robotics applications. To address this challenge,
    this paper presents R2Flow, a fast and high-fidelity generative model for LiDAR
    data. Our method is based on rectified flows that learn straight trajectories,
    simulating data generation with significantly fewer sampling steps compared to
    diffusion models. We also propose an efficient Transformer-based model
    architecture for processing the image representation of LiDAR range and
    reflectance measurements. Our experiments on unconditional LiDAR data
    generation using the KITTI-360 dataset demonstrate the effectiveness of our
    approach in terms of both efficiency and quality.

    DOI: 10.48550/arXiv.2412.02241

    arXiv

    researchmap
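
    The rectified-flow formulation mentioned in the entry above can be summarized in a few lines: a velocity network is trained on straight interpolations between noise and data, so sampling reduces to a handful of Euler steps. The sketch below is illustrative only; the toy dimensionality and the linear velocity network are assumptions, not the R2Flow architecture.

    ```python
    # Minimal sketch of rectified-flow training and few-step sampling.
    import torch

    D = 256  # toy dimensionality standing in for a flattened LiDAR range image

    def rectified_flow_loss(v_net, x1):
        """x1: data batch of shape (B, D)."""
        x0 = torch.randn_like(x1)                 # noise endpoint
        t = torch.rand(x1.shape[0], 1)            # uniform time in [0, 1]
        x_t = (1.0 - t) * x0 + t * x1             # straight-line interpolation
        target = x1 - x0                          # constant velocity along the line
        return ((v_net(x_t, t) - target) ** 2).mean()

    @torch.no_grad()
    def sample(v_net, batch, steps=4):
        """Few-step Euler integration from noise toward data."""
        x = torch.randn(batch, D)
        for i in range(steps):
            t = torch.full((batch, 1), i / steps)
            x = x + v_net(x, t) / steps
        return x

    linear = torch.nn.Linear(D + 1, D)            # toy velocity network v(x_t, t)
    v_net = lambda x, t: linear(torch.cat([x, t], dim=1))

    print(rectified_flow_loss(v_net, torch.rand(2, D)).item())
    print(sample(v_net, batch=2, steps=4).shape)  # generated in only 4 steps
    ```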

  • Analysis of Force Applied to Horizontal and Vertical Handrails with Impaired Motor Function.

    Ryoya Kihara, Qi An, Kensuke Takita, Shu Ishiguro, Kazuto Nakashima, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration(SII)   1 - 6   2023   ISSN:2474-2317 ISBN:979-8-3503-9868-7

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    People depend on medical equipment to support their movements when their motor function declines. Our previous study developed a method to estimate motor function from the force applied to a vertical handrail while standing. However, the effect of the handrail direction on movement remains unclear. Additionally, the force applied to the handrail and floor reaction forces on the buttocks and feet may also change with a decline in motor function. Here, this study constructed a system with force plates and handles in both the horizontal and vertical directions to measure the forces applied to the handrails, buttocks, and feet. Furthermore, the change in accuracy of the estimation of motor function, depending on the direction of the handrails and input information, was investigated. In the experiment, healthy participants stood up using a handrail with unrestricted movement and while wearing elderly experience kits that artificially impaired their motor function. The results showed that people exert more downward force on horizontal handrails than on vertical handrails. However, people rely on the vertical handrail for a longer period of time to stabilize anterior-posterior movement. These results indicate that different directions of handrails cause different strategies of the standing-up motion. Additionally, the accuracy of the estimation of motor function improved when the horizontal handrail was used rather than the vertical handrail. This suggests that the classification accuracy could be improved by using different handrail directions, depending on the subject's condition and standing-up motion.

    DOI: 10.1109/SII55687.2023.10039452

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/sii/sii2023.html#KiharaATINK23

  • Evaluation of ground stiffness using multiple accelerometers on the ground during compaction by vibratory rollers

    Tamaishi Y., Fukuda K., Nakashima K., Maeda R., Matsumoto K., Kurazume R.

    Proceedings of the International Symposium on Automation and Robotics in Construction   262 - 269   2023   ISBN:9780645832204

     More details

    Publisher:Proceedings of the International Symposium on Automation and Robotics in Construction  

    Soil compaction is one of the most important basic elements in construction work because it directly affects the quality of structures. Compaction work using vibratory rollers is generally applied to strengthen ground stiffness, and the method that focuses on the number of compaction cycles is widely used to manage the ground stiffness by vibratory rollers. In contrast to this method, the continuous compaction control (CCC) using accelerometers installed on the vibratory rollers has been proposed as a quantitative evaluation method more suited to actual ground conditions. This method quantifies the distortion rate of the acceleration waveform of the vibratory roller. However, this method based on acceleration response has problems in measurement discrimination accuracy and sensor durability because the accelerometer is installed on the vibration roller, which is the source of vibration. In this paper, we propose a new ground stiffness evaluation method using multiple accelerometers installed on the ground surface. The proposed method measures the acceleration response during compaction work by vibratory rollers using multiple accelerometers on the ground surface. Experiments show the proposed method has a higher discrimination than the conventional methods.

    DOI: 10.22260/ISARC2023/0037

    Scopus
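
    The entry above refers to quantifying the distortion of the acceleration waveform measured during compaction. As a hedged illustration, the sketch below computes a common continuous compaction control (CCC)-style index, the ratio of the second-harmonic amplitude to the fundamental; the exact metric and parameters used in the paper are not reproduced here.

    ```python
    # Illustrative harmonic-distortion index from an acceleration signal.
    import numpy as np

    def distortion_ratio(accel, fs, f0):
        """accel: acceleration samples, fs: sampling rate [Hz], f0: roller excitation [Hz]."""
        spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
        freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
        amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
        # ratio of the second harmonic to the fundamental; tends to rise as the ground stiffens
        return amp(2 * f0) / amp(f0)

    fs, f0, t = 2000, 30.0, np.arange(0, 2, 1 / 2000)
    signal = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
    print(round(distortion_ratio(signal, fs, f0), 3))   # ~0.2 for this synthetic signal
    ```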

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data.

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    IEEE/CVF Winter Conference on Applications of Computer Vision(WACV)   1256 - 1266   2023   ISSN:2472-6737 ISBN:978-1-6654-9346-8

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled the issue by learning inter-domain mapping, while the transferability is constrained by the training configuration and the training is susceptible to peculiar lossy noises called ray-drop. To address the issue, this paper proposes a generative model of LiDAR range images applicable to the data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial networks along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with the point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation. We demonstrate that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.

    DOI: 10.1109/WACV56688.2023.00131

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/wacv/wacv2023.html#NakashimaIK23
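
    One component described in the entry above is a differentiable ray-drop effect. The sketch below shows one generic way such an effect can be made differentiable, using a Gumbel-sigmoid relaxation with a straight-through estimator; the function names and the relaxation choice are illustrative assumptions, not the published implementation.

    ```python
    # Minimal sketch of a differentiable "ray-drop" mask (illustrative only).
    import torch

    def apply_ray_drop(range_image, drop_logits, tau=0.5):
        """range_image, drop_logits: (B, 1, H, W). Returns masked range image and mask."""
        # Gumbel-sigmoid relaxation of a Bernoulli "ray is measured" mask
        u = torch.rand_like(drop_logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(u) - torch.log(1 - u)
        soft_mask = torch.sigmoid((drop_logits + gumbel) / tau)
        # straight-through: hard 0/1 mask in the forward pass, soft gradients backward
        hard_mask = (soft_mask > 0.5).float()
        mask = hard_mask + soft_mask - soft_mask.detach()
        return range_image * mask, mask

    x = torch.rand(2, 1, 64, 1024)            # synthetic clean range image
    logits = torch.zeros_like(x, requires_grad=True)
    masked, mask = apply_ray_drop(x, logits)
    masked.sum().backward()                   # gradients reach the drop logits
    print(mask.mean().item())
    ```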

  • Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition.

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE Access   11   129749 - 129762   2023   ISSN:2169-3536

     More details

    Publishing type:Research paper (scientific journal)   Publisher:IEEE Access  

    Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation of interests. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at a range of over 100 m in real time, have attracted significant attention in the field of autonomous mobile robots and self-driving vehicles. Compared with general cameras, LiDAR sensors have rarely been used for biometrics due to the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs a two-scale spatial resolution to enhance immunity to varying point cloud densities. In addition, this method learns the gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view, cross-distance challenges, and practical scenarios.

    DOI: 10.1109/ACCESS.2023.3333037

    Web of Science

    Scopus

    researchmap
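
    The entry above mentions generating invariant viewpoints by estimating the walking direction. The sketch below illustrates the general idea with a simple heading estimate from the centroid trajectory followed by a yaw rotation of the pedestrian point cloud; the centroid-based heuristic and axis conventions are assumptions, not the published pipeline.

    ```python
    # Minimal sketch of viewpoint normalization via walking-direction estimation.
    import numpy as np

    def normalize_viewpoint(frames):
        """frames: list of (N_i, 3) pedestrian point clouds over time (x, y, z in meters)."""
        centroids = np.array([f[:, :2].mean(axis=0) for f in frames])
        direction = centroids[-1] - centroids[0]              # coarse walking direction
        yaw = np.arctan2(direction[1], direction[0])
        c, s = np.cos(-yaw), np.sin(-yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        # after rotation the subject walks toward +x: the back view looks along +x,
        # the left-side view looks along +y
        return [(f - f.mean(axis=0)) @ R.T for f in frames]

    frames = [np.random.rand(100, 3) + np.array([0.1 * t, 0.05 * t, 0.0]) for t in range(10)]
    print(normalize_viewpoint(frames)[0].shape)
    ```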

  • Development of Distributed Sensor Pods for Evaluation of Compacted Ground

    FUKUDA Kentaro, NAKASHIMA Kazuto, KURAZUME Ryo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2022 ( 0 )   1A1-E04   2022   eISSN:24243124

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    <p>In this study, we develop a sensor terminal with multiple and various sensors named sensor pod, which collects various environmental information at a construction site. The sensor pod is equipped with a 3D-LiDAR and a vibration sensor, which can be used to predict the surrounding hazards and evaluate the ground stiffness. In this paper, we introduce a method of evaluating ground stiffness using the waveform distortion of multi-point synchronized vibration data obtained by the distributed sensor pods.</p>

    DOI: 10.1299/jsmermd.2022.1a1-e04

    CiNii Research

  • 2V-Gait: Gait Recognition using 3D LiDAR Robust to Changes in Walking Direction and Measurement Distance.

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration(SII)   602 - 607   2022   ISSN:2474-2317 ISBN:978-1-6654-4540-5

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Gait recognition, which is a biometric identifier for individual walking patterns, is utilized in many applications, such as criminal investigation and identification systems, because it can be applied at a long distance and requires no explicit cooperation of the subjects. In general, cameras are used for gait recognition, and several methods in previous studies have used depth information captured by RGB-D cameras. However, RGB-D cameras are limited in terms of their measurement distance and are difficult to access outdoors. In recent years, real-time multi-layer 3D LiDAR, which can obtain 3D range images of a target at ranges of over 100 m, has attracted significant attention for use in autonomous mobile robots, serving as eyes for obstacles detection and navigation. Compared with cameras, such 3D LiDAR has rarely been used for biometrics owing to its low spatial resolution. However, considering the unique characteristics of 3D LiDAR, such as the robustness of the illumination conditions, long measurement distances, and wide-angle scanning, the approach has the potential to be applied outdoors as a biometric identifier. The present paper describes a gait recognition system, called 2V-Gait, which is robust to variations in the walking direction of a subject and the distance measured from the 3D LiDAR. To improve the performance of gait recognition, we leverage the unique characteristics of 3D LiDAR, which are not included in regular cameras. Extensive experiments on our dataset show the effectiveness of the proposed approach.

    DOI: 10.1109/SII52469.2022.9708899

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/sii/sii2022.html#AhnNYIK22

  • Gait Recognition using Identity-Aware Adversarial Data Augmentation.

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration(SII)   596 - 601   2022   ISSN:2474-2317 ISBN:978-1-6654-4540-5

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Gait recognition is a non-contact person identification method that utilizes cameras installed at a distance. However, gait images contain person-agnostic elements (covariates) such as clothing, and the removal of covariates is important for identification with high performance. Disentanglement representation learning, which separates gait-dependent information such as posture from covariates by unsupervised learning, has been attracting attention as a method to remove covariates. However, because the amount of gait data is negligible compared to other computer vision tasks, such as image recognition, the separation performance of existing methods is insufficient. In this study, we propose a gait recognition method to improve the separation performance, which augments the training data by adversarial generation based on identity features, separated by disentanglement representation learning. The proposed method first separates gait-dependent features (pose features) and appearance-related covariate features (style features) from gait videos based on disentanglement representation learning. Then, synthesized gait images are generated by exchanging pose features between gait images of the person under different walking conditions, followed by adding them to the training data. The experiments indicate that our method can improve the separation performance, and generate high-quality gait images that are effective for data augmentation.

    DOI: 10.1109/SII52469.2022.9708776

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/sii/sii2022.html#YoshinoNAIK22

  • Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors.

    Qi An, Akito Tanaka, Kazuto Nakashima, Hidenobu Sumioka, Masahiro Shiomi, Ryo Kurazume

    IEEE International Conference on Systems, Man, and Cybernetics(SMC)   2022-October   1874 - 1879   2022   ISSN:1062922X ISBN:9781665452588

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Assisting patients with dementia is a significant social issue. Currently, to assist patients with dementia, a multimodal care technique called Humanitude is gaining popularity. In Humanitude, the patients are assisted through various techniques to stand up independently by utilizing their motor functions as much as possible. Humanitude care techniques encourage caregivers to increase the area of contact with patients during the sit-to-stand motion. However, Humanitude care techniques are not accurately performed by novice caregivers. Therefore, in this study, a smock-type wearable sensor was developed to measure the proximity between caregivers and care recipients during sit-to-stand motion assistance. A measurement experiment was conducted to evaluate the proximity differences between Humanitude care and simulated novice care. In addition, the effects of different care techniques on the center of mass (CoM) trajectory and muscle activity of the care recipients were investigated. The results showed that the caregivers tend to bring their top and middle trunk closer in Humanitude care compared with novice simulated care. Furthermore, it was observed that the CoM trajectory and muscle activity under Humanitude care were similar to those observed when the care recipient stands up independently. These results validate the effectiveness of Humanitude care and provide useful information for teaching techniques in Humanitude.

    DOI: 10.1109/SMC53654.2022.9945156

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/smc/smc2022.html#AnTNSSK22

  • Development of Retrofit Type Backhoe Remote Control System

    NISHIURA Yuki, NAKASHIMA Kazuto, KURAZUME Ryo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2022 ( 0 )   1P1-C07   2022   eISSN:24243124

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    <p>This paper presents a retrofit backhoe remote control system that is inexpensive, compact, and easy to install. The system consists of a remote control system using a teleoperation system embedded by a construction machinery manufacturer and a small robot arm, and a remote sensing system using a multi-core microcomputer.</p>

    DOI: 10.1299/jsmermd.2022.1p1-c07

    CiNii Research

Presentations

  • Learning to Drop Points for LiDAR Scan Synthesis International conference

    Kazuto Nakashima, Ryo Kurazume

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  2021.9 

     More details

    Language:English  

    Country:Czech Republic  

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data International conference

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)  2023.1 

     More details

    Language:English  

    Country:United States  

    Repository Public URL: https://hdl.handle.net/2324/7232999

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models International conference

    Kazuto Nakashima, Ryo Kurazume

    IEEE International Conference on Robotics and Automation (ICRA)  2024.5 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Japan  

  • Fourth-Person Sensing for Pro-active Services International conference

    Yumi Iwashita, Kazuto Nakashima, Yoonseok Pyo, Ryo Kurazume

    International Conference on Emerging Security Technologies (EST)  2014.9 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Spain  

  • Fourth-Person Sensing for a Service Robot International conference

    Kazuto Nakashima, Yumi Iwashita, Pyo Yoonseok, Asamichi Takamine, Ryo Kurazume

    IEEE Sensors  2015.11 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Korea, Republic of  

  • Automatic Houseware Registration System for Informationally-Structured Environment International conference

    Kazuto Nakashima, Julien Girard, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  2016.12 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Japan  

  • Feasibility Study of IoRT Platform "Big Sensor Box" International conference

    Ryo Kurazume, Yoonseok Pyo, Kazuto Nakashima, Akihiro Kawamura, Tokuo Tsuji

    IEEE International Conference on Robotics and Automation (ICRA)  2017.5 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Singapore  

  • Previewed Reality: Near-Future Perception System International conference

    Yuta Horikawa, Asuka Egashira, Kazuto Nakashima, Akihiro Kawamura, Ryo Kurazume

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  2017.9 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Canada  

  • Recognizing Outdoor Scenes by Convolutional Features of Omni-Directional LiDAR Scans International conference

    Kazuto Nakashima, Seungwoo Nham, Hojung Jung, Yumi Iwashita, Ryo Kurazume, Oscar M Mozos

    IEEE/SICE International Symposium on System Integration (SII)  2017.12 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Taiwan, Province of China  

  • Virtual Sensors Determined Through Machine Learning International conference

    Yumi Iwashita, Adrian Stoica, Kazuto Nakashima, Ryo Kurazume, Jim Torresen

    World Automation Congress (WAC)  2018.6 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:United States  

  • Fourth-person Captioning: Describing Daily Events by Uni-supervised and Tri-regularized Training International conference

    Kazuto Nakashima, Yumi Iwashita, Akihiro Kawamura, Ryo Kurazume

    IEEE International Conference on Systems, Man, and Cybernetics (SMC)  2018.10 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Japan  

  • TU-Net and TDeepLab: Deep Learning-based Terrain Classification Robust to Illumination Changes, Combining Visible and Thermal Imagery International conference

    Yumi Iwashita, Kazuto Nakashima, Adrian Stoica, Ryo Kurazume

    IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)  2019.3 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:United States  

  • MU-Net: Deep Learning-Based Thermal IR Image Estimation From RGB Image International conference

    Yumi Iwashita, Kazuto Nakashima, Sir Rafol, Adrian Stoica, Ryo Kurazume

    IEEE/CVF Computer Vision and Pattern Recognition Conference Workshops (CVPRW)  2019.6 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:United States  

  • 2V-Gait: Gait Recognition Using 3D LiDAR Robust to Changes in Walking Direction and Measurement Distance International conference

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  2022.1 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Norway  

  • Gait Recognition using Identity-Aware Adversarial Data Augmentation International conference

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  2022.1 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Norway  

  • Understanding Humanitude Care for Sit-To-Stand Motion by Wearable Sensors International conference

    Qi An, Akito Tanaka, Kazuto Nakashima, Hidenobu Sumioka, Masahiro Shiomi, Ryo Kurazume

    IEEE International Conference on Systems, Man, and Cybernetics (SMC)  2022.10 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Czech Republic  

  • Analysis of Force Applied to Horizontal and Vertical Handrails with Impaired Motor Function International conference

    Ryoya Kihara, Qi An, Kensuke Takita, Shu Ishiguro, Kazuto Nakashima, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  2023.1 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:United States  

  • Evaluation of Ground Stiffness using Multiple Accelerometers on the Ground during Compaction by Vibratory Rollers International conference

    Yusuke Tamaishi, Kentaro Fukuda, Kazuto Nakashima, Ryuichi Maeda, Kohei Matsumoto, Ryo Kurazume

    International Symposium on Automation and Robotics in Construction (ISARC)  2023.7 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:India  

  • ROS2-TMS for Construction: CPS platform for earthwork sites International conference

    Ryuichi Maeda, Kohei Matsumoto, Tomoya Kouno, Tomoya Itsuka, Kazuto Nakashima, Yusuke Tamaishi, Ryo Kurazume

    International Symposium on Artificial Life and Robotics (AROB)  2024.1 

     More details

    Language:English   Presentation type:Oral presentation (invited, special)  

    Country:Japan  

  • Development of a Retrofit Backhoe Teleoperation System using Cat Command International conference

    Koshi Shibata, Yuki Nishiura, Yusuke Tamaishi, Kohei Matsumoto, Kazuto Nakashima, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  2024.1 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    Country:Viet Nam  

MISC

  • Fast LiDAR Upsampling using Conditional Diffusion Models

    Sander Elias Magnussen Helgesen, Kazuto Nakashima, Jim Tørresen, Ryo Kurazume

    CoRR   abs/2405.04889   2024.5

     More details

    The search for refining 3D LiDAR data has attracted growing interest
    motivated by recent techniques such as supervised learning or generative
    model-based methods. Existing approaches have shown the possibilities for using
    diffusion models to generate refined LiDAR data with high fidelity, although
    the performance and speed of such methods have been limited. These limitations
    make it difficult to execute in real-time, causing the approaches to struggle
    in real-world tasks such as autonomous navigation and human-robot interaction.
    In this work, we introduce a novel approach based on conditional diffusion
    models for fast and high-quality sparse-to-dense upsampling of 3D scene point
    clouds through an image representation. Our method employs denoising diffusion
    probabilistic models trained with conditional inpainting masks, which have been
    shown to give high performance on image completion tasks. We introduce a series
    of experiments, including multiple datasets, sampling steps, and conditional
    masks. This paper illustrates that our method outperforms the baselines in
    sampling speed and quality on upsampling tasks using the KITTI-360 dataset.
    Furthermore, we illustrate the generalization ability of our approach by
    simultaneously training on real-world and synthetic datasets, introducing
    variance in quality and environments.

    DOI: 10.48550/arXiv.2405.04889

    arXiv

    researchmap

    Other Link: http://arxiv.org/pdf/2405.04889v2
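
    The conditional-inpainting idea described in the entry above can be sketched as follows: during reverse diffusion, the measured (sparse) beam rows are re-noised to the current timestep and pasted over the model's estimate, while the missing rows are filled in. This is an illustrative, RePaint-style sketch under assumed shapes and schedules, not the paper's exact sampler.

    ```python
    # Minimal sketch of diffusion-based inpainting for sparse-to-dense upsampling.
    import torch

    def make_row_mask(h=64, w=1024, keep_every=4):
        """1 where a beam row was actually measured (e.g., a 16-beam subset of 64)."""
        mask = torch.zeros(1, 1, h, w)
        mask[:, :, ::keep_every, :] = 1.0
        return mask

    @torch.no_grad()
    def inpaint_step(x_t, x_obs, mask, denoise_fn, t, alpha_bar_t):
        """One reverse step that respects the observed rows."""
        x_prev = denoise_fn(x_t, t)                            # model's denoised proposal
        noise = torch.randn_like(x_obs)
        x_obs_t = alpha_bar_t.sqrt() * x_obs + (1 - alpha_bar_t).sqrt() * noise
        return mask * x_obs_t + (1 - mask) * x_prev            # keep known rows, fill the rest

    # toy usage with an identity "denoiser"
    mask = make_row_mask()
    x_obs = torch.rand(1, 1, 64, 1024) * mask                  # sparse observation
    x_t = torch.randn_like(x_obs)
    out = inpaint_step(x_t, x_obs, mask, lambda x, t: x, t=10, alpha_bar_t=torch.tensor(0.5))
    print(out.shape)
    ```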

  • Gait Sequence Upsampling using Diffusion Models for Single LiDAR Sensors.

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    CoRR   abs/2410.08680   2024

     More details

    Recently, 3D LiDAR has emerged as a promising technique in the field of
    gait-based person identification, serving as an alternative to traditional RGB
    cameras, due to its robustness under varying lighting conditions and its
    ability to capture 3D geometric information. However, long capture distances or
    the use of low-cost LiDAR sensors often result in sparse human point clouds,
    leading to a decline in identification performance. To address these
    challenges, we propose a sparse-to-dense upsampling model for pedestrian point
    clouds in LiDAR-based gait recognition, named LidarGSU, which is designed to
    improve the generalization capability of existing identification models. Our
    method utilizes diffusion probabilistic models (DPMs), which have shown high
    fidelity in generative tasks such as image completion. In this work, we
    leverage DPMs on sparse sequential pedestrian point clouds as conditional masks
    in a video-to-video translation approach, applied in an inpainting manner. We
    conducted extensive experiments on the SUSTeck1K dataset to evaluate the
    generative quality and recognition performance of the proposed method.
    Furthermore, we demonstrate the applicability of our upsampling model using a
    real-world dataset, captured with a low-resolution sensor across varying
    measurement distances.

    DOI: 10.48550/arXiv.2410.08680

    arXiv

    researchmap

  • Sim2Real LiDAR Segmentation with Synthetic Raydrop Noise

    Tomoya Miyawaki, Kazuto Nakashima, Xiaowen Liu, Yumi Iwashita, Ryo Kurazume

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) (CD-ROM)   2024   2024   ISSN:2424-3124

  • Accelerating Sampling of LiDAR Data Generative Models using Conditional Flow Matching

    Kazuto Nakashima, Xiaowen Liu, Tomoya Miyawaki, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   42nd   2024

  • LiDAR Completion by Resampling with Diffusion Models

    Kazuto Nakashima, Ryo Kurazume

    Proceedings of Robotics Symposia   29th (CD-ROM)   2024   ISSN:1881-7300

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models.

    Kazuto Nakashima, Ryo Kurazume

    CoRR   abs/2309.09256   2023.9

     More details

    Generative modeling of 3D LiDAR data is an emerging task with promising
    applications for autonomous mobile robots, such as scalable simulation, scene
    manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing
    approaches have shown the feasibility of image-based LiDAR data generation
    using deep generative models while still struggling with the fidelity of
    generated data and training instability. In this work, we present R2DM, a novel
    generative model for LiDAR data that can generate diverse and high-fidelity 3D
    scene point clouds based on the image representation of range and reflectance
    intensity. Our method is based on the denoising diffusion probabilistic models
    (DDPMs), which have demonstrated impressive results among generative model
    frameworks and have been significantly progressing in recent years. To
    effectively train DDPMs on the LiDAR domain, we first conduct an in-depth
    analysis regarding data representation, training objective, and spatial
    inductive bias. Based on our designed model R2DM, we also introduce a flexible
    LiDAR completion pipeline using the powerful properties of DDPMs. We
    demonstrate that our method outperforms the baselines on the generation task of
    KITTI-360 and KITTI-Raw datasets and the upsampling task of KITTI-360 datasets.
    Our code and pre-trained weights will be available at
    https://github.com/kazuto1011/r2dm.

    DOI: 10.48550/arXiv.2309.09256

    arXiv

    researchmap

    Other Link: http://arxiv.org/pdf/2309.09256v1

  • Development of Distributed Sensor Pods for Evaluation of Ground Stiffness and Safety Management at Civil Engineering Fields

    Kentaro Fukuda, Kazuto Nakashima, Yusuke Tamaishi, Ryuichi Maeda, Kohei Matsumoto, Ryo Kurazume

    Proceedings of Robotics Symposia   28th   2023   ISSN:1881-7300

  • Development of Deep Generative Models for LiDAR Range, Reflectance, and Raydrop Distributions

    Xiaowen Liu, Kazuto Nakashima, Tomoya Miyawaki, Yumi Iwashita, Ryo Kurazume

    Proceedings of the SICE System Integration Division Annual Conference (CD-ROM)   24th   2023

  • ROS2-TMS for Construction: CPS platform for earthwork sites-Experiment of CPS visualization using 360-degree camera stream-

    前田龍一, 高野智也, 松本耕平, 中嶋一斗, 倉爪亮

    Proceedings of the SICE System Integration Division Annual Conference (CD-ROM)   24th   2023

  • Evaluating physical fitness measurement data of elderly people using force applied to the handrail, buttock, and feet during sit-to-stand motion

    木原諒也, AN Qi, 滝田謙介, 石黒周, 中山和洋, 三好敢太, 中嶋一斗, 倉爪亮

    Proceedings of the SICE System Integration Division Annual Conference (CD-ROM)   24th   2023

  • A Study of LiDAR Sim2Real by Reproducing Ray-Drop Probabilities

    Tomoya Miyawaki, Kazuto Nakashima, Xiaowen Liu, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   41st   2023

  • Development of Petit-Sensor Pods for Monitoring Civil Engineering Sites

    高野智也, 松本耕平, 中嶋一斗, 倉爪亮

    Proceedings of the SICE System Integration Division Annual Conference (CD-ROM)   24th   2023

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data.

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    CoRR   abs/2210.11750   2022.10

     More details

    3D LiDAR sensors are indispensable for the robust vision of autonomous mobile
    robots. However, deploying LiDAR-based perception algorithms often fails due to
    a domain gap from the training environment, such as inconsistent angular
    resolution and missing properties. Existing studies have tackled the issue by
    learning inter-domain mapping, while the transferability is constrained by the
    training configuration and the training is susceptible to peculiar lossy noises
    called ray-drop. To address the issue, this paper proposes a generative model
    of LiDAR range images applicable to the data-level domain transfer. Motivated
    by the fact that LiDAR measurement is based on point-by-point range imaging, we
    train an implicit image representation-based generative adversarial networks
    along with a differentiable ray-drop effect. We demonstrate the fidelity and
    diversity of our model in comparison with the point-based and image-based
    state-of-the-art generative models. We also showcase upsampling and restoration
    applications. Furthermore, we introduce a Sim2Real application for LiDAR
    semantic segmentation. We demonstrate that our method is effective as a
    realistic ray-drop simulator and outperforms state-of-the-art methods.

    DOI: 10.48550/arXiv.2210.11750

    arXiv

    researchmap

    Other Link: http://arxiv.org/pdf/2210.11750v1

  • Deep Generative Modeling of 3D LiDAR Data with Implicit Representation

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    Proceedings of Robotics Symposia   27th   2022   ISSN:1881-7300

  • Development of Distributed Sensor Pods for Evaluation of Compacted Ground-Quantification of Ground Stiffness Based on Waveform Distortion of Multipoint Synchronous Vibration Data-

    Kentaro Fukuda, Kazuto Nakashima, Ryo Kurazume

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) (CD-ROM)   2022   2022   ISSN:2424-3124

  • Robustness Evaluation of Gait Recognition against Measurement Distance and Walking Direction using Point Cloud Projection Methods of a 3D LiDAR Sensor

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   40th   2022

  • Development of Retrofit Type Backhoe Remote Control System

    Yuki Nishiura, Kazuto Nakashima, Ryo Kurazume

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) (CD-ROM)   2022   2022   ISSN:2424-3124

  • A Method for Evaluating Ground Stiffness Based on Waveform Distortion of Multipoint Synchronous Vibration Data

    Kentaro Fukuda, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the Construction Robot Symposium (CD-ROM)   20th   2022

  • Gait Recognition using Feature Swapping between Different Persons to Improve Gait Feature Extraction Accuracy

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   40th   2022

  • Evaluation of the "Touching" Skill in Humanitude Care and Elucidation of Changes in Care Recipients' Emotions

    安積諒馬, AN Qi, 中嶋一斗, 倉爪亮

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   40th   2022

Professional Memberships

  • Information Processing Society of Japan (IPSJ)

    2025.6 - Present

      More details

  • The Society of Instrument and Control Engineers (SICE)

    2024.10 - Present

      More details

  • IEEE Robotics and Automation Society (RAS)

    2024.2 - Present

      More details

  • The Robotics Society of Japan (RSJ)

    2017.2 - Present

      More details

  • IEEE

    2016.10 - Present

      More details

Committee Memberships

  • Robotics Symposia   Program Committee Member  

    2024.3 - Present   

      More details

    Committee type:Academic society

    researchmap

Academic Activities

  • Venue management

    The 31st Intelligent System Symposium (FAN 2023)  ( Japan ) 2023.9

     More details

    Type:Competition, symposium, etc. 

  • Screening of academic papers

    Role(s): Peer review

    2023

     More details

    Type:Peer review 

    Number of peer-reviewed articles in foreign language journals:5

    Proceedings of International Conference Number of peer-reviewed papers:6

    Proceedings of domestic conference Number of peer-reviewed papers:1

  • Screening of academic papers

    Role(s): Peer review

    2022

     More details

    Type:Peer review 

    Number of peer-reviewed articles in foreign language journals:3

    Number of peer-reviewed articles in Japanese journals:1

    Proceedings of International Conference Number of peer-reviewed papers:2

  • Screening of academic papers

    Role(s): Peer review

    2021

     More details

    Type:Peer review 

    Number of peer-reviewed articles in foreign language journals:2

    Proceedings of International Conference Number of peer-reviewed papers:1

  • Screening of academic papers

    Role(s): Peer review

    2020

     More details

    Type:Peer review 

    Proceedings of International Conference Number of peer-reviewed papers:1

  • Screening of academic papers

    Role(s): Peer review

    2019

     More details

    Type:Peer review 

    Proceedings of International Conference Number of peer-reviewed papers:1

Research Projects

  • Development of a Realistic LiDAR Simulator based on Deep Generative Models

    Grant number:23K16974  2023.4 - 2025.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Early-Career Scientists

    Kazuto Nakashima

      More details

    Authorship:Principal investigator  Grant type:Scientific research funding

    Machine learning models based on point cloud data from 3D LiDAR sensors are attracting attention as a way to realize accurate environment recognition for autonomous mobile robots, but the cost of annotating the large-scale point cloud data required to train such models is extremely high. One possible solution is to exploit labeled point clouds synthesized automatically by simulators; however, these simulators reproduce the sensors' measurement characteristics poorly, which degrades generalization to real environments. This research develops deep generative models that automatically profile the measurement characteristics of 3D LiDAR sensors and applies them to improving the realism of synthetic data.

    CiNii Research

  • Learning 3D LiDAR Reflectance Characteristics via Deep Generative Modeling and Its Sim2Real Applications

    2022

    Research Start Program

      More details

    Authorship:Principal investigator  Grant type:On-campus funds, funds, etc.

  • Development of garbage collecting robot for marine microplastics

    Grant number:20H00230  2020.4 - 2025.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (A)

    Ryo Kurazume, Kazuto Nakashima, Shoko Miyauchi, Akihiro Kawamura, Qi An

      More details

    Authorship:Coinvestigator(s)  Grant type:Scientific research funding

    This project researches and develops robot systems that automatically or remotely carry out the whole workflow, from detection through transport to collection, for fragmented plastic debris, a serious problem especially on the coasts of remote islands around Kyushu. Fragmented plastic debris is small (about 1 cm or less) and mixed with sand and shells, so it is difficult to detect with cameras mounted on robots. We therefore establish a detection principle for fragmented plastic debris that focuses on the reflectance data obtained from 3D LiDAR. In cooperation with several companies, we also develop debris-collecting robots that make use of quasi-zenith satellites and terrain-surveying robot swarms, and build a system that allows people at home and volunteers to take part in the work from remote locations.

    CiNii Research

  • Spatio-temporal Description and Scene Reconstruction of Intelligent Spaces Based on Multiple Person-Perspective Viewpoints

    Grant number:19J12159  2019.4 - 2020.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Research Fellowships for Young Scientists

      More details

    Authorship:Principal investigator  Grant type:Joint research

Class subject

  • Experiments in Electrical Engineering and Computer Science

    2023.10 - 2024.3   Second semester

  • Physical Informatics

    2025.4 - 2025.9   First semester

  • Programming Exercises (P)

    2024.6 - 2024.8   Summer quarter

FD Participation

  • 2025.4   Role:Participation   Title:The 1st All-University FD (Training for New Faculty Members) in FY2025

    Organizer:University-wide

  • 2025.3   Role:Participation   Title:[ISEE FD] Toward Strategic Acquisition of Awards, Fellow Titles, and Other Honors

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2024.11   Role:Participation   Title:[ISEE FD] Molecular Mapping of Synapses in the Brain and Elucidation of Their Information Processing Mechanisms

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2024.7   Role:Participation   Title:[ISEE FD] ELSI Empirical Research and Standardization for Social Robots

    Organizer:[Undergraduate school/graduate school/graduate faculty]

Media Coverage

Travel Abroad

  • 2017.10 - 2017.12

    Staying country name 1:United States   Staying institution name 1:NASA Jet Propulsion Laboratory, California Institute of Technology