Updated 2024/11/07


ナカシマ カズト
中嶋 一斗
NAKASHIMA KAZUTO
Affiliation
Faculty of Information Science and Electrical Engineering, Associate Professor
School of Engineering, Department of Electrical Engineering and Computer Science (concurrent)
Graduate School of Information Science and Electrical Engineering, Department of Information Science and Technology (concurrent)
Position
Associate Professor
Contact
Email address
Profile
Engaged in research on computer vision and machine learning
Homepage

Degrees

  • Doctor of Engineering

Research Themes and Keywords

  • Research theme: Missing-data restoration and domain adaptation for 3D LiDAR sensors

    Keywords: 3D LiDAR, deep learning

    Period: April 2020 -

  • Research theme: Outdoor scene understanding with 3D LiDAR sensors

    Keywords: 3D LiDAR, deep learning

    Period: October 2016 - December 2020

  • Research theme: Natural-language lifelog generation for human-robot symbiotic spaces

    Keywords: human-robot symbiotic space, informationally structured environment, lifelogging, deep learning

    Period: April 2016 - December 2020

Awards

  • Contribution Award

    November 2019   Joint Workshop on Machine Perception and Robotics (MPR)

  • Best Poster Presentation Award

    October 2018   Joint Workshop on Machine Perception and Robotics (MPR)

  • Student Encouragement Award

    August 2017   Meeting on Image Recognition and Understanding (MIRU)

  • Best Service Robotics Paper Award Finalist

    May 2017   IEEE International Conference on Robotics and Automation (ICRA)

Papers

  • Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition   Refereed   International journal

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE Access   November 2023

    Language: English   Publication type: Research paper (academic journal)

  • Lifelogging Caption Generation via Fourth-Person Vision in a Human-Robot Symbiotic Environment   Refereed   International journal

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    ROBOMECH Journal   September 2020

    Language: English   Publication type: Research paper (academic journal)

  • Virtual IR Sensing for Planetary Rovers: Improved Terrain Classification and Thermal Inertia Estimation   Refereed   International journal

    Yumi Iwashita, Kazuto Nakashima, Joseph Gatto, Shoya Higa, Norris Khoo, Ryo Kurazume, Adrian Stoica

    IEEE Robotics and Automation Letters (RA-L)   August 2020

    Language: English   Publication type: Research paper (academic journal)

  • Fukuoka Datasets for Place Categorization   Refereed   International journal

    Oscar Martinez Mozos, Kazuto Nakashima, Hojung Jung, Yumi Iwashita, Ryo Kurazume

    International Journal of Robotics Research (IJRR)   March 2019

    Language: English   Publication type: Research paper (academic journal)

  • Learning Geometric and Photometric Features from Panoramic LiDAR Scans for Outdoor Place Categorization   Refereed   International journal

    Kazuto Nakashima, Hojung Jung, Yuki Oto, Yumi Iwashita, Ryo Kurazume, Oscar Martinez Mozos

    Advanced Robotics (AR)   July 2018

    Language: English   Publication type: Research paper (academic journal)

  • RGB-Based Gait Recognition With Disentangled Gait Feature Swapping

    Yoshino, K; Nakashima, K; Ahn, J; Iwashita, Y; Kurazume, R

    IEEE ACCESS   12   115515 - 115531   2024   ISSN:2169-3536

    Publisher: IEEE Access

    Gait recognition enables the non-contact identification of individuals from a distance based on their walking patterns and body shapes. For vision-based gait recognition, covariates (e.g., clothing, baggage and background) can negatively impact identification. As a result, many existing studies extract gait features from silhouettes or skeletal information obtained through preprocessing, rather than directly from RGB image sequences. In contrast to preprocessing which relies on the fitting accuracy of models trained on different tasks, disentangled representation learning (DRL) is drawing attention as a method for directly extracting gait features from RGB image sequences. However, DRL learns to extract features of the target attribute from the differences among multiple inputs with various attributes, which means its separation performance depends on the variation and amount of the training data. In this study, aiming to enhance the variation and quantity of each subject's videos, we propose a novel data augmentation pipeline by feature swapping for RGB-based gait recognition. To expand the variety of training data, features of posture and covariates separated through DRL are paired with features extracted from different individuals, which enables the generation of images of subjects with new attributes. Dynamic gait features are extracted through temporal modeling from pose features of each frame, not only from real images but also from generated ones. The experiments demonstrate that the proposed pipeline increases both the quality of generated images and the identification accuracy. The proposed method also outperforms the RGB-based state-of-the-art method in most settings.

    DOI: 10.1109/ACCESS.2024.3445415

    Web of Science

    Scopus
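
    A minimal sketch of the disentangle-and-swap augmentation idea summarized above, written in PyTorch. This is illustrative only, not the paper's architecture: the encoder/decoder layers, feature sizes, and 64x64 frames are hypothetical placeholders meant to show how a pose code from one subject and a covariate ("style") code from another can be recombined into synthetic training frames.

        import torch
        import torch.nn as nn

        class GaitAutoencoder(nn.Module):
            """Toy encoder-decoder that splits a frame into pose and style codes."""
            def __init__(self, pose_dim=64, style_dim=64):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.to_pose = nn.Linear(64, pose_dim)    # gait-dependent factor
                self.to_style = nn.Linear(64, style_dim)  # covariates (clothing, background, ...)
                self.decoder = nn.Sequential(
                    nn.Linear(pose_dim + style_dim, 64 * 16 * 16), nn.ReLU(),
                    nn.Unflatten(1, (64, 16, 16)),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def encode(self, x):
                h = self.backbone(x)
                return self.to_pose(h), self.to_style(h)

            def decode(self, pose, style):
                return self.decoder(torch.cat([pose, style], dim=1))

        model = GaitAutoencoder()
        frames_a = torch.rand(8, 3, 64, 64)   # frames of subject A
        frames_b = torch.rand(8, 3, 64, 64)   # frames of subject B
        pose_a, _ = model.encode(frames_a)
        _, style_b = model.encode(frames_b)
        augmented = model.decode(pose_a, style_b)  # A's gait rendered with B's covariates

    In the paper, such generated images are added to the training data, and dynamic gait features are then extracted by temporal modeling over both real and generated sequences.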

  • Development of a Retrofit Backhoe Teleoperation System Using Cat Command

    Shibata K., Nishiura Y., Tamaishi Y., Matsumoto K., Nakashima K., Kurazume R.

    2024 IEEE/SICE International Symposium on System Integration, SII 2024   1486 - 1491   2024   ISBN:9798350312072

    Publication type: Research paper (international conference proceedings)   Publisher: 2024 IEEE/SICE International Symposium on System Integration, SII 2024

    Most existing retrofit remote-control systems for backhoes are large, hard-to-install, and expensive. Therefore, we propose an easy-to-install and inexpensive teleoperation system. The proposed system comprised remote-control and sensing systems. The remote-control system retrofits robot arm-based devices to 'Cat Command', a compact embedded teleoperation system with a limited communication range, and controls these devices via a 5G commercial network to realize control from a remote office. Because this system does not require any additional modifications to the embedded control unit in the cockpit, the operator can continue working in the cockpit even if the backhoe is remotely controlled. The system enables the remote control of various devices from an extremely long distance by changing the joint parts between the robot arm and the embedded remote-control device. The sensing system estimates the posture and position of the backhoe by attaching original sensing devices to the backhoe. In addition, a 360 camera was installed in the cockpit to transmit work images from the construction site to a remote office in real time. The sensing device was smaller and lighter than conventional devices. We confirmed that the proposed system can be used to operate a construction site backhoe from a remote office, and that the system can be used to excavate soil using an actual backhoe.

    DOI: 10.1109/SII58957.2024.10417625

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/sii/sii2024.html#ShibataNTMNK24

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models

    Nakashima K., Kurazume R.

    Proceedings - IEEE International Conference on Robotics and Automation   14724 - 14731   2024   ISSN:10504729 ISBN:9798350384574

    Publication type: Research paper (international conference proceedings)   Publisher: Proceedings - IEEE International Conference on Robotics and Automation

    Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. While existing approaches have demonstrated the feasibility of image-based LiDAR data generation using deep generative models, they still struggle with fidelity and training stability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks in recent years. To effectively train DDPMs in the LiDAR domain, we first conduct an in-depth analysis of data representation, loss functions, and spatial inductive biases. Leveraging our R2DM model, we also introduce a flexible LiDAR completion pipeline based on the powerful capabilities of DDPMs. We demonstrate that our method surpasses existing methods in generating tasks on the KITTI-360 and KITTI-Raw datasets, as well as in the completion task on the KITTI-360 dataset. Our project page can be found at https://kazuto1011.github.io/r2dm.

    DOI: 10.1109/ICRA57147.2024.10611480

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/icra/icra2024.html#NakashimaK24
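
    As a rough illustration of the DDPM recipe that R2DM builds on, the following PyTorch sketch shows the standard forward-noising step and epsilon-prediction loss on a two-channel range/reflectance image. The toy denoiser, image size, and linear noise schedule are assumptions for illustration; the paper's actual network, schedule, and parameterization are described in the publication itself.

        import torch
        import torch.nn.functional as F

        T = 1000
        betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumed)
        alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

        def diffusion_loss(denoiser, x0):
            """x0: (B, 2, H, W) range + reflectance image; the denoiser predicts the added noise."""
            b = x0.shape[0]
            t = torch.randint(0, T, (b,))
            noise = torch.randn_like(x0)
            ab = alphas_bar[t].view(b, 1, 1, 1)
            x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise  # sample from q(x_t | x_0)
            return F.mse_loss(denoiser(x_t, t), noise)        # epsilon-prediction objective

        # toy usage with a denoiser that ignores the timestep (illustration only)
        toy = torch.nn.Conv2d(2, 2, 3, padding=1)
        loss = diffusion_loss(lambda x, t: toy(x), torch.rand(4, 2, 64, 256))
        loss.backward()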

  • Analysis of Force Applied to Horizontal and Vertical Handrails with Impaired Motor Function

    Kihara, R; An, Q; Takita, K; Ishiguro, S; Nakashima, K; Kurazume, R

    2023 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION, SII   1 - 6   2023   ISSN:2474-2317 ISBN:979-8-3503-9868-7

    Publication type: Research paper (international conference proceedings)   Publisher: 2023 IEEE/SICE International Symposium on System Integration, SII 2023

    People depend on medical equipment to support their movements when their motor function declines. Our previous study developed a method to estimate motor function from the force applied to a vertical handrail while standing. However, the effect of the handrail direction on movement remains unclear. Additionally, the force applied to the handrail and floor reaction forces on the buttocks and feet may also change with a decline in motor function. Here, this study constructed a system with force plates and handles in both the horizontal and vertical directions to measure the forces applied to the handrails, buttocks, and feet. Furthermore, the change in accuracy of the estimation of motor function, depending on the direction of the handrails and input information, was investigated. In the experiment, healthy participants stood up using a handrail with unrestricted movement and while wearing elderly experience kits that artificially impaired their motor function. The results showed that people exert more downward force on horizontal handrails than on vertical handrails. However, people rely on the vertical handrail for a longer period of time to stabilize anterior-posterior movement. These results indicate that different directions of handrails cause different strategies of the standing-up motion. Additionally, the accuracy of the estimation of motor function improved when the horizontal handrail was used rather than the vertical handrail. This suggests that the classification accuracy could be improved by using different handrail directions, depending on the subject's condition and standing-up motion.

    DOI: 10.1109/SII55687.2023.10039452

    Web of Science

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/sii/sii2023.html#KiharaATINK23

  • Evaluation of ground stiffness using multiple accelerometers on the ground during compaction by vibratory rollers

    Tamaishi Y., Fukuda K., Nakashima K., Maeda R., Matsumoto K., Kurazume R.

    Proceedings of the International Symposium on Automation and Robotics in Construction   262 - 269   2023   ISBN:9780645832204

    Publisher: Proceedings of the International Symposium on Automation and Robotics in Construction

    Soil compaction is one of the most important basic elements in construction work because it directly affects the quality of structures. Compaction work using vibratory rollers is generally applied to strengthen ground stiffness, and the method that focuses on the number of compaction cycles is widely used to manage the ground stiffness by vibratory rollers. In contrast to this method, the continuous compaction control (CCC) using accelerometers installed on the vibratory rollers has been proposed as a quantitative evaluation method more suited to actual ground conditions. This method quantifies the distortion rate of the acceleration waveform of the vibratory roller. However, this method based on acceleration response has problems in measurement discrimination accuracy and sensor durability because the accelerometer is installed on the vibration roller, which is the source of vibration. In this paper, we propose a new ground stiffness evaluation method using multiple accelerometers installed on the ground surface. The proposed method measures the acceleration response during compaction work by vibratory rollers using multiple accelerometers on the ground surface. Experiments show the proposed method has a higher discrimination than the conventional methods.

    DOI: 10.22260/ISARC2023/0037

    Scopus
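
    The conventional acceleration-based evaluation mentioned in the abstract quantifies how distorted the roller's vibration waveform becomes as the ground stiffens. One common way to express such distortion is the ratio of the second harmonic to the fundamental of the excitation frequency; the NumPy sketch below computes that ratio for a single signal. This specific metric is an assumption for illustration, and the multi-sensor, ground-side processing proposed in the paper is not reproduced here.

        import numpy as np

        def harmonic_distortion_ratio(accel, fs, f0):
            """accel: acceleration samples, fs: sampling rate [Hz], f0: excitation frequency [Hz]."""
            spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
            freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
            a1 = spectrum[np.argmin(np.abs(freqs - f0))]      # fundamental amplitude
            a2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]  # second-harmonic amplitude
            return a2 / a1

        # synthetic example: a slightly distorted 30 Hz vibration signal
        fs, f0 = 2000, 30.0
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
        print(harmonic_distortion_ratio(signal, fs, f0))      # approximately 0.2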

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data

    Nakashima, K; Iwashita, Y; Kurazume, R

    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV)   1256 - 1266   2023   ISSN:2472-6737 ISBN:978-1-6654-9346-8

    Publication type: Research paper (international conference proceedings)   Publisher: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023

    3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled the issue by learning inter-domain mapping, while the transferability is constrained by the training configuration and the training is susceptible to peculiar lossy noises called ray-drop. To address the issue, this paper proposes a generative model of LiDAR range images applicable to the data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial networks along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with the point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation. We demonstrate that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.

    DOI: 10.1109/WACV56688.2023.00131

    Web of Science

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/wacv/wacv2023.html#NakashimaIK23
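
    The "differentiable ray-drop effect" mentioned in the abstract can be pictured as a per-pixel keep/drop mask that is sampled in a way that still lets gradients flow into the generator. The sketch below uses a Gumbel-sigmoid (relaxed Bernoulli) soft mask as one plausible way to do this; it is an illustrative approximation, not the paper's exact formulation.

        import torch

        def apply_ray_drop(range_image, keep_logits, temperature=0.5):
            """range_image, keep_logits: (B, 1, H, W). Returns a softly masked range image."""
            u = torch.rand_like(keep_logits).clamp(1e-6, 1 - 1e-6)
            logistic_noise = torch.log(u) - torch.log(1 - u)
            keep_prob = torch.sigmoid((keep_logits + logistic_noise) / temperature)
            return range_image * keep_prob  # dropped rays shrink toward zero, differentiably

        generated = torch.rand(2, 1, 64, 1024, requires_grad=True)  # generated range image
        logits = torch.zeros(2, 1, 64, 1024, requires_grad=True)    # predicted keep-logits
        masked = apply_ray_drop(generated, logits)
        masked.sum().backward()  # gradients reach both the range image and the logits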

  • Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition

    Ahn, J; Nakashima, K; Yoshino, K; Iwashita, Y; Kurazume, R

    IEEE ACCESS   11   129749 - 129762   2023   ISSN:2169-3536

    Publication type: Research paper (academic journal)   Publisher: IEEE Access

    Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation of interests. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at a range of over 100 m in real time, have attracted significant attention in the field of autonomous mobile robots and self-driving vehicles. Compared with general cameras, LiDAR sensors have rarely been used for biometrics due to the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs a two-scale spatial resolution to enhance immunity to varying point cloud densities. In addition, this method learns the gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view, cross-distance challenges, and practical scenarios.

    DOI: 10.1109/ACCESS.2023.3333037

    Web of Science

    Scopus

    researchmap
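
    The channel-wise recalibration idea behind the attention block described above can be sketched with a squeeze-and-excitation style gate: features from different resolutions and viewpoints are concatenated along channels and re-weighted before fusion. The paper's actual block differs in detail; the module below is only a generic illustration.

        import torch
        import torch.nn as nn

        class ChannelRecalibration(nn.Module):
            """Re-weights channels of a fused feature map (SE-style gate)."""
            def __init__(self, channels, reduction=4):
                super().__init__()
                self.gate = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(channels, channels // reduction), nn.ReLU(),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid(),
                )

            def forward(self, x):  # x: (B, C, H, W)
                w = self.gate(x).unsqueeze(-1).unsqueeze(-1)
                return x * w       # channel-wise re-weighting

        fused = torch.cat([torch.rand(4, 32, 16, 16),   # e.g., left-side-view features
                           torch.rand(4, 32, 16, 16)],  # e.g., back-view features
                          dim=1)
        out = ChannelRecalibration(64)(fused)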

  • Development of a distributed sensor pod for compacted-ground evaluation

    Kentaro Fukuda, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the JSME Conference on Robotics and Mechatronics   2022 ( 0 )   1A1-E04   2022   eISSN:24243124

    Language: Japanese   Publisher: The Japan Society of Mechanical Engineers

    DOI: 10.1299/jsmermd.2022.1a1-e04

    CiNii Research

  • 2V-Gait: Gait Recognition using 3D LiDAR Robust to Changes in Walking Direction and Measurement Distance

    Ahn, J; Nakashima, K; Yoshino, K; Iwashita, Y; Kurazume, R

    2022 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII 2022)   602 - 607   2022   ISSN:2474-2317 ISBN:978-1-6654-4540-5

    Publication type: Research paper (international conference proceedings)   Publisher: 2022 IEEE/SICE International Symposium on System Integration, SII 2022

    Gait recognition, which is a biometric identifier for individual walking patterns, is utilized in many applications, such as criminal investigation and identification systems, because it can be applied at a long distance and requires no explicit cooperation of the subjects. In general, cameras are used for gait recognition, and several methods in previous studies have used depth information captured by RGB-D cameras. However, RGB-D cameras are limited in terms of their measurement distance and are difficult to access outdoors. In recent years, real-time multi-layer 3D LiDAR, which can obtain 3D range images of a target at ranges of over 100 m, has attracted significant attention for use in autonomous mobile robots, serving as eyes for obstacles detection and navigation. Compared with cameras, such 3D LiDAR has rarely been used for biometrics owing to its low spatial resolution. However, considering the unique characteristics of 3D LiDAR, such as the robustness of the illumination conditions, long measurement distances, and wide-angle scanning, the approach has the potential to be applied outdoors as a biometric identifier. The present paper describes a gait recognition system, called 2V-Gait, which is robust to variations in the walking direction of a subject and the distance measured from the 3D LiDAR. To improve the performance of gait recognition, we leverage the unique characteristics of 3D LiDAR, which are not included in regular cameras. Extensive experiments on our dataset show the effectiveness of the proposed approach.

    DOI: 10.1109/SII52469.2022.9708899

    Web of Science

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/sii/sii2022.html#AhnNYIK22
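
    Several of the LiDAR-based papers above (gait recognition and generative modeling alike) operate on range images obtained by projecting a point cloud onto an azimuth-elevation grid. The NumPy sketch below shows that generic projection; the 64 x 1024 grid and the +/-25 degree vertical field of view are placeholder sensor parameters, not the actual setup used in the papers.

        import numpy as np

        def points_to_range_image(points, h=64, w=1024, fov_up=25.0, fov_down=-25.0):
            """points: (N, 3) xyz in the sensor frame. Returns an (h, w) range image."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            r = np.linalg.norm(points, axis=1)
            yaw = np.arctan2(y, x)                                      # azimuth angle
            pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1, 1))  # elevation angle
            u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
            fov = np.radians(fov_up) - np.radians(fov_down)
            v = ((np.radians(fov_up) - pitch) / fov * h).astype(int).clip(0, h - 1)
            image = np.full((h, w), np.inf)
            np.minimum.at(image, (v, u), r)   # keep the nearest return per pixel
            return image

        img = points_to_range_image(np.random.randn(10000, 3) * 10.0)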

  • Gait Recognition using Identity-Aware Adversarial Data Augmentation

    Yoshino, K; Nakashima, K; Ahn, J; Iwashita, Y; Kurazume, R

    2022 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII 2022)   596 - 601   2022   ISSN:2474-2317 ISBN:978-1-6654-4540-5

    Publication type: Research paper (international conference proceedings)   Publisher: 2022 IEEE/SICE International Symposium on System Integration, SII 2022

    Gait recognition is a non-contact person identification method that utilizes cameras installed at a distance. However, gait images contain person-agnostic elements (covariates) such as clothing, and the removal of covariates is important for identification with high performance. Disentanglement representation learning, which separates gait-dependent information such as posture from covariates by unsupervised learning, has been attracting attention as a method to remove covariates. However, because the amount of gait data is negligible compared to other computer vision tasks, such as image recognition, the separation performance of existing methods is insufficient. In this study, we propose a gait recognition method to improve the separation performance, which augments the training data by adversarial generation based on identity features, separated by disentanglement representation learning. The proposed method first separates gait-dependent features (pose features) and appearance-related covariate features (style features) from gait videos based on disentanglement representation learning. Then, synthesized gait images are generated by exchanging pose features between gait images of the person under different walking conditions, followed by adding them to the training data. The experiments indicate that our method can improve the separation performance, and generate high-quality gait images that are effective for data augmentation.

    DOI: 10.1109/SII52469.2022.9708776

    Web of Science

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/sii/sii2022.html#YoshinoNAIK22

  • Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors

    An Q., Tanaka A., Nakashima K., Sumioka H., Shiomi M., Kurazume R.

    Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics   2022-October   1874 - 1879   2022   ISSN:1062922X ISBN:9781665452588

    Publication type: Research paper (international conference proceedings)   Publisher: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics

    Assisting patients with dementia is a significant social issue. Currently, to assist patients with dementia, a multimodal care technique called Humanitude is gaining popularity. In Humanitude, the patients are assisted through various techniques to stand up independently by utilizing their motor functions as much as possible. Humanitude care techniques encourage caregivers to increase the area of contact with patients during the sit-to-stand motion. However, Humanitude care techniques are not accurately performed by novice caregivers. Therefore, in this study, a smock-type wearable sensor was developed to measure the proximity between caregivers and care recipients during sit-to-stand motion assistance. A measurement experiment was conducted to evaluate the proximity differences between Humanitude care and simulated novice care. In addition, the effects of different care techniques on the center of mass (CoM) trajectory and muscle activity of the care recipients were investigated. The results showed that the caregivers tend to bring their top and middle trunk closer in Humanitude care compared with novice simulated care. Furthermore, it was observed that the CoM trajectory and muscle activity under Humanitude care were similar to those observed when the care recipient stands up independently. These results validate the effectiveness of Humanitude care and provide useful information for teaching techniques in Humanitude.

    DOI: 10.1109/SMC53654.2022.9945156

    Scopus

    researchmap

    Other links: https://dblp.uni-trier.de/db/conf/smc/smc2022.html#AnTNSSK22

  • Development of a retrofit backhoe teleoperation system

    Yuki Nishiura, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the JSME Conference on Robotics and Mechatronics   2022 ( 0 )   1P1-C07   2022   eISSN:24243124

    Language: Japanese   Publisher: The Japan Society of Mechanical Engineers

    DOI: 10.1299/jsmermd.2022.1p1-c07

    CiNii Research


Lectures and Oral Presentations

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models   International conference

    Kazuto Nakashima, Ryo Kurazume

    IEEE International Conference on Robotics and Automation (ICRA)  May 2024

    Language: English   Presentation type: Oral presentation (general)

    Country: Japan

  • Learning to Drop Points for LiDAR Scan Synthesis   International conference

    Kazuto Nakashima, Ryo Kurazume

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  September 2021

    Language: English

    Country: Czech Republic

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data   International conference

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)  January 2023

    Language: English

    Country: United States

    Repository URL: https://hdl.handle.net/2324/7232999

  • Fourth-Person Sensing for Pro-active Services   International conference

    Yumi Iwashita, Kazuto Nakashima, Yoonseok Pyo, Ryo Kurazume

    International Conference on Emerging Security Technologies (EST)  September 2014

    Language: English   Presentation type: Oral presentation (general)

    Country: Spain

  • Fourth-Person Sensing for a Service Robot   International conference

    Kazuto Nakashima, Yumi Iwashita, Pyo Yoonseok, Asamichi Takamine, Ryo Kurazume

    IEEE Sensors  November 2015

    Language: English   Presentation type: Oral presentation (general)

    Country: Republic of Korea

  • Automatic Houseware Registration System for Informationally-Structured Environment   International conference

    Kazuto Nakashima, Julien Girard, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  December 2016

    Language: English   Presentation type: Oral presentation (general)

    Country: Japan

  • Feasibility Study of IoRT Platform "Big Sensor Box"   International conference

    Ryo Kurazume, Yoonseok Pyo, Kazuto Nakashima, Akihiro Kawamura, Tokuo Tsuji

    IEEE International Conference on Robotics and Automation (ICRA)  May 2017

    Language: English   Presentation type: Oral presentation (general)

    Country: Singapore

  • Previewed Reality: Near-Future Perception System   International conference

    Yuta Horikawa, Asuka Egashira, Kazuto Nakashima, Akihiro Kawamura, Ryo Kurazume

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  September 2017

    Language: English   Presentation type: Oral presentation (general)

    Country: Canada

  • Recognizing Outdoor Scenes by Convolutional Features of Omni-Directional LiDAR Scans   International conference

    Kazuto Nakashima, Seungwoo Nham, Hojung Jung, Yumi Iwashita, Ryo Kurazume, Oscar M Mozos

    IEEE/SICE International Symposium on System Integration (SII)  December 2017

    Language: English   Presentation type: Oral presentation (general)

    Country: Taiwan

  • Virtual Sensors Determined Through Machine Learning   International conference

    Yumi Iwashita, Adrian Stoica, Kazuto Nakashima, Ryo Kurazume, Jim Torresen

    World Automation Congress (WAC)  June 2018

    Language: English   Presentation type: Oral presentation (general)

    Country: United States

  • Fourth-person Captioning: Describing Daily Events by Uni-supervised and Tri-regularized Training   International conference

    Kazuto Nakashima, Yumi Iwashita, Akihiro Kawamura, Ryo Kurazume

    IEEE International Conference on Systems, Man, and Cybernetics (SMC)  October 2018

    Language: English   Presentation type: Oral presentation (general)

    Country: Japan

  • TU-Net and TDeepLab: Deep Learning-based Terrain Classification Robust to Illumination Changes, Combining Visible and Thermal Imagery   International conference

    Yumi Iwashita, Kazuto Nakashima, Adrian Stoica, Ryo Kurazume

    IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)  March 2019

    Language: English   Presentation type: Oral presentation (general)

    Country: United States

  • MU-Net: Deep Learning-Based Thermal IR Image Estimation From RGB Image   International conference

    Yumi Iwashita, Kazuto Nakashima, Sir Rafol, Adrian Stoica, Ryo Kurazume

    IEEE/CVF Computer Vision and Pattern Recognition Conference Workshops (CVPRW)  June 2019

    Language: English   Presentation type: Oral presentation (general)

    Country: United States

  • 2V-Gait: Gait Recognition Using 3D LiDAR Robust to Changes in Walking Direction and Measurement Distance   International conference

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  January 2022

    Language: English   Presentation type: Oral presentation (general)

    Country: Norway

  • Gait Recognition using Identity-Aware Adversarial Data Augmentation   International conference

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  January 2022

    Language: English   Presentation type: Oral presentation (general)

    Country: Norway

  • Understanding Humanitude Care for Sit-To-Stand Motion by Wearable Sensors   International conference

    Qi An, Akito Tanaka, Kazuto Nakashima, Hidenobu Sumioka, Masahiro Shiomi, Ryo Kurazume

    IEEE International Conference on Systems, Man, and Cybernetics (SMC)  October 2022

    Language: English   Presentation type: Oral presentation (general)

    Country: Czech Republic

  • Analysis of Force Applied to Horizontal and Vertical Handrails with Impaired Motor Function   International conference

    Ryoya Kihara, Qi An, Kensuke Takita, Shu Ishiguro, Kazuto Nakashima, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  January 2023

    Language: English   Presentation type: Oral presentation (general)

    Country: United States

  • Evaluation of Ground Stiffness using Multiple Accelerometers on the Ground during Compaction by Vibratory Rollers   International conference

    Yusuke Tamaishi, Kentaro Fukuda, Kazuto Nakashima, Ryuichi Maeda, Kohei Matsumoto, Ryo Kurazume

    International Symposium on Automation and Robotics in Construction (ISARC)  July 2023

    Language: English   Presentation type: Oral presentation (general)

    Country: India

  • ROS2-TMS for Construction: CPS platform for earthwork sites   International conference

    Ryuichi Maeda, Kohei Matsumoto, Tomoya Kouno, Tomoya Itsuka, Kazuto Nakashima, Yusuke Tamaishi, Ryo Kurazume

    International Symposium on Artificial Life and Robotics (AROB)  January 2024

    Language: English   Presentation type: Oral presentation (invited/special)

    Country: Japan

  • Development of a Retrofit Backhoe Teleoperation System using Cat Command   International conference

    Koshi Shibata, Yuki Nishiura, Yusuke Tamaishi, Kohei Matsumoto, Kazuto Nakashima, Ryo Kurazume

    IEEE/SICE International Symposium on System Integration (SII)  January 2024

    Language: English   Presentation type: Oral presentation (general)

    Country: Vietnam


MISC

  • Fast LiDAR Upsampling using Conditional Diffusion Models

    Sander Elias Magnussen Helgesen, Kazuto Nakashima, Jim Tørresen, Ryo Kurazume

    CoRR   abs/2405.04889   May 2024

    The search for refining 3D LiDAR data has attracted growing interest
    motivated by recent techniques such as supervised learning or generative
    model-based methods. Existing approaches have shown the possibilities for using
    diffusion models to generate refined LiDAR data with high fidelity, although
    the performance and speed of such methods have been limited. These limitations
    make it difficult to execute in real-time, causing the approaches to struggle
    in real-world tasks such as autonomous navigation and human-robot interaction.
    In this work, we introduce a novel approach based on conditional diffusion
    models for fast and high-quality sparse-to-dense upsampling of 3D scene point
    clouds through an image representation. Our method employs denoising diffusion
    probabilistic models trained with conditional inpainting masks, which have been
    shown to give high performance on image completion tasks. We introduce a series
    of experiments, including multiple datasets, sampling steps, and conditional
    masks. This paper illustrates that our method outperforms the baselines in
    sampling speed and quality on upsampling tasks using the KITTI-360 dataset.
    Furthermore, we illustrate the generalization ability of our approach by
    simultaneously training on real-world and synthetic datasets, introducing
    variance in quality and environments.

    DOI: 10.48550/arXiv.2405.04889

    arXiv

    researchmap

    Other links: http://arxiv.org/pdf/2405.04889v2
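
    The conditional-inpainting idea described in the abstract can be pictured as follows: the sparse input occupies a subset of scan lines in the range image, and at every reverse-diffusion step the observed lines are clamped back in while the remaining lines are filled by the model. The PyTorch sketch below shows that generic masked-sampling loop; the denoiser, step count, and mask layout are placeholders, and a full implementation would also re-noise the observed part to the current noise level.

        import torch

        @torch.no_grad()
        def upsample_with_mask(denoise_step, x_known, mask, steps=50):
            """x_known: (B, C, H, W) sparse range image; mask is 1 where a scan line is observed."""
            x = torch.randn_like(x_known)
            for t in reversed(range(steps)):
                x = denoise_step(x, t)               # one reverse-diffusion step
                x = mask * x_known + (1 - mask) * x  # re-impose the observed scan lines
            return x

        # toy usage: every 4th scan line is observed (4x vertical upsampling)
        sparse = torch.zeros(1, 1, 64, 1024)
        mask = torch.zeros_like(sparse)
        mask[:, :, ::4, :] = 1.0
        dense = upsample_with_mask(lambda x, t: x - 0.01 * x, sparse * mask, mask)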

  • LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models.

    Kazuto Nakashima, Ryo Kurazume

    CoRR   abs/2309.09256   September 2023

    Generative modeling of 3D LiDAR data is an emerging task with promising
    applications for autonomous mobile robots, such as scalable simulation, scene
    manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing
    approaches have shown the feasibility of image-based LiDAR data generation
    using deep generative models while still struggling with the fidelity of
    generated data and training instability. In this work, we present R2DM, a novel
    generative model for LiDAR data that can generate diverse and high-fidelity 3D
    scene point clouds based on the image representation of range and reflectance
    intensity. Our method is based on the denoising diffusion probabilistic models
    (DDPMs), which have demonstrated impressive results among generative model
    frameworks and have been significantly progressing in recent years. To
    effectively train DDPMs on the LiDAR domain, we first conduct an in-depth
    analysis regarding data representation, training objective, and spatial
    inductive bias. Based on our designed model R2DM, we also introduce a flexible
    LiDAR completion pipeline using the powerful properties of DDPMs. We
    demonstrate that our method outperforms the baselines on the generation task of
    KITTI-360 and KITTI-Raw datasets and the upsampling task of KITTI-360 datasets.
    Our code and pre-trained weights will be available at
    https://github.com/kazuto1011/r2dm.

    DOI: 10.48550/arXiv.2309.09256

    arXiv

    researchmap

    Other links: http://arxiv.org/pdf/2309.09256v1

  • Development of a distributed sensor pod for ground stiffness evaluation and safety management in civil engineering work

    Kentaro Fukuda, Kazuto Nakashima, Yusuke Tamaishi, Ryuichi Maeda, Kohei Matsumoto, Ryo Kurazume

    Proceedings of the Robotics Symposia   28th   2023   ISSN:1881-7300

  • Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data.

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    CoRR   abs/2210.11750   October 2022

    3D LiDAR sensors are indispensable for the robust vision of autonomous mobile
    robots. However, deploying LiDAR-based perception algorithms often fails due to
    a domain gap from the training environment, such as inconsistent angular
    resolution and missing properties. Existing studies have tackled the issue by
    learning inter-domain mapping, while the transferability is constrained by the
    training configuration and the training is susceptible to peculiar lossy noises
    called ray-drop. To address the issue, this paper proposes a generative model
    of LiDAR range images applicable to the data-level domain transfer. Motivated
    by the fact that LiDAR measurement is based on point-by-point range imaging, we
    train an implicit image representation-based generative adversarial networks
    along with a differentiable ray-drop effect. We demonstrate the fidelity and
    diversity of our model in comparison with the point-based and image-based
    state-of-the-art generative models. We also showcase upsampling and restoration
    applications. Furthermore, we introduce a Sim2Real application for LiDAR
    semantic segmentation. We demonstrate that our method is effective as a
    realistic ray-drop simulator and outperforms state-of-the-art methods.

    DOI: 10.48550/arXiv.2210.11750

    arXiv

    researchmap

    Other links: http://arxiv.org/pdf/2210.11750v1

  • Deep generative modeling of 3D LiDAR data using implicit image representations

    Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Robotics Symposia   27th   2022   ISSN:1881-7300

  • Robustness evaluation of gait recognition against measurement distance and walking direction for point-cloud projection methods of a 3D LiDAR sensor

    Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   40th   2022

  • Development of a retrofit backhoe teleoperation system

    Yuki Nishiura, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the JSME Conference on Robotics and Mechatronics (CD-ROM)   2022   ISSN:2424-3124

  • A ground stiffness evaluation method based on waveform distortion of multi-point synchronized vibration data

    Kentaro Fukuda, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the Construction Robot Symposium (CD-ROM)   20th   2022

  • Gait recognition using feature swapping between different individuals to improve gait-feature extraction accuracy

    Koki Yoshino, Kazuto Nakashima, Jeongho Ahn, Yumi Iwashita, Ryo Kurazume

    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM)   40th   2022

  • Development of a distributed sensor pod for compacted-ground evaluation (2nd report: quantification of ground stiffness based on waveform distortion of multi-point synchronized vibration data)

    Kentaro Fukuda, Kazuto Nakashima, Ryo Kurazume

    Proceedings of the JSME Conference on Robotics and Mechatronics (CD-ROM)   2022   ISSN:2424-3124


Academic Society Memberships

  • The Robotics Society of Japan

  • IEEE Robotics and Automation Society (RAS)

  • IEEE

Academic Contribution Activities

  • Venue management

    The 31st Intelligent System Symposium (FAN 2023)  September 2023

    Type: Conferences and symposia

  • Review of academic papers

    Role: Peer review

    2023

    Type: Peer review

    Foreign-language journals, papers reviewed: 5

    International conference proceedings, papers reviewed: 6

    Domestic conference proceedings, papers reviewed: 1

  • Review of academic papers

    Role: Peer review

    2022

    Type: Peer review

    Foreign-language journals, papers reviewed: 3

    Japanese journals, papers reviewed: 1

    International conference proceedings, papers reviewed: 2

  • Review of academic papers

    Role: Peer review

    2021

    Type: Peer review

    Foreign-language journals, papers reviewed: 2

    International conference proceedings, papers reviewed: 1

  • Review of academic papers

    Role: Peer review

    2020

    Type: Peer review

    International conference proceedings, papers reviewed: 1

  • Review of academic papers

    Role: Peer review

    2019

    Type: Peer review

    International conference proceedings, papers reviewed: 1


Research Projects (Joint Research, Competitive Funding, etc.)

  • Development of a photorealistic LiDAR simulator based on deep generative models

    April 2023 - March 2026

    Role: Principal investigator

  • Development of a photorealistic LiDAR simulator based on deep generative models

    Project/Area number: 23K16974  2023 - 2025

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Early-Career Scientists

    Role: Principal investigator  Funding type: KAKENHI

  • Learning 3D LiDAR reflection properties via deep generative modeling and Sim2Real applications

    2022

    Research Start Program

    Role: Principal investigator  Funding type: Internal university funding

  • Research and development of a robotic system for collecting fragmented marine plastic debris

    April 2020 - March 2025

    Role: Co-investigator

  • Research and development of a robotic system for collecting fragmented marine plastic debris

    Project/Area number: 20H00230  2020 - 2024

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Scientific Research (A)

    Role: Co-investigator  Funding type: KAKENHI

  • Spatio-temporal description and scene reconstruction of informationally structured environments based on multi-person viewpoints

    2019 - 2020

    Japan Society for the Promotion of Science  Research Fellowship

    Role: Principal investigator  Funding type: Joint research


Courses Taught

  • Programming Exercises

    June 2024 - August 2024   Summer quarter

  • Electrical Engineering and Computer Science Laboratory

    October 2023 - March 2024   Second semester

Media Coverage

  • Kyushu University evaluates the quality of ground compaction with sensor pods, reducing wasted construction work

    Nikkan Kogyo Shimbun  June 2022

Overseas Travel History

  • October 2017 - December 2017

    Country of stay: United States   Host institution: NASA Jet Propulsion Laboratory, California Institute of Technology