Updated on 2025/06/28

Information

 

MASAI KATSUTOSHI
Organization
Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology, Assistant Professor
School of Engineering, Department of Electrical Engineering and Computer Science (Concurrent)
Graduate School of Information Science and Electrical Engineering, Department of Information Science and Technology (Concurrent)
Title
Assistant Professor
Profile
"Wearable facial expression recognition using light sensors" "Exploring methods for facilitating communication in VR" "HCI/VR technologies for supporting the acquisition of sports skills"

Research Areas

  • Informatics / Human interface and interaction

Degree

  • Ph.D. in Engineering (2018.9, Keio University)

Research History

  • Kyushu University, Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology, Assistant Professor

    2023.4 - Present

Education

  • Keio University    

    2016.4 - 2018.9

Research Interests・Research Keywords

  • Research themes: wearable facial expression recognition using an optical sensor array; VR/IoT sports support systems

    Keywords: Human Computer Interaction, Affective Computing, Wearable Computing

    Research period: 2023.6 - 2025.12

Awards

  • The 19th Virtual Reality Society of Japan Paper Award

    2017.9   Virtual Reality Society of Japan   Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto, "AffectiveWear: An Eyewear Device for Recognizing the Wearer's Everyday Facial Expressions" (Transactions of the Virtual Reality Society of Japan, Vol.21, No.2, pp.385-394, 2016)

Papers

  • Facial Gesture Classification with Few-shot Learning Using Limited Calibration Data from Photo-reflective Sensors on Smart Eyewear Reviewed

    Katsutoshi Masai, Maki Sugimoto, Brian Iwana

    Proceedings of the International Conference on Mobile and Ubiquitous Multimedia   2024.12

    Authorship:Lead author, Corresponding author  


  • Analysis of the Effect of Facial Self-Similarity on the Impression of AI Agents Reviewed

    Masayasu Niwa, Katsutoshi Masai, Shigeo Yoshida, Maki Sugimoto

    IPSJ Journal (情報処理学会論文誌)   2024.1

    Language:Japanese   Publishing type:Research paper (scientific journal)  

  • Seamless Avatar Expression Transfer: Merging Camera and Smart Eyewear Embedded with Photo-Reflective Sensor Technologies. Reviewed

    Katsutoshi Masai

    VR Workshops   591 - 593   2024   ISBN:979-8-3503-7450-6

    Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024  

    Virtual Reality (VR) offers new ways to interact that are different from usual communication in the real world. In VR, avatars are key displays for showing users' feelings and personalities non-verbally. Addressing the need for more convenient and continuous expression representation, we introduce a novel method for facial expression transfer in VR. This method integrates camera systems with photo-reflective sensors embedded in eyewear, overcoming the limitations of traditional camera-based tracking. By offering smoother tracking and reducing manual calibration needs, this approach highlights the potential of multimodal technology to enhance non-verbal communication in virtual environments. Building on this, we demonstrated an example implementation of smile transfer and discussed future directions.

    DOI: 10.1109/VRW62533.2024.00114

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/vr/vr2024w.html#Masai24

  • Analysis of Co-viewing with Virtual Agent Towards Affectively Immersive Interaction in Virtual Spaces

    Masai, K; Morita, K; Sugimoto, M

    2024 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT, ISMAR-ADJUNCT 2024   355 - 356   2024   ISSN:2771-1102 ISBN:979-8-3315-0692-6 eISSN:2771-1110

    Publisher:Proceedings - 2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2024  

    This study investigates how co-viewing with agents in virtual spaces impacts user experiences and perceptions. Advances in virtual reality (VR) technology have enabled more immersive and natural interactions in virtual environments. Nonverbal communication plays a significant role in affectively immersive experiences for users. In our pilot study, participants co-viewed a comedy show with programmed agents that exhibited laughter and facial expressions. While overall statistical analysis showed no significant difference in viewing experiences with or without agents, some participants reported enhanced engagement and enjoyment when co-viewing with agents. Insights from participants highlight the importance of individual preferences, agent design, and relationship-building to improve social VR systems.

    DOI: 10.1109/ISMAR-Adjunct64951.2024.00088

    Web of Science

    Scopus

  • Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition With Photo-Reflective Sensors Embedded in a Head-Mounted Display. Reviewed

    Fumihiko Nakamura, Masaaki Murakami, Katsuhiro Suzuki, Masaaki Fukuoka, Katsutoshi Masai, Maki Sugimoto

    IEEE Transactions on Visualization and Computer Graphics   29 ( 10 )   4124 - 4139   2023.10   ISSN:1941-0506

    Language:Others   Publishing type:Research paper (scientific journal)   Publisher:IEEE Transactions on Visualization and Computer Graphics  

    As one of the facial expression recognition techniques for Head-Mounted Display (HMD) users, embedded photo-reflective sensors have been used. In this paper, we investigate how gaze and face directions affect facial expression recognition using the embedded photo-reflective sensors. First, we collected a dataset of five facial expressions (Neutral, Happy, Angry, Sad, Surprised) while looking in diverse directions by moving 1) the eyes and 2) the head. Using the dataset, we analyzed the effect of gaze and face directions by constructing facial expression classifiers in five ways and evaluating the classification accuracy of each classifier. The results revealed that the single classifier that learned the data for all gaze points achieved the highest classification performance. Then, we investigated which facial part was affected by the gaze and face direction. The results showed that the gaze directions affected the upper facial parts, while the face directions affected the lower facial parts. In addition, by removing the bias of facial expression reproducibility, we investigated the pure effect of gaze and face directions in three conditions. The results showed that, in terms of gaze direction, building classifiers for each direction significantly improved the classification accuracy. However, in terms of face directions, there were slight differences between the classifier conditions. Our experimental results implied that multiple classifiers corresponding to multiple gaze and face directions improved facial expression recognition accuracy, but collecting the data of the vertical movement of gaze and face is a practical solution to improving facial expression recognition accuracy.

    DOI: 10.1109/TVCG.2022.3179766

    Scopus

    researchmap

  • Investigating Effects of Facial Self-Similarity Levels on the Impression of Virtual Agents in Serious/Non-Serious Contexts. Reviewed

    Masayasu Niwa, Katsutoshi Masai, Shigeo Yoshida, Maki Sugimoto

    Proceedings of the Augmented Humans International Conference 2023(AHs)   221 - 230   2023.3   ISBN:9781450399845

    Language:Others   Publishing type:Research paper (other academic)   Publisher:ACM  

    Recent technological advances have enabled the use of AI agents to assist with human tasks and augment human cognitive abilities in a variety of contexts, including decision making. It is critical that users trust these AI agents in order to use them effectively. Given that people tend to trust other people who are similar to themselves, incorporating features of one's own face into the AI agent's face may improve one's trust in the AI agent. However, it is still unclear how impressions differ when comparing agents with the same appearance as one's own and some similarities under the same conditions. Recognizing the appropriate level of similarity when using a self-similar agent is important for establishing a trustworthy agent relationship between people and the AI agent. Therefore, we investigated the effect of the degree of self-similarity of the face of the AI agent on the user's trust in the agent. We examined users' impressions of four AI agents with different degrees of face self-similarity in different scenarios. The results showed that the AI agent, whose similarity to the user's facial feature was slightly recognizable but not obvious, received higher ratings on the feeling of closeness, attractiveness, and facial preferences. These self-similar AI agents were also more trustworthy in everyday non-serious decisions and were more likely to improve people's trustworthiness in such situations. Finally, we discuss the potential applications of our findings to design real-world AI agents.

    DOI: 10.1145/3582700.3582721

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/aughuman2/ahs2023.html#NiwaMYS23

  • Masktrap: Designing and Identifying Gestures to Transform Mask Strap into an Input Interface. Reviewed

    Takumi Yamamoto, Katsutoshi Masai, Anusha Withana, Yuta Sugiura

    Proceedings of the 28th International Conference on Intelligent User Interfaces(IUI)   762 - 775   2023.3   ISBN:9798400701061

    Language:Others   Publishing type:Research paper (other academic)   Publisher:ACM  

    Embedding technology into day-to-day wearables and creating smart devices such as smartwatches and smart-glasses has been a growing area of interest. In this paper, we explore the interaction around face masks, a common accessory worn by many to prevent the spread of infectious diseases. Particularly, we propose a method of using the straps of a face mask as an input medium. We identified a set of plausible gestures on mask straps through an elicitation study (N = 20), in which the participants proposed different gestures for a given referent. We then developed a prototype to identify the gestures performed on the mask straps and present the recognition accuracy from a user study with eight participants. Our results show the system achieves 93.07% classification accuracy for 12 gestures.

    DOI: 10.1145/3581641.3584062

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/iui/iui2023.html#YamamotoMWS23

  • Unobtrusive Refractive Power Monitoring: Using EOG to Detect Blurred Vision. Reviewed

    Xin Wei, Huakun Liu, Monica Perusquía-Hernández, Katsutoshi Masai, Naoya Isoyama, Hideaki Uchiyama, Kiyoshi Kiyokawa

    EMBC   1 - 7   2023   ISSN:1557-170X ISBN:979-8-3503-2447-1 eISSN:1558-4615

    Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS  

    The rise in population and aging has led to a significant increase in the number of individuals affected by common causes of vision loss. Early diagnosis and treatment are crucial to avoid the consequences of visual impairment. However, many visual problems are difficult to detect in their early stages. Visual adaptation can compensate for several visual deficits with adaptive eye movements. These adaptive eye movements may serve as indicators of vision loss. In this work, we investigate the association between eye movement and blurred vision. By using Electrooculography (EOG) to record eye movements, we propose a new tracking model to identify the deterioration of refractive power. We verify the technical feasibility of this method by designing a blurred vision simulation experiment. Six sets of prescription lenses and a pair of flat lenses were used to create different levels of blurring effects. We analyzed binocular movements through EOG signals and performed a seven-class classification using the ResNet18 architecture. The results revealed an average classification accuracy of 94.7% in the subject-dependent model. However, the subject-independent model presented poor performance, with the highest accuracy reaching only 34.5%. Therefore, the potential of an EOG-based visual quality monitoring system is proven. Furthermore, our experimental design provides a novel approach to assessing blurred vision.

    DOI: 10.1109/EMBC40787.2023.10341004

    Web of Science

    Scopus

    PubMed

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/embc/embc2023.html#WeiLPMIUK23

  • SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments. Reviewed

    Koki Kawamura, Shunichi Kasahara, Masaaki Fukuoka, Katsutoshi Masai, Ryota Kondo, Maki Sugimoto

    SIGGRAPH Emerging Technologies   18 - 2   2023   ISBN:9798400701542

    Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings - SIGGRAPH 2023 Emerging Technologies  

    Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are synchronously operated in different environments, the differences in environment placement make it difficult to interact with objects simultaneously. In contrast, if automatic control is performed to complement the differences and to perform a parallel task, the mismatch between the user and robotic arm movements generates visuomotor incongruence, leading to a decline in embodiment across the body. This can lead to difficulty completing tasks or achieving goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that allows a parallel operation of synchronized multiple robotic arms by assisting the arm towards which the user's gaze is not directed while maintaining the sense of embodiment over the robotic arms.

    DOI: 10.1145/3588037.3595401

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/siggraph/siggraph2023et.html#KawamuraKFMKS23

  • Assessing Individual Decision-Making Skill by Manipulating Predictive and Unpredictive Cues in a Virtual Baseball Batting Environment. Reviewed

    Yuhi Tani, Akemi Kobayashi, Katsutoshi Masai, Takehiro Fukuda, Maki Sugimoto, Toshitaka Kimura

    IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops(VRW)   775 - 776   2023   ISBN:9798350348392

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    We propose a virtual reality (VR) baseball batting system for assessing individual decision-making skill based on swing judgement of pitch types and the underlying prediction ability by manipulating combinations of pitching motion and ball trajectory cues. Our analysis of data from 10 elite baseball players revealed highly accurate swing motions in conditions during which the batter made precise swing decisions. Delays in swing motion were observed in conditions during which predictive cues were mismatched. Our findings indicated that decision-making based on pitch type influences the inherent stability of decision and accuracy of hitting, and that most batters made decisions based on pitching motion cues rather than on ball trajectory.

    DOI: 10.1109/VRW58643.2023.00230

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/vr/vr2023w.html#TaniKMFSK23

  • Consistent Smile Intensity Estimation from Wearable Optical Sensors. Reviewed

    Katsutoshi Masai, Monica Perusquía-Hernández, Maki Sugimoto, Shiro Kumano, Toshitaka Kimura

    10th International Conference on Affective Computing and Intelligent Interaction(ACII)   1 - 8   2022.9   ISBN:9781665459082

    Language:Others   Publishing type:Research paper (other academic)   Publisher:IEEE  

    Smiling plays a crucial role in human communication. It is the most frequent expression shown in daily life. Smile analysis usually employs computer vision-based methods that use data sets annotated by experts. However, cameras have space constraints in most realistic scenarios due to occlusions. Wearable electromyography is a promising alternative; however, the issue of user comfort is a barrier to long-term use. Other wearable-based methods can detect smiles, but they lack consistency because they use subjective criteria without expert annotation. We investigate a wearable-based method that uses optical sensors for consistent smile intensity estimation while reducing manual annotation cost. First, we use a state-of-the-art computer vision method (OpenFace) to train a regression model to estimate smile intensity from sensor data. Then, we compare the estimation result to that of OpenFace. We also compare their results to human annotation. The results show that the wearable method has a higher matching coefficient (r=0.67) with human annotated smile intensity than OpenFace (r=0.56). Also, when the sensor data and OpenFace output were fused, the multimodal method produced estimates closer to human annotation (r=0.74). Finally, we investigate how the synchrony of smile dynamics among subjects and their average smile intensity are correlated to assess the potential of wearable smile intensity estimation.

    DOI: 10.1109/ACII55700.2022.9953867

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/acii/acii2022.html#MasaiPSKK22

  • AnkleSens: Foot Posture Prediction Using Photo Reflective Sensors on Ankle. Reviewed

    Kosuke Kikui, Katsutoshi Masai, Tomoya Sasaki, Masahiko Inami, Maki Sugimoto

    IEEE Access   10   33111 - 33122   2022.3

    Language:Others   Publishing type:Research paper (scientific journal)   Publisher:Institute of Electrical and Electronics Engineers ({IEEE})  

    Recognizing foot gestures can be useful for subtle inputs to appliances and machines in everyday life, but for a system to be useful, it must allow users to assume various postures and work in different spaces. Camera-based and pressure-based systems have limitations in these areas. In this paper, we introduce AnkleSens, a novel ankle-worn foot sensing device that estimates a variety of foot postures using photo reflective sensors. Since our device is not placed between the foot and the floor, it can predict foot posture, even if we keep the foot floating in the air. We developed a band prototype with 16 sensors that can be wrapped around the leg above the ankle. To evaluate the performance of the proposed method, we used eight foot postures and four foot states as preliminary classes. After assessing a test dataset with the preliminary classes, we integrated the eight foot postures into five. Finally, we classified the dataset with five postures in four foot states. For the resulting 20 classes, the average classification accuracy with our proposed method was 79.57% with user-dependent training. This study showed the potential of foot posture sensing as a new subtle input method in daily life.

    DOI: 10.1109/ACCESS.2022.3158158

    Scopus

    researchmap

  • High-Speed Thermochromism Control Method Integrating Water Cooling Circuits and Electric Heating Circuits Printed with Conductive Silver Nanoparticle Ink. Reviewed

    Motoyasu Masui, Yoshinari Takegawa, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata

    Human-Computer Interaction. Technological Innovation - Thematic Area   63 ( 2 )   66 - 80   2022.2   ISSN:1882-7764 ISBN:9783031054082, 9783031054099

    Language:Others   Publishing type:Research paper (scientific journal)   Publisher:Springer  

    DOI: 10.1007/978-3-031-05409-9_6

    Scopus

    J-GLOBAL

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2022-2.html#MasuiTTSMH22

  • Virtual Reality Sonification Training System Can Improve a Novice's Forehand Return of Serve in Tennis.

    Katsutoshi Masai, Takuma Kajiyama, Tadashi Muramatsu, Maki Sugimoto, Toshitaka Kimura

    2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)   845 - 849   2022   ISBN:9781665453653

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Virtual reality (VR) is gaining interest as a platform for sports skills training. VR allows for information manipulation and feedback that would be difficult in reality. This is particularly useful in open skill sports where players must adjust their behavior in response to environmental factors. Auditory feedback (sonification) is constructive for sports training in VR. However, this has not been well studied in open skill-specific situations due to the difficulty of accounting for environmental factors in reality. This study focuses on a serve return, an important scene in tennis. It investigates the effects of sonification on the forehand return stroke in VR by comparing score displays and trajectory visualizations. We designed the sonification based on the difference between experienced and novice players' movements in VR. We then conducted a between-subjects experiment to investigate the effect of the sonification (N=20). The results showed that the system with sonification effectively improved the timing of hip movement for preparing a slow serve return compared to the system without sonification.

    DOI: 10.1109/ISMAR-Adjunct57072.2022.00182

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ismar/ismar2022a.html#MasaiKMSK22

  • PerformEyebrow: Design and Implementation of an Artificial Eyebrow Device Enabling Augmented Facial Expression.

    Motoyasu Masui, Yoshinari Takegawa, Nonoka Nitta, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata

    Human-Computer Interaction. Design and User Experience Case Studies - Thematic Area   12764 LNCS   584 - 597   2021.7

    Language:Others   Publishing type:Research paper (other academic)  

    DOI: 10.1007/978-3-030-78468-3_40

  • Digital Full-Face Mask Display with Expression Recognition using Embedded Photo Reflective Sensor Arrays

    Yoshinari Takegawa, Yutaka Tokuda, Akino Umezawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Diego Martinez Plasencia, Sriram Subramanian, Keiji Hirata

    2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)   2020.11

    Language:Others   Publishing type:Research paper (other academic)  

    DOI: 10.1109/ismar50242.2020.00030

  • Face Commands - User-Defined Facial Gestures for Smart Glasses.

    Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto

    2020 IEEE International Symposium on Mixed and Augmented Reality(ISMAR)   374 - 386   2020.10

    Language:Others   Publishing type:Research paper (other academic)  

    DOI: 10.1109/ISMAR50242.2020.00064

  • Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition.

    Katsutoshi Masai, Kai Kunze, Maki Sugimoto

    Proceedings of the Augmented Humans International Conference(AHs)   1 - 10   2020.3

    Language:Others   Publishing type:Research paper (other academic)  

    DOI: 10.1145/3384657.3384787

  • Classification of Spontaneous and Posed Smiles by Photo-reflective Sensors Embedded with Smart Eyewear.

    Chisa Saito, Katsutoshi Masai, Maki Sugimoto

    TEI '20: Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction(TEI)   45 - 52   2020.2

    Language:Others   Publishing type:Research paper (other academic)  

    DOI: 10.1145/3374920.3374936

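Illustrative code sketches

  • Several of the papers above share one pipeline: photo-reflective sensors embedded in an eyewear frame measure distance-dependent light reflected from the facial skin, and a supervised model maps each frame of sensor readings to an expression class. The following is a minimal sketch of that pipeline, not any of the published implementations; the sensor count, the random-forest model, and the synthetic data are placeholder assumptions.

    # Illustrative sketch only: expression classification from an eyewear-mounted
    # photo-reflective sensor array. Data is synthetic; the published systems'
    # sensor layouts, features, and models may differ.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    N_SENSORS = 16  # assumed sensor count on the frame
    EXPRESSIONS = ["neutral", "happy", "angry", "sad", "surprised"]

    rng = np.random.default_rng(0)
    X = rng.random((500, N_SENSORS))                 # one row = one frame of sensor readings
    y = rng.integers(0, len(EXPRESSIONS), size=500)  # per-frame labels from a calibration session

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
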
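  • The entry "Consistent Smile Intensity Estimation from Wearable Optical Sensors" describes training a regression model on sensor data with OpenFace-derived smile intensity as the label, then comparing estimates via a matching coefficient (r). Below is a minimal sketch of that evaluation loop under the same caveats: the sensor frames, the stand-in labels, and the ridge model are assumptions for illustration.

    # Illustrative sketch: regress smile intensity from sensor frames against
    # labels standing in for OpenFace smile-intensity output, report Pearson r.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    X = rng.random((1000, 16))                      # synthetic sensor frames
    labels = X @ rng.random(16) + 0.1 * rng.standard_normal(1000)  # stand-in labels

    split = 800                                     # simple train/test split
    model = Ridge(alpha=1.0).fit(X[:split], labels[:split])
    r, _ = pearsonr(model.predict(X[split:]), labels[split:])
    print(f"Pearson r against labels: {r:.2f}")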

Presentations

  • SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments International conference

    Koki Kawamura, Shunichi Kasahara, Masaaki Fukuoka, Katsutoshi Masai, Ryota Kondo, Maki Sugimoto

    ACM SIGGRAPH 2023 Emerging Technologies  2023.7 

    Event date: 2023.8

    Language:English  

    Venue:Los Angeles   Country:United States  

    Other Link: https://dl.acm.org/doi/abs/10.1145/3588037.3595401

  • Seamless Avatar Expression Transfer: Merging Camera and Smart Eyewear Embedded with Photo-Reflective Sensor Technologies International conference

    Katsutoshi Masai

    IEEE VR  2024.3 

    Event date: 2024.3

    Language:English   Presentation type:Symposium, workshop panel (public)  

    Venue:Orlando, Florida   Country:United States  

    Repository Public URL: https://hdl.handle.net/2324/7173605

  • Unobtrusive Refractive Power Monitoring: Using EOG to Detect Blurred Vision International conference

    Xin Wei, Huakun Liu, Monica Perusquía-Hernández, Katsutoshi Masai, Naoya Isoyama, Hideaki Uchiyama, Kiyoshi Kiyokawa

    Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)  2023.7 

    Event date: 2023.7

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Sydney   Country:Australia  

    Other Link: https://ieeexplore.ieee.org/abstract/document/10341004

  • Optical sensor-based smart eyewear for facial expression recognition Invited

    2nd NAIST International Symposium on Data Science  2022.10 

    Presentation type:Symposium, workshop panel (nominated)  

    researchmap

MISC

  • Investigation of the Effects of Background and Depth in AR-SSVEP Using an Optical See-Through HMD

    坪地航己, 小林明美, 正井克俊, 杉本麻樹, 木村聡貴

    Proceedings of the Virtual Reality Society of Japan Annual Conference (CD-ROM)   27th   2022   ISSN:1349-5062

  • Gesture Definition for Turning Mask Strings into Input Interface and Prototyping

    山本匠, 正井克俊, WITHANA Anusha, 杉浦裕太

    IPSJ SIG Technical Report (Web)   2022 ( EC-65 )   2022

  • Evaluation of Cognitive-Motor Skills in Baseball Batting Using a Virtual Environment

    谷湧日, 小林明美, 福田岳洋, 正井克俊, 杉本麻樹, 木村聡貴

    Proceedings of the Virtual Reality Society of Japan Annual Conference (CD-ROM)   27th   2022   ISSN:1349-5062

Industrial property rights

Patent   Number of applications: 1   Number of registrations: 0
Utility model   Number of applications: 0   Number of registrations: 0
Design   Number of applications: 0   Number of registrations: 0
Trademark   Number of applications: 0   Number of registrations: 0

Professional Memberships

  • IEEE

Committee Memberships

  • Mixed Reality Research Committee (複合現実感研究委員会)

    2025.1 - Present   

Academic Activities

  • program committee member International contribution

    the 25th ACM International Conference on Multimodal Interaction (ICMI 2023)  ( Paris, France, October 9-13, 2023 ) 2023.10

    Type:Competition, symposium, etc. 

  • program committee member International contribution

    ACM TEI 2024  ( Cork, Ireland ) 2024.2

    Type:Competition, symposium, etc. 

Research Projects

  • Construction of a Platform for Collective Micro-Expression Analysis Based on Wearable Optical Sensing

    Grant number:25K03158  2025.4 - 2030.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    Maki Sugimoto, Katsutoshi Masai, Fumihiko Nakamura

    Authorship:Coinvestigator(s)  Grant type:Scientific research funding

    This research develops methods for recognizing micro-expressions from high-temporal-resolution sensor data using wearable devices and machine learning. Using low-dimensional sensor information embedded in wearable devices such as eyeglass frames, we develop a measurement platform that does not interfere with the wearer's activities even in everyday environments, and we analyze the facial expression information of multiple people to quantitatively evaluate co-presence and empathy. We also demonstrate that the developed platform functions in diverse environments through analyses of social interaction in cyber-physical spaces.

  • Manipulation theory applying ambiguity between Umwelt and sound cognition

    Grant number:24H00892  2024.4 - 2027.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Transformative Research Areas (B)

    善甫 啓一, 福嶋 政期, 正井 克俊, 若槻 尚斗, 矢野 博明, 三浦 智史, 高橋 裕紀, 伴 祐樹

    Authorship:Coinvestigator(s)  Grant type:Scientific research funding

    This research uses speech-recognition AI, conversational AI, and generative AI to capture each individual's Umwelt, which normally cannot be observed by others, as data in a VR space, and by defining the ambiguity of each of its regions, explores fundamental methods of sound-based Umwelt manipulation for cognitive modification and intervention. By making it possible to observe the Umwelt of anyone who can converse in natural language and to compute ways of correcting undesirable cognitive distortions, we aim at therapeutic applications together with group A02 (Ino). With group A03 (Sakaguchi), we verify the effect of Umwelt manipulation by sound stimulation during sleep.

  • Construction of Technology for Measuring Natural Facial Expressions in Daily Life Using a Wearable Device

    Grant number:18J12580  2018 - 2019

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for JSPS Fellows

    Grant type:Scientific research funding

Educational Activities

  • Data Science Practice I-IV
    Advanced Data Science I-II
    Programming Exercises

Class subject

  • Advanced Data Science I-II

    2024.10 - 2025.3   Second semester

  • Programming Exercises (P)

    2024.6 - 2024.8   Summer quarter

  • Data Science Practice I-IV

    2024.4 - 2024.9   First semester

  • Programming Exercises (P)

    2025.6 - 2025.8   Summer quarter


FD Participation

  • 2023.11   Role:Participation   Title:[ISEE FD] Toward Increasing Joint Research with Companies and Other Organizations

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.10   Role:Participation   Title:[ISEE FD] On the Center for Value-Creating Semiconductor Human Resource Development

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.7   Role:Participation   Title:[ISEE FD] Research Introductions by Young Faculty Members (9)

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.6   Role:Participation   Title:[ISEE FD] Introduction to the Activities of SBRC and QREC

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.5   Role:Participation   Title:[ISEE FD] On the DX Education Underway in the Faculty of Agriculture

    Organizer:[Undergraduate school/graduate school/graduate faculty]

  • 2023.4   Role:Participation   Title:The 1st All-University FD (Training for New Faculty Members) in FY2023

    Organizer:University-wide


Visiting, concurrent, or part-time lecturers at other universities, institutions, etc.

  • 2024  Keio University, Faculty of Science and Technology, Department of Information and Computer Science  Classification:Affiliate faculty  Domestic/International Classification:Japan 

    Semester, Day Time or Duration:Full year

  • 2023  Keio University, Faculty of Science and Technology, Department of Information and Computer Science  Classification:Affiliate faculty  Domestic/International Classification:Japan 

    Semester, Day Time or Duration:Full year

Travel Abroad

  • 2015.12 - 2016.2

    Staying country name 1:Australia   Staying institution name 1:University of South Australia