Updated 2024/09/27

Announcements

 


Katsutoshi Masai (正井 克俊, マサイ カツトシ)
Affiliation
Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology, Assistant Professor
School of Engineering, Department of Electrical Engineering and Computer Science (concurrent)
Graduate School of Information Science and Electrical Engineering, Department of Information Science and Technology (concurrent)
Title
Assistant Professor
Contact
Email address
Phone number
090-6370-5657
Profile
Wearable facial expression recognition using optical sensors; methods for facilitating communication in VR; HCI/VR technologies to support sports skill acquisition
External Links

Degree

  • Doctor of Engineering

Research Themes and Keywords

  • Research theme: Wearable facial expression recognition systems; HCI/VR/IoT systems to support skill acquisition

    Research keywords: Human Computer Interaction, Affective Computing, Wearable Computing

    Research period: June 2023 - December 2025

Awards

  • The 19th Virtual Reality Society of Japan Paper Award

    September 2017   Virtual Reality Society of Japan   Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto, "AffectiveWear: An Eyewear Device That Recognizes the Wearer's Everyday Facial Expressions" (Transactions of the Virtual Reality Society of Japan, Vol. 21, No. 2, pp. 385-394, 2016)

Papers

  • Seamless Avatar Expression Transfer: Merging Camera and Smart Eyewear Embedded with Photo-Reflective Sensor Technologies

    Masai Katsutoshi

    March 2024

    Language: English

    Virtual Reality (VR) offers new ways to interact that are different from usual communication in the real world. In VR, avatars are key displays for showing users’ feelings and personalities non-verbally. Addressing the need for more convenient and continuous expression representation, we introduce a novel method for facial expression transfer in VR. This method integrates camera systems with photoreflective sensors embedded in eyewear, overcoming the limitations of traditional camera-based tracking. By offering smoother tracking and reducing manual calibration needs, this approach highlights the potential of multimodal technology to enhance non-verbal communication in virtual environments. Building on this, we demonstrated the example implementation of smile transfer and discussed future direction.

    CiNii Research

  • Analysis of the Effect of Facial Self-Similarity on the Impression of AI Agents (peer-reviewed)

    Masayasu Niwa, Katsutoshi Masai, Shigeo Yoshida, Maki Sugimoto

    IPSJ Journal (情報処理学会論文誌)   January 2024

    Language: Japanese   Type: Research paper (academic journal)

  • Seamless Avatar Expression Transfer: Merging Camera and Smart Eyewear Embedded with Photo-Reflective Sensor Technologies

    Masai, K

    2024 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW 2024   591 - 593   2024   ISBN:979-8-3503-7450-6

    Publisher: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024

    Virtual Reality (VR) offers new ways to interact that are different from usual communication in the real world. In VR, avatars are key displays for showing users' feelings and personalities non-verbally. Addressing the need for more convenient and continuous expression representation, we introduce a novel method for facial expression transfer in VR. This method integrates camera systems with photo-reflective sensors embedded in eyewear, overcoming the limitations of traditional camera-based tracking. By offering smoother tracking and reducing manual calibration needs, this approach highlights the potential of multimodal technology to enhance non-verbal communication in virtual environments. Building on this, we demonstrated the example implementation of smile transfer and discussed future direction.

    DOI: 10.1109/VRW62533.2024.00114

    Web of Science

    Scopus

  • Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition With Photo-Reflective Sensors Embedded in a Head-Mounted Display

    Nakamura F., Murakami M., Suzuki K., Fukuoka M., Masai K., Sugimoto M.

    IEEE Transactions on Visualization and Computer Graphics   29 ( 10 )   4124 - 4139   October 2023   ISSN:1941-0506

    Language: Other   Type: Research paper (academic journal)   Publisher: IEEE Transactions on Visualization and Computer Graphics

    As one of the facial expression recognition techniques for Head-Mounted Display (HMD) users, embedded photo-reflective sensors have been used. In this paper, we investigate how gaze and face directions affect facial expression recognition using the embedded photo-reflective sensors. First, we collected a dataset of five facial expressions (Neutral, Happy, Angry, Sad, Surprised) while looking in diverse directions by moving 1) the eyes and 2) the head. Using the dataset, we analyzed the effect of gaze and face directions by constructing facial expression classifiers in five ways and evaluating the classification accuracy of each classifier. The results revealed that the single classifier that learned the data for all gaze points achieved the highest classification performance. Then, we investigated which facial part was affected by the gaze and face direction. The results showed that the gaze directions affected the upper facial parts, while the face directions affected the lower facial parts. In addition, by removing the bias of facial expression reproducibility, we investigated the pure effect of gaze and face directions in three conditions. The results showed that, in terms of gaze direction, building classifiers for each direction significantly improved the classification accuracy. However, in terms of face directions, there were slight differences between the classifier conditions. Our experimental results implied that multiple classifiers corresponding to multiple gaze and face directions improved facial expression recognition accuracy, but collecting the data of the vertical movement of gaze and face is a practical solution to improving facial expression recognition accuracy.

    DOI: 10.1109/TVCG.2022.3179766

    Scopus

    researchmap

  • Investigating Effects of Facial Self-Similarity Levels on the Impression of Virtual Agents in Serious/Non-Serious Contexts

    Niwa M., Masai K., Yoshida S., Sugimoto M.

    ACM International Conference Proceeding Series   221 - 230   March 2023   ISBN:9781450399845

    Language: Other   Type: Research paper (other academic conference materials)   Publisher: ACM International Conference Proceeding Series

    Recent technological advances have enabled the use of AI agents to assist with human tasks and augment human cognitive abilities in a variety of contexts, including decision making. It is critical that users trust these AI agents in order to use them effectively. Given that people tend to trust other people who are similar to themselves, incorporating features of one's own face into the AI agent's face may improve one's trust in the AI agent. However, it is still unclear how impressions differ when comparing agents with the same appearance as one's own and some similarities under the same conditions. Recognizing the appropriate level of similarity when using a self-similar agent is important for establishing a trustworthy agent relationship between people and the AI agent. Therefore, we investigated the effect of the degree of self-similarity of the face of the AI agent on the user's trust in the agent. We examined users' impressions of four AI agents with different degrees of face self-similarity in different scenarios. The results showed that the AI agent, whose similarity to the user's facial feature was slightly recognizable but not obvious, received higher ratings on the feeling of closeness, attractiveness, and facial preferences. These self-similar AI agents were also more trustworthy in everyday non-serious decisions and were more likely to improve people's trustworthiness in such situations. Finally, we discuss the potential applications of our findings to design real-world AI agents.

    DOI: 10.1145/3582700.3582721

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/aughuman2/ahs2023.html#NiwaMYS23

  • Masktrap: Designing and Identifying Gestures to Transform Mask Strap into an Input Interface

    Yamamoto T., Masai K., Withana A., Sugiura Y.

    International Conference on Intelligent User Interfaces, Proceedings IUI   762 - 775   March 2023   ISBN:9798400701061

    Language: Other   Type: Research paper (other academic conference materials)   Publisher: International Conference on Intelligent User Interfaces, Proceedings IUI

    Embedding technology into day-to-day wearables and creating smart devices such as smartwatches and smart-glasses has been a growing area of interest. In this paper, we explore the interaction around face masks, a common accessory worn by many to prevent the spread of infectious diseases. Particularly, we propose a method of using the straps of a face mask as an input medium. We identified a set of plausible gestures on mask straps through an elicitation study (N = 20), in which the participants proposed different gestures for a given referent. We then developed a prototype to identify the gestures performed on the mask straps and present the recognition accuracy from a user study with eight participants. Our results show the system achieves 93.07% classification accuracy for 12 gestures.

    DOI: 10.1145/3581641.3584062

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/iui/iui2023.html#YamamotoMWS23

  • Unobtrusive Refractive Power Monitoring: Using EOG to Detect Blurred Vision

    Wei, X; Liu, HK; Perusquía-Hernández, M; Masai, K; Isoyama, N; Uchiyama, H; Kiyokawa, K

    2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC   1 - 7   2023   ISSN:1557-170X ISBN:979-8-3503-2447-1 eISSN:1558-4615

    Type: Research paper (international conference proceedings)   Publisher: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS

    The rise in population and aging has led to a significant increase in the number of individuals affected by common causes of vision loss. Early diagnosis and treatment are crucial to avoid the consequences of visual impairment. However, many visual problems are difficult to detect in their early stages. Visual adaptation can compensate for several visual deficits with adaptive eye movements. These adaptive eye movements may serve as indicators of vision loss. In this work, we investigate the association between eye movement and blurred vision. By using Electrooculography (EOG) to record eye movements, we propose a new tracking model to identify the deterioration of refractive power. We verify the technical feasibility of this method by designing a blurred vision simulation experiment. Six sets of prescription lenses and a pair of flat lenses were used to create different levels of blurring effects. We analyzed binocular movements through EOG signals and performed a seven-class classification using the ResNet18 architecture. The results revealed an average classification accuracy of 94.7% in the subject-dependent model. However, the subject-independent model presented poor performance, with the highest accuracy reaching only 34.5%. Therefore, the potential of an EOG-based visual quality monitoring system is proven. Furthermore, our experimental design provides a novel approach to assessing blurred vision.

    DOI: 10.1109/EMBC40787.2023.10341004

    Web of Science

    Scopus

    PubMed

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/embc/embc2023.html#WeiLPMIUK23

  • SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments

    Kawamura K., Kasahara S., Fukuoka M., Masai K., Kondo R., Sugimoto M.

    Proceedings - SIGGRAPH 2023 Emerging Technologies   18 - 2   2023   ISBN:9798400701542

    Type: Research paper (international conference proceedings)   Publisher: Proceedings - SIGGRAPH 2023 Emerging Technologies

    Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are synchronously operated in different environments, the differences in environment placement make it difficult to interact with objects simultaneously. In contrast, if automatic control is performed to complement the differences and to perform a parallel task, the mismatch between the user and robotic arm movements generates visuomotor incongruence, leading to a decline in embodiment across the body. This can lead to difficulty completing tasks or achieving goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that allows a parallel operation of synchronized multiple robotic arms by assisting the arm towards which the user's gaze is not directed while maintaining the sense of embodiment over the robotic arms.

    DOI: 10.1145/3588037.3595401

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/siggraph/siggraph2023et.html#KawamuraKFMKS23

  • Assessing Individual Decision-Making Skill by Manipulating Predictive and Unpredictive Cues in a Virtual Baseball Batting Environment

    Tani Y., Kobayashi A., Masai K., Fukuda T., Sugimoto M., Kimura T.

    Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023   775 - 776   2023   ISBN:9798350348392

    Type: Research paper (international conference proceedings)   Publisher: Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023

    We propose a virtual reality (VR) baseball batting system for assessing individual decision-making skill based on swing judgement of pitch types and the underlying prediction ability by manipulating combinations of pitching motion and ball trajectory cues. Our analysis of data from 10 elite baseball players revealed highly accurate swing motions in conditions during which the batter made precise swing decisions. Delays in swing motion were observed in conditions during which predictive cues were mismatched. Our findings indicated that decision-making based on pitch type influences the inherent stability of decision and accuracy of hitting, and that most batters made decisions based on pitching motion cues rather than on ball trajectory.

    DOI: 10.1109/VRW58643.2023.00230

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/vr/vr2023w.html#TaniKMFSK23

  • Consistent Smile Intensity Estimation from Wearable Optical Sensors

    Katsutoshi Masai, Monica Perusquía-Hernández, Maki Sugimoto, Shiro Kumano, Toshitaka Kimura

    2022 10th International Conference on Affective Computing and Intelligent Interaction, ACII 2022   1 - 8   September 2022   ISBN:9781665459082

    Language: Other   Type: Research paper (other academic conference materials)   Publisher: 2022 10th International Conference on Affective Computing and Intelligent Interaction, ACII 2022

    Smiling plays a crucial role in human communication. It is the most frequent expression shown in daily life. Smile analysis usually employs computer vision-based methods that use data sets annotated by experts. However, cameras have space constraints in most realistic scenarios due to occlusions. Wearable electromyography is a promising alternative; however, the issue of user comfort is a barrier to long-term use. Other wearable-based methods can detect smiles, but they lack consistency because they use subjective criteria without expert annotation. We investigate a wearable-based method that uses optical sensors for consistent smile intensity estimation while reducing manual annotation cost. First, we use a state-of-the-art computer vision method (OpenFace) to train a regression model to estimate smile intensity from sensor data. Then, we compare the estimation result to that of OpenFace, and we also compare both results to human annotation. The results show that the wearable method has a higher matching coefficient (r=0.67) with human annotated smile intensity than OpenFace (r=0.56). Also, when the sensor data and OpenFace output were fused, the multimodal method produced estimates closer to human annotation (r=0.74). Finally, we investigate how the synchrony of smile dynamics among subjects and their average smile intensity are correlated to assess the potential of wearable smile intensity estimation.

    DOI: 10.1109/ACII55700.2022.9953867

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/acii/acii2022.html#MasaiPSKK22
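The evaluation protocol this abstract describes — scoring each estimator by its Pearson correlation with human-annotated smile intensity, then fusing two modalities and scoring the fusion the same way — can be sketched as follows. This is a minimal illustration on synthetic data; the variable names and the simple averaging fusion are assumptions for demonstration, not the paper's actual pipeline.

```python
import numpy as np

def pearson_r(a, b):
    # Pearson correlation coefficient between two 1-D arrays
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
human = rng.uniform(0.0, 1.0, 200)            # stand-in for human-annotated smile intensity
wearable = human + rng.normal(0.0, 0.3, 200)  # stand-in for the wearable sensor estimate
camera = human + rng.normal(0.0, 0.4, 200)    # stand-in for a camera (OpenFace-like) estimate
fused = (wearable + camera) / 2.0             # naive late fusion of the two estimates

print("wearable vs human:", pearson_r(wearable, human))
print("fused vs human:   ", pearson_r(fused, human))
```

With independent noise on each modality, the averaged estimate typically correlates better with the reference than either modality alone, which mirrors the paper's reported gain from multimodal fusion.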

  • AnkleSens: Foot Posture Prediction Using Photo Reflective Sensors on Ankle

    Kikui K., Masai K., Sasaki T., Inami M., Sugimoto M.

    IEEE Access   10   33111 - 33122   March 2022

    Language: Other   Type: Research paper (academic journal)   Publisher: IEEE Access

    Recognizing foot gestures can be useful for subtle inputs to appliances and machines in everyday life, but for a system to be useful, it must allow users to assume various postures and work in different spaces. Camera-based and pressure-based systems have limitations in these areas. In this paper, we introduce AnkleSens, a novel ankle-worn foot sensing device that estimates a variety of foot postures using photo reflective sensors. Since our device is not placed between the foot and the floor, it can predict foot posture, even if we keep the foot floating in the air. We developed a band prototype with 16 sensors that can be wrapped around the leg above the ankle. To evaluate the performance of the proposed method, we used eight foot postures and four foot states as preliminary classes. After assessing a test dataset with the preliminary classes, we integrated the eight foot postures into five. Finally, we classified the dataset with five postures in four foot states. For the resulting 20 classes, the average classification accuracy with our proposed method was 79.57% with user-dependent training. This study showed the potential of foot posture sensing as a new subtle input method in daily life.

    DOI: 10.1109/ACCESS.2022.3158158

    Scopus

    researchmap
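As a rough illustration of the user-dependent posture classification the abstract describes (16 photo-reflective sensor readings in, one of several posture classes out), here is a minimal nearest-centroid sketch on synthetic data. The classifier, data, and all names here are illustrative assumptions; the paper's actual method and dataset differ.

```python
import numpy as np

N_SENSORS = 16  # AnkleSens wraps 16 photo-reflective sensors above the ankle

def fit_centroids(X, y):
    # "Training": store the mean sensor pattern of each posture class
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(classes, centroids, X):
    # Assign each sample to the class with the nearest centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic user-dependent data: one noisy sensor pattern per posture class
rng = np.random.default_rng(1)
n_classes, n_per_class = 5, 40
prototypes = rng.uniform(0.0, 1.0, (n_classes, N_SENSORS))
X = np.repeat(prototypes, n_per_class, axis=0)
X = X + rng.normal(0.0, 0.05, X.shape)
y = np.repeat(np.arange(n_classes), n_per_class)

classes, centroids = fit_centroids(X, y)
acc = float((predict(classes, centroids, X) == y).mean())
print("training accuracy:", acc)
```

The point of the sketch is the data shape, not the model: each posture is a characteristic 16-dimensional reflection pattern, so even a very simple distance-based classifier separates well-spread patterns for a single user.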

  • A Fast Thermochromism Control Method Integrating a Water-Cooling Circuit and an Electrothermal Circuit Printed with Conductive Silver Nanoparticle Ink

    Motoyasu Masui, Yoshinari Takegawa, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata

    IPSJ Journal (Web)   63 ( 2 )   66 - 80   February 2022   ISSN:1882-7764 ISBN:9783031054082, 9783031054099

    Language: Other   Type: Research paper (academic journal)   Publisher: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

    With the widespread use of inkjet-printable conductive silver nanoparticle inks, lightweight, thin, and portable wearable displays that combine electrothermal circuit patterns and thermochromic inks have attracted much attention in recent years. Thermochromic displays, which undergo reversible color change according to temperature change, have the problem of low responsiveness due to the delay of heating and cooling. In this study, we propose a high-speed thermochromism control method that integrates a water-cooling circuit and an electric heating circuit using silver nanoparticle ink printing. As an evaluation experiment, we compared the cooling time of an electro-thermal pattern with and without the water-cooling circuit and verified the usefulness of the proposed method. In addition, we have developed applications such as PerformEyebrow, an artificial eyebrow device that extends facial expressions, and dynamic masks and questionnaires based on thermochromism, which demonstrate the potential of our high-speed color control method as a new media technology.

    DOI: 10.1007/978-3-031-05409-9_6

    Scopus

    J-GLOBAL

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/hci/hci2022-2.html#MasuiTTSMH22

  • Virtual Reality Sonification Training System Can Improve a Novice's Forehand Return of Serve in Tennis

    Masai K., Kajiyama T., Muramatsu T., Sugimoto M., Kimura T.

    Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2022   845 - 849   2022   ISBN:9781665453653

    Type: Research paper (international conference proceedings)   Publisher: Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2022

    Virtual reality (VR) is gaining interest as a platform for sports skills training. VR allows for information manipulation and feedback that would be difficult in reality. This is particularly useful in open skill sports where players must adjust their behavior in response to environmental factors. Auditory feedback (sonification) is constructive for sports training in VR. However, this has not been well studied in open skill-specific situations due to the difficulty of accounting for environmental factors in reality. This study focuses on a serve return, an important scene in tennis. It investigates the effects of sonification on the forehand return stroke in VR by comparing score displays and trajectory visualizations. We designed the sonification based on the difference between experienced and novice players' movements in VR. We then conducted a between-subjects experiment to investigate the effect of the sonification (N=20). The results showed that the system with sonification effectively improved the timing of hip movement for preparing a slow serve return compared to the system without sonification.

    DOI: 10.1109/ISMAR-Adjunct57072.2022.00182

    Scopus

    researchmap

    その他リンク: https://dblp.uni-trier.de/db/conf/ismar/ismar2022a.html#MasaiKMSK22
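Sonification of the kind discussed above comes down to mapping a movement parameter onto an audible one. A minimal sketch, assuming a simple linear error-to-pitch mapping and a sine-tone renderer — both illustrative assumptions, not the paper's actual sound design:

```python
import numpy as np

def error_to_pitch(error, f_min=220.0, f_max=880.0, max_error=0.5):
    # Map a timing error (seconds) linearly onto a tone frequency (Hz);
    # errors beyond max_error are clipped to the highest pitch.
    e = min(abs(error), max_error) / max_error
    return f_min + e * (f_max - f_min)

def tone(freq, duration=0.2, sample_rate=44100):
    # Render the feedback tone as raw sine samples
    t = np.arange(int(duration * sample_rate)) / sample_rate
    return np.sin(2.0 * np.pi * freq * t)

print(error_to_pitch(0.0))   # perfect timing -> 220.0
print(error_to_pitch(0.5))   # large error -> 880.0
samples = tone(error_to_pitch(0.25))
```

A continuous mapping like this lets the trainee hear how far their hip preparation deviates from the reference movement on every trial, without looking away from the ball.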

  • PerformEyebrow: Design and Implementation of an Artificial Eyebrow Device Enabling Augmented Facial Expression.

    Motoyasu Masui, Yoshinari Takegawa, Nonoka Nitta, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata

    Human-Computer Interaction. Design and User Experience Case Studies - Thematic Area   12764 LNCS   584 - 597   July 2021

    Language: Other   Type: Research paper (other academic conference materials)

    DOI: 10.1007/978-3-030-78468-3_40

  • Digital Full-Face Mask Display with Expression Recognition using Embedded Photo Reflective Sensor Arrays

    Yoshinari Takegawa, Yutaka Tokuda, Akino Umezawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Diego Martinez Plasencia, Sriram Subramanian, Keiji Hirata

    2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)   November 2020

    Language: Other   Type: Research paper (other academic conference materials)

    DOI: 10.1109/ismar50242.2020.00030

  • Face Commands - User-Defined Facial Gestures for Smart Glasses.

    Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto

    2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)   374 - 386   October 2020

    Language: Other   Type: Research paper (other academic conference materials)

    DOI: 10.1109/ISMAR50242.2020.00064

  • Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition.

    Katsutoshi Masai, Kai Kunze, Maki Sugimoto

    Proceedings of the Augmented Humans International Conference (AHs)   1 - 10   March 2020

    Language: Other   Type: Research paper (other academic conference materials)

    DOI: 10.1145/3384657.3384787

  • Classification of Spontaneous and Posed Smiles by Photo-reflective Sensors Embedded with Smart Eyewear.

    Chisa Saito, Katsutoshi Masai, Maki Sugimoto

    TEI '20: Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI)   45 - 52   February 2020

    Language: Other   Type: Research paper (other academic conference materials)

    DOI: 10.1145/3374920.3374936


Lectures and Oral Presentations

  • SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments (international conference)

    Koki Kawamura, Shunichi Kasahara, Masaaki Fukuoka, Katsutoshi Masai, Ryota Kondo, Maki Sugimoto

    ACM SIGGRAPH 2023 Emerging Technologies  July 2023

    Date held: July 2023

    Language: English

    Country: United States

    Other link: https://dl.acm.org/doi/abs/10.1145/3588037.3595401

  • Seamless Avatar Expression Transfer: Merging Camera and Smart Eyewear Embedded with Photo-Reflective Sensor Technologies (international conference)

    Katsutoshi Masai

    IEEE VR  March 2024

    Date held: March 2024

    Language: English   Type: Symposium/workshop panel (open call)

    Country: United States

    Repository URL: https://hdl.handle.net/2324/7173605

  • Unobtrusive Refractive Power Monitoring: Using EOG to Detect Blurred Vision (international conference)

    Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)  July 2023

    Date held: July 2023

    Language: English   Type: Oral presentation (general)

    Country: Australia

    Other link: https://ieeexplore.ieee.org/abstract/document/10341004

MISC

  • Investigating the Effects of Background and Depth in AR-SSVEP Using an Optical See-Through HMD

    坪地航己, 小林明美, 正井克俊, 杉本麻樹, 木村聡貴

    Proceedings of the 27th Annual Conference of the Virtual Reality Society of Japan (CD-ROM)   2022   ISSN:1349-5062

  • Investigating Gestures and Developing a Prototype Toward Turning Mask Straps into an Input Interface

    Takumi Yamamoto, Katsutoshi Masai, Anusha Withana, Yuta Sugiura

    IPSJ SIG Technical Report (Web)   2022 ( EC-65 )   2022

  • Evaluating Cognitive-Motor Skills in Baseball Batting Using a Virtual Environment

    谷湧日, 小林明美, 福田岳洋, 正井克俊, 杉本麻樹, 木村聡貴

    Proceedings of the 27th Annual Conference of the Virtual Reality Society of Japan (CD-ROM)   2022   ISSN:1349-5062

Industrial Property Rights

Patents   Applications filed: 1   Registrations: 0
Utility models   Applications filed: 0   Registrations: 0
Design rights   Applications filed: 0   Registrations: 0
Trademarks   Applications filed: 0   Registrations: 0

Academic Society Memberships

  • IEEE

Academic Contribution Activities

  • Program committee member (international academic contribution)

    The 25th ACM International Conference on Multimodal Interaction (ICMI 2023)  ( Paris, France, October 9-13, 2023 )   October 2023

    Type: Conference, symposium, etc.

  • Program committee member (international academic contribution)

    ACM TEI 2024  ( Cork, Ireland )   February 2024

    Type: Conference, symposium, etc.

Research Projects (Joint Research, Competitive Funding, etc.)

  • Building Technology for Measuring Natural Everyday Facial Expressions Using Wearable Devices

    Project/Area Number: 18J12580  2018 - 2019

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)

    Funding type: KAKENHI

Overview of Educational Activities

  • Data Science Practice I-IV
    Data Science Advanced I, II

Courses Taught

  • Data Science Advanced I-II

    October 2024 - March 2025   Second semester

  • Programming Exercise (P)

    June 2024 - August 2024   Summer term

  • Data Science Practice I-IV

    April 2024 - September 2024   First semester

FD Participation

  • November 2023   Role: Participant   Title: [ISEE FD] Toward Increasing Joint Research with Companies and Other Partners

    Organizer: Department

  • October 2023   Role: Participant   Title: [ISEE FD] About the Center for Value-Creating Semiconductor Human Resources Development

    Organizer: Department

  • July 2023   Role: Participant   Title: [ISEE FD] Research Introductions by Young Faculty Members (9)

    Organizer: Department

  • June 2023   Role: Participant   Title: [ISEE FD] Introduction to the Activities of SBRC and QREC

    Organizer: Department

  • May 2023   Role: Participant   Title: [ISEE FD] DX Education in the Faculty of Agriculture

    Organizer: Department

  • April 2023   Role: Participant   Title: The 1st All-University FD (Training for New Faculty Members) in FY2023

    Organizer: University-wide


Visiting, Adjunct, and Part-Time Positions at Other Universities and Institutions

  • 2024  Keio University, Department of Information and Computer Science  Category: Visiting faculty member  Domestic/Overseas: Domestic

    Term: Full year

  • 2023  Keio University, Department of Information and Computer Science  Category: Visiting faculty member  Domestic/Overseas: Domestic

    Term: Full year

Overseas Travel History

  • December 2015 - February 2016

    Country: Australia   Institution: University of South Australia

University Committees and Administrative Roles

  • April 2023 - May 2024   Center for Mathematics and Data Science Education and Research   Cooperating faculty member