JAIST Repository

Please use this identifier to cite this item: http://hdl.handle.net/10119/16709

Title: Multimodal Feature Fusion for Human Personality Traits Classification
Authors: Shen, Zhihao
Elibol, Armagan
Chong, Nak Young
Issue date: 2020-06
Publisher: Korea Robotics Society
Published in: Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR)
Abstract: As in human-human social interaction, inferring a user's personality traits plays an important role in human-robot interaction, and robots need to be endowed with this capability to better engage users. In this study, we present our ongoing research on obtaining variable-length multimodal features and fusing them to enable social robots to infer human personality traits during face-to-face human-robot interaction. Multimodal nonverbal features, including head motion, face direction, body motion, voice pitch, voice energy, and Mel-frequency cepstral coefficients (MFCCs), were extracted from video and audio recorded during the interaction. Different combinations of the multimodal features were evaluated, and their classification performance was compared.
Rights: Zhihao Shen, Armagan Elibol, Nak Young Chong, Multimodal Feature Fusion for Human Personality Traits Classification, Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR), Kyoto, Japan, June 22-26, Late Breaking Results Paper, 2020. This material is posted here with permission of Korea Robotics Society (KROS).
URI: http://hdl.handle.net/10119/16709
Material type: author
Appears in collections: b11-1. Conference Papers
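
The paper is distributed as a PDF with no accompanying code; the following is a minimal, hypothetical Python sketch of the feature-level (early) fusion the abstract describes: variable-length per-modality signals (voice pitch, voice energy, and MFCCs on the audio side; a stand-in vector for the head-motion, face-direction, and body-motion cues) are summarized to fixed length, concatenated, and passed to a single classifier. The library choices (librosa, scikit-learn), the SVM classifier, and every name, shape, and parameter below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of early (feature-level) multimodal fusion for
# personality trait classification. Not the authors' code; all names,
# shapes, and parameters are illustrative assumptions.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def audio_features(y, sr):
    """Summarize a variable-length waveform as one fixed-length vector:
    mean and std of pitch (F0), RMS energy, and 13 MFCCs."""
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)        # voice pitch track
    rms = librosa.feature.rms(y=y)[0]                    # voice energy track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    stats = lambda a: np.hstack([np.mean(a, axis=-1), np.std(a, axis=-1)])
    return np.hstack([stats(f0), stats(rms), stats(mfcc)])  # 30-dim

def fuse(audio_vec, visual_vec):
    """Early fusion: concatenate per-modality vectors into one feature
    vector. `visual_vec` stands in for head-motion, face-direction, and
    body-motion features produced by a separate vision pipeline."""
    return np.hstack([audio_vec, visual_vec])

# Synthetic stand-in data: 20 "sessions", each with 2 s of audio and a
# 10-dim visual vector, labeled with one binary trait (e.g., high/low
# extraversion). Real inputs would be the recorded interaction sessions.
rng = np.random.default_rng(0)
sr = 16000
X = np.stack([
    fuse(audio_features(rng.normal(scale=0.1, size=2 * sr), sr),
         rng.normal(size=10))
    for _ in range(20)
])
y = rng.integers(0, 2, size=20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Comparing different feature combinations, as the abstract mentions, would then amount to re-running this pipeline with different subsets of the concatenated vector.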

Files in this item:

File                 Description   Size     Format
C4_UR20_0190_FI.pdf                102 Kb   Adobe PDF

All items in this repository are protected by copyright.

Contact: Library Information Unit, Research Promotion Division, Japan Advanced Institute of Science and Technology