JAIST Repository
Please use this identifier to cite or link to this item: http://hdl.handle.net/10119/16709
Title: Multimodal Feature Fusion for Human Personality Traits Classification
Authors: Shen, Zhihao; Elibol, Armagan; Chong, Nak Young
Issue Date: 2020-06
Publisher: Korea Robotics Society
Magazine name: Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR)
Abstract: As in human-human social interaction, inferring a user's personality traits plays an important role in human-robot interaction, and robots need to be endowed with this capability to better engage users. In this study, we present our ongoing research on obtaining variable-length multimodal features and fusing them to enable social robots to infer human personality traits during face-to-face human-robot interaction. Multimodal nonverbal features, including head motion, face direction, body motion, voice pitch, voice energy, and Mel-frequency cepstral coefficients (MFCCs), were extracted from video and audio recorded during the interaction. Different combinations of the multimodal features were evaluated, and their classification performance was compared.
Rights: Zhihao Shen, Armagan Elibol, Nak Young Chong, Multimodal Feature Fusion for Human Personality Traits Classification, Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR), Kyoto, Japan, June 22-26, Late Breaking Results Paper, 2020. This material is posted here with permission of Korea Robotics Society (KROS).
URI: http://hdl.handle.net/10119/16709
Material Type: author
Appears in Collections: b11-1. Conference Papers and Presentation Materials (Conference Papers)
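The abstract outlines a concrete pipeline: extract variable-length nonverbal features (head motion, face direction, body motion, voice pitch, voice energy, MFCCs), fuse them, and classify personality traits. The paper itself does not specify the implementation; the following is a minimal sketch, not the authors' method, assuming librosa for audio features, scikit-learn for the classifier, and a hypothetical precomputed visual-feature array. It illustrates one common fusion scheme: summarizing each variable-length stream with statistics, then concatenating the modalities (early fusion).

```python
# Illustrative sketch only, not the authors' pipeline: statistics-based early
# fusion of audio and visual nonverbal features, assuming librosa and
# scikit-learn. File paths, the visual feature array, and the SVM choice are
# hypothetical stand-ins.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def audio_features(wav_path: str) -> np.ndarray:
    """Summarize a variable-length recording into a fixed-length vector
    built from MFCCs, voice pitch (F0), and voice energy (RMS)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # shape (13, T)
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # shape (T,), NaN when unvoiced
    rms = librosa.feature.rms(y=y)[0]                         # shape (T,)
    f0 = f0[~np.isnan(f0)]
    if f0.size == 0:
        f0 = np.zeros(1)  # fall back if no voiced frames were detected
    # Mean/std statistics turn each variable-length stream into fixed-length features.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [f0.mean(), f0.std(), rms.mean(), rms.std()],
    ])

def fuse(audio_vec: np.ndarray, visual_frames: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate fixed-length summaries of both modalities.
    `visual_frames` is a hypothetical (T, D) array of per-frame head/body
    motion and face-direction features."""
    visual_vec = np.concatenate([visual_frames.mean(axis=0), visual_frames.std(axis=0)])
    return np.concatenate([audio_vec, visual_vec])

# Hypothetical usage: each row of X is a fused vector for one interaction
# clip, and y holds binary labels for one trait (e.g., high/low extraversion).
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```

Under these assumptions, comparing different combinations of multimodal features, as the abstract describes, would amount to fitting the classifier on different subsets of the concatenated summary vectors.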
Files in This Item:
File: C4_UR20_0190_FI.pdf (Adobe PDF, 102 KB)
All items in DSpace are protected by copyright, with all rights reserved.