JAIST Repository
Please use this identifier to cite or link to this item: http://hdl.handle.net/10119/18471
Title: | Multi-modal Feature Fusion for Better Understanding of Human Personality Traits in Social Human-Robot Interaction |
Authors: | Shen, Zhihao; Elibol, Armagan; Chong, Nak Young |
Keywords: | human-robot interaction; human personality traits; multi-modal feature fusion; machine learning |
Issue Date: | 2021-08-17 |
Publisher: | Elsevier |
Journal name: | Robotics and Autonomous Systems |
Volume: | 146 |
Start page: | 103874 |
DOI: | 10.1016/j.robot.2021.103874 |
Abstract: | As human-robot interaction becomes increasingly prevalent in our daily lives, there is a great demand for enabling robots to better understand human personality traits and for inspiring humans to engage more deeply in interactions with robots. In this work, designing the human-robot interaction paradigm to be as close to real situations as possible, we address three main problems: (1) fusion of visual and audio features of human interaction modalities, (2) integration of variable-length feature vectors, and (3) compensation for camera shake caused by the robot's communicative gestures. Specifically, three key visual features of humans, namely head motion, gaze, and body motion, were extracted from a camera mounted on the robot while it performed verbal and body gestures during the interaction. Our system then fused these visual features with several types of vocal features, such as voice pitch, voice energy, and Mel-Frequency Cepstral Coefficients (MFCCs), while handling multiple feature vectors of variable length. Lastly, considering the unknown patterns and sequential characteristics of human communicative behavior, we propose a multi-layer Hidden Markov Model that improves the classification accuracy of personality traits and offers notable advantages in fusing the multiple features. The results were thoroughly analyzed and are supported by psychological studies. The proposed multi-modal fusion approach is expected to deepen the communicative competence of social robots interacting with humans from different cultures and backgrounds. |
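
The abstract outlines a pipeline: extract vocal features (pitch, energy, MFCCs) and per-frame visual features (head motion, gaze, body motion), fuse the variable-length multi-modal sequences, and classify personality traits with Hidden Markov Models. The sketch below illustrates that pipeline in Python under loudly stated assumptions: librosa and hmmlearn are stand-ins for feature-extraction and HMM machinery the paper does not name, linear interpolation onto a common time axis is an assumed substitute for the paper's variable-length integration, and the single-layer, class-conditional HMMs are a baseline in the spirit of, not a reproduction of, the proposed multi-layer HMM.

import numpy as np
import librosa                 # assumed stand-in for audio feature extraction
from hmmlearn import hmm       # assumed stand-in for the HMM machinery

def vocal_features(wav_path, n_mfcc=13):
    """Frame-level vocal features named in the abstract: MFCCs, energy, pitch."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, T)
    energy = librosa.feature.rms(y=y)                        # (1, T) RMS energy
    pitch = librosa.yin(y, fmin=65, fmax=400, sr=sr)         # (T,) F0 estimate
    t = min(mfcc.shape[1], energy.shape[1], pitch.shape[0])
    return np.vstack([mfcc[:, :t], energy[:, :t], pitch[None, :t]]).T  # (T, D)

def align_and_fuse(audio_seq, visual_seq):
    """Fuse two modalities sampled at different rates/lengths by linearly
    resampling each onto a common time axis, then concatenating per frame.
    (An assumed alignment strategy, not necessarily the paper's.)"""
    t = min(len(audio_seq), len(visual_seq))
    def resample(seq):
        src = np.linspace(0.0, 1.0, num=len(seq))
        dst = np.linspace(0.0, 1.0, num=t)
        return np.column_stack([np.interp(dst, src, seq[:, d])
                                for d in range(seq.shape[1])])
    return np.hstack([resample(audio_seq), resample(visual_seq)])

def fit_class_hmms(sequences_by_class, n_states=5):
    """Train one Gaussian HMM per personality-trait class; hmmlearn accepts
    variable-length training sequences via the `lengths` argument."""
    models = {}
    for label, seqs in sequences_by_class.items():
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
        m.fit(np.concatenate(seqs), [len(s) for s in seqs])
        models[label] = m
    return models

def classify(models, fused_seq):
    """Assign the class whose HMM scores the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(fused_seq))

The paper's multi-layer Hidden Markov Model goes beyond this single-layer baseline, and the camera-shake compensation for the robot's gestures is omitted entirely; the sketch is meant only to make the fusion-and-classification idea concrete.
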
Rights: | Copyright (C) 2021, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0). [http://creativecommons.org/licenses/by-nc-nd/4.0/] NOTICE: This is the authors' version of a work accepted for publication by Elsevier. Zhihao Shen, Armagan Elibol, Nak Young Chong, Robotics and Autonomous Systems 146, 2021, 103874, https://doi.org/10.1016/j.robot.2021.103874 |
URI: | http://hdl.handle.net/10119/18471 |
Material Type: | author |
Appears in Collections: | b10-1. 雑誌掲載論文 (Journal Articles)
Files in This Item:
File | Description | Size | Format
N-CHONG-I-0827.pdf | | 972 KB | Adobe PDF