Article Abstract
NIU Jian-wei, AN Yue-qi, NI Jie, JIANG Chang-hua. Multimodal Emotion Recognition Based on Facial Expression and ECG Signal[J]. Packaging Engineering, 2022, 43(4): 71-79.
Multimodal Emotion Recognition Based on Facial Expression and ECG Signal
  
DOI:10.19554/j.cnki.1001-3563.2022.04.008
Keywords: multi-modal emotion recognition; facial expression; ECG signal; three-dimensional convolutional neural network
Funding: This research was supported by the Open Funding Project of the National Key Laboratory of Human Factors Engineering (Grant No. 6142222190309). The authors acknowledge the kindness of MAHNOB-HCI, which provided the inducing materials for this study.
Author Affiliations
NIU Jian-wei School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China 
AN Yue-qi School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China 
NI Jie School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China 
JIANG Chang-hua China Astronaut Research and Training Center, Beijing 404023, China 
Abstract:
      As a key link in human-computer interaction, emotion recognition enables robots to correctly perceive users' emotions and to provide dynamic, adjustable services according to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expression and electrocardiogram (ECG) signals has numerous industrial applications. First, a three-dimensional convolutional neural network deep learning architecture is used to extract spatial and temporal features from facial expression video data and ECG data, and emotion classification is carried out for each modality. Then the two modalities are fused at the data level and at the decision level, respectively, and the corresponding emotion recognition results are given. Finally, the single-modality and multi-modality emotion recognition results are compared and analyzed. The comparative analysis of the single-modality and multi-modality results under the two fusion methods shows that multi-modal emotion recognition achieves a markedly higher accuracy than single-modal emotion recognition, and that decision-level fusion is easier to implement and more effective than data-level fusion.
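The abstract gives no implementation details, but the described pipeline (one 3D convolutional network per modality, followed by decision-level fusion of the per-modality predictions) can be illustrated roughly as below. This is a minimal sketch in PyTorch; the framework, layer sizes, number of emotion classes, ECG input layout, and the probability-averaging fusion rule are all assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (PyTorch assumed): one 3D-CNN branch per modality,
# fused at the decision level by averaging class probabilities.
# Layer sizes, class count, ECG layout and fusion rule are illustrative
# assumptions, not details from the paper.
import torch
import torch.nn as nn


class Branch3DCNN(nn.Module):
    """3D CNN mapping a (C, T, H, W) clip to emotion-class logits."""

    def __init__(self, in_channels: int, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)


class DecisionLevelFusion(nn.Module):
    """Average the softmax outputs of the two single-modality branches."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.face_branch = Branch3DCNN(in_channels=3, num_classes=num_classes)
        # Treating the ECG signal as a 1-channel spatio-temporal tensor is an
        # assumption made only so both branches can share the 3D-CNN form.
        self.ecg_branch = Branch3DCNN(in_channels=1, num_classes=num_classes)

    def forward(self, face_clip: torch.Tensor, ecg_clip: torch.Tensor) -> torch.Tensor:
        p_face = torch.softmax(self.face_branch(face_clip), dim=1)
        p_ecg = torch.softmax(self.ecg_branch(ecg_clip), dim=1)
        return (p_face + p_ecg) / 2  # fused class probabilities


if __name__ == "__main__":
    model = DecisionLevelFusion(num_classes=4)
    face = torch.randn(2, 3, 16, 64, 64)  # batch of 16-frame RGB face clips
    ecg = torch.randn(2, 1, 16, 32, 32)   # batch of ECG "clips" (assumed layout)
    print(model(face, ecg).shape)         # torch.Size([2, 4])
```

Data-level fusion would instead concatenate or stack the two modalities into a single input tensor before one shared network, which is why it tends to be harder to operate: the modalities must first be aligned and resampled to a common shape.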