
An improved multi-input deep convolutional neural network for automatic emotion recognition

Record Details


Indexed in: SCIE

Affiliations: [1]Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China. [2]School of Computer and Communication Engineering, University of Science and Technology, Beijing, China. [3]Department of Computer and Network Engineering, College of Information Technology, United Arab Emirates University (UAEU), Al Ain, United Arab Emirates. [4]National Engineering Laboratory for Risk Perception and Prevention, Beijing, China. [5]Beijing Key Laboratory of Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, China. [6]Beijing Machine and Equipment Institute, Beijing, China. [7]Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China. [8]Department of Rehabilitation, Tianjin Medical University General Hospital, Tianjin, China. [9]Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.

Keywords: biological signals; multi-modality; emotion recognition; convolutional neural network; machine learning

Abstract:
Current decoding algorithms based on one-dimensional (1D) convolutional neural networks (CNNs) have shown effectiveness in the automatic recognition of emotional tasks using physiological signals. However, these recognition models usually take a single modality of physiological signal as input, so the inter-correlations between different modalities, which could be an important source of information for emotion recognition, are completely ignored. Therefore, a complete end-to-end multi-input deep convolutional neural network (MI-DCNN) was designed in this study. The newly designed 1D-CNN structure takes full advantage of multi-modal physiological signals and automatically completes the process from feature extraction to emotion classification. To evaluate the effectiveness of the proposed model, we designed an emotion-elicitation experiment and collected the physiological signals of 52 participants, including electrocardiography (ECG), electrodermal activity (EDA), and respiratory activity (RSP), while they watched emotion-elicitation videos. Traditional machine learning methods were then applied as baseline comparisons: for arousal, the baseline accuracy and F1-score on our dataset were 62.9 ± 0.9% and 0.628 ± 0.01, respectively; for valence, they were 60.3 ± 0.8% and 0.600 ± 0.01. Differences between the MI-DCNN and a single-input DCNN were also compared, and the proposed method was verified on two public datasets (DEAP and DREAMER) as well as our own.
The results on our dataset showed a significant improvement in both tasks over traditional machine learning methods (t-test, arousal: p = 9.7E-03 < 0.01; valence: p = 6.5E-03 < 0.01), demonstrating the strength of introducing a multi-input convolutional neural network for emotion recognition based on multi-modal physiological signals. Copyright © 2022 Chen, Zou, Belkacem, Lyu, Zhao, Yi, Huang, Liang and Chen.
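The multi-input idea described in the abstract can be illustrated with a minimal sketch: each modality (ECG, EDA, RSP) is processed by its own 1D-convolution branch, the pooled features are concatenated, and a shared classifier head produces the binary arousal/valence prediction. This is a NumPy toy forward pass, not the authors' actual MI-DCNN; all layer sizes, filter counts, and the 1000-sample window length are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution: x (L,), w (n_filters, k), b (n_filters,)."""
    k = w.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (L-k+1, k)
    return windows @ w.T + b                                  # (L-k+1, n_filters)

def branch(x, w, b):
    """One modality branch: conv -> ReLU -> global average pooling."""
    h = np.maximum(conv1d(x, w, b), 0.0)
    return h.mean(axis=0)  # (n_filters,)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical shapes: 1000-sample windows, 8 filters of width 5 per branch.
L, k, nf = 1000, 5, 8
params = {m: (rng.normal(size=(nf, k)) * 0.1, np.zeros(nf))
          for m in ("ecg", "eda", "rsp")}
W_out = rng.normal(size=(2, 3 * nf)) * 0.1  # binary head (e.g., high/low arousal)
b_out = np.zeros(2)

def mi_dcnn_forward(signals):
    """Multi-input forward pass: per-modality features are fused by concatenation
    before the classifier, so cross-modal correlations can be exploited."""
    feats = np.concatenate([branch(signals[m], *params[m])
                            for m in ("ecg", "eda", "rsp")])
    return softmax(W_out @ feats + b_out)

probs = mi_dcnn_forward({m: rng.normal(size=L) for m in ("ecg", "eda", "rsp")})
print(probs.shape)
```

A single-input DCNN baseline, by contrast, would call only one branch and classify from that branch's features alone, which is exactly the comparison the paper reports.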

CAS Division:
Publication-year [2021] edition: Major category | Tier 3, Medicine; Subcategory | Tier 3, Neuroscience
Latest [2023] edition: Major category | Tier 3, Medicine; Subcategory | Tier 3, Neuroscience
JCR Division:
Publication-year [2020] edition: Q2 NEUROSCIENCES
Latest [2023] edition: Q2 NEUROSCIENCES


First author's affiliation: [1]Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China.
Corresponding authors' affiliations: [1]Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China. [9]Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.

