
A radial basis deformable residual convolutional neural model embedded with local multi-modal feature knowledge and its application in cross-subject classification

Literature Details

Resource type:
WOS collection:

Indexing status: SCIE

Affiliations: [1]Yanshan Univ, Sch Informat Sci & Engn, Qinhuangdao 066004, Hebei, Peoples R China [2]Hebei Normal Univ Sci & Technol, Sch Math & Informat Sci & Technol, Qinhuangdao 066004, Hebei, Peoples R China [3]Univ Calif San Diego, Swartz Ctr Computat Neurosci, La Jolla, CA 92093 USA [4]Univ Sci & Technol Beijing, Sch Intelligence Sci & Technol, Beijing 100083, Peoples R China [5]Univ Sci & Technol Beijing, Key Lab Percept & Control Intelligent Bion Unmanne, Minist Educ, Beijing 100083, Peoples R China [6]Univ Sci & Technol Beijing, Sports Dept, Beijing 100083, Peoples R China [7]Capital Med Univ, Dept Neurol, Xuanwu Hosp, Beijing 100053, Peoples R China [8]Chengde Med Univ, Dept Biomed Engn, Chengde 067000, Hebei, Peoples R China
Source:
ISSN:

Keywords: Radial basis deformable residual convolutional neural networks; Embedded feature knowledge; Cross-subject; Brain regions correlation

Abstract:
In spatial cognition and emotion recognition tasks based on electroencephalography (EEG), single-modality signal representations are incomplete because of significant inter-subject differences in EEG signals, which leads to low generalization performance of classification models. To address this issue, we propose a radial basis deformable residual convolutional neural network model embedded with local multi-modal feature knowledge (RBDRCNN-LMFK). The RBDRCNN leverages Euclidean distance alignment, deformable convolution, depthwise separable convolution, and residual connection modules for effective EEG feature extraction, while the embedded local feature knowledge algorithm enables the effective combination of multi-modal information. To further validate the effectiveness of the algorithm, we used leave-one-subject-out cross-validation on the virtual city walking (VCW), SJTU Emotion EEG Dataset IV (SEED IV), and building block gesture recognition (BBGR) datasets for spatial cognition and emotion recognition tasks. The average accuracy on VCW was 97.84% using the kNN classifier, surpassing the 96.62% of the Concate fusion baseline. The average accuracy on SEED IV was 87.56%, higher than the 69.97% of Concate fusion. On BBGR, the kNN classifier achieved an average accuracy of 87.80%, compared with 85.31% for Concate fusion. The results show that, by embedding local features, the model enhances the recognition of biologically significant features in EEG signals and increases the correlation between task-related brain regions. This work illustrates a promising direction: using deep learning models to discover effective task-related features from highly variable EEG signals and to enhance the correlation between brain regions through multi-modal knowledge embedding.
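The abstract names two generic ingredients that can be illustrated independently of the RBDRCNN architecture itself: Euclidean distance alignment of each subject's EEG trials and leave-one-subject-out (LOSO) evaluation with a kNN classifier. The sketch below is not the authors' code; it assumes NumPy/SciPy/scikit-learn, precomputed features, and hypothetical helper names (euclidean_alignment, loso_knn_accuracy) purely for illustration.

# Minimal sketch (not the authors' implementation) of Euclidean distance
# alignment and leave-one-subject-out kNN evaluation as described in the
# abstract. Array shapes and helper names are assumptions for illustration.
import numpy as np
from scipy.linalg import fractional_matrix_power
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier


def euclidean_alignment(trials):
    """Whiten one subject's trials, shape (n_trials, n_channels, n_samples).

    The reference matrix R is the mean of the per-trial covariance matrices;
    multiplying each trial by R^(-1/2) drives the average covariance toward
    the identity, which reduces inter-subject distribution shift.
    """
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    r_inv_sqrt = fractional_matrix_power(covs.mean(axis=0), -0.5)
    return np.stack([r_inv_sqrt @ x for x in trials])


def loso_knn_accuracy(features, labels, subject_ids, k=5):
    """Average leave-one-subject-out accuracy of a kNN classifier on features."""
    splitter = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in splitter.split(features, labels, groups=subject_ids):
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(features[train_idx], labels[train_idx])
        scores.append(clf.score(features[test_idx], labels[test_idx]))
    return float(np.mean(scores))

Under this protocol every subject serves once as the held-out test set, which is what makes the reported accuracies cross-subject rather than within-subject figures.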

Funding:
Language:
WOS:
CAS (Chinese Academy of Sciences) ranking:
Publication-year [2023] edition:
Major category | Tier 1: Computer Science
Subcategories | Tier 2: Computer Science, Artificial Intelligence; Tier 2: Engineering, Electrical & Electronic; Tier 2: Operations Research & Management Science
Latest [2023] edition:
Major category | Tier 1: Computer Science
Subcategories | Tier 2: Computer Science, Artificial Intelligence; Tier 2: Engineering, Electrical & Electronic; Tier 2: Operations Research & Management Science
JCR quartile:
Publication-year [2022] edition:
Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Q1 ENGINEERING, ELECTRICAL & ELECTRONIC; Q1 OPERATIONS RESEARCH & MANAGEMENT SCIENCE
Latest [2023] edition:
Q1 ENGINEERING, ELECTRICAL & ELECTRONIC; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Q1 OPERATIONS RESEARCH & MANAGEMENT SCIENCE

Impact factor: Latest [2023 edition] | Latest five-year average | Publication year [2022 edition] | Publication-year five-year average | Year before publication [2021 edition] | Year after publication [2023 edition]

First author:
First author affiliation: [1]Yanshan Univ, Sch Informat Sci & Engn, Qinhuangdao 066004, Hebei, Peoples R China
Corresponding author:
Corresponding author affiliations: [4]Univ Sci & Technol Beijing, Sch Intelligence Sci & Technol, Beijing 100083, Peoples R China [5]Univ Sci & Technol Beijing, Key Lab Percept & Control Intelligent Bion Unmanne, Minist Educ, Beijing 100083, Peoples R China
Recommended citation (GB/T 7714):
APA:
MLA:
