Show simple item record

dc.contributor.author: Hua, Yue
dc.contributor.author: Zhong, Xiaolong
dc.contributor.author: Zhang, Bingxue
dc.contributor.author: Yin, Zhong
dc.contributor.author: Zhang, Jianhua
dc.date.accessioned: 2022-04-19T08:55:39Z
dc.date.available: 2022-04-19T08:55:39Z
dc.date.created: 2021-12-03T14:22:28Z
dc.date.issued: 2021-10-23
dc.identifier.citation: Brain Sciences. 2021, 11 (11).
dc.identifier.issn: 2076-3425
dc.identifier.uri: https://hdl.handle.net/11250/2991253
dc.description.abstract: Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in the neurophysiological responses of different brain–computer interface users make it difficult to design a generic emotion recognizer that adapts to a novel individual, which poses an obstacle to cross-subject emotion recognition (ER). To tackle this issue, in this study we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), under the transfer learning principle to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. The performance of the MF-DFS is validated on three public databases, DEAP, MAHNOB-HCI, and SEED, according to the leave-one-subject-out paradigm under two types of electroencephalography features. With three emotional classes defined for each affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50 and 0.48 (DEAP) and 0.46 and 0.50 (MAHNOB-HCI) for the arousal and valence emotional dimensions, respectively. For the SEED database, it achieves 0.40 for the valence dimension. The corresponding accuracy is significantly superior to that of several classical feature selection methods on multiple machine learning models.
dc.description.sponsorship: This work is sponsored by the National Natural Science Foundation of China under Grant No. 61703277 and the Shanghai Sailing Program under Grant No. 17YF1427000.
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.ispartofseries: Brain Sciences; Volume 11 / Issue 11
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.subject: Emotion recognition
dc.subject: Electroencephalography
dc.subject: Machine learning
dc.subject: Feature selection
dc.subject: Transfer learning
dc.title: Manifold feature fusion with dynamical feature selection for cross-subject emotion recognition
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: © 2021 by the authors.
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
dc.identifier.doi: https://doi.org/10.3390/brainsci11111392
dc.identifier.cristin: 1964432
dc.source.journal: Brain Sciences
dc.source.volume: 11
dc.source.issue: 11
dc.source.pagenumber: 1-24



Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
Except where otherwise noted, this item's license is described as Attribution 4.0 International