
dc.contributor.author	Zhang, Jianhua
dc.contributor.author	Yin, Zhong
dc.contributor.author	Chen, Peng
dc.contributor.author	Nichele, Stefano
dc.date.accessioned	2022-01-20T12:49:10Z
dc.date.available	2022-01-20T12:49:10Z
dc.date.created	2020-10-12T15:31:36Z
dc.date.issued	2020-01-31
dc.identifier.citation	Information Fusion. 2020, 59, 103-126.	en_US
dc.identifier.issn	1566-2535
dc.identifier.uri	https://hdl.handle.net/11250/2838485
dc.description.abstract	In recent years, rapid advances in machine learning (ML) and information fusion have made it possible to endow machines/computers with the ability to understand, recognize, and analyze emotions. Emotion recognition has attracted increasingly intense interest from researchers in diverse fields. Human emotions can be recognized from facial expressions, speech, behavior (gesture/posture), or physiological signals. However, the first three methods can be ineffective because humans may involuntarily or deliberately conceal their real emotions (so-called social masking). The use of physiological signals can lead to more objective and reliable emotion recognition. Compared with peripheral neurophysiological signals, electroencephalogram (EEG) signals respond to fluctuations of affective states more sensitively and in real time, and thus can provide useful features of emotional states. Therefore, various EEG-based emotion recognition techniques have been developed recently. In this paper, emotion recognition methods based on multi-channel EEG signals as well as multi-modal physiological signals are reviewed. Following the standard pipeline for emotion recognition, we review different feature extraction (e.g., wavelet transform and nonlinear dynamics), feature reduction, and ML classifier design methods (e.g., k-nearest neighbor (KNN), naive Bayes (NB), support vector machine (SVM), and random forest (RF)). Furthermore, the EEG rhythms that are highly correlated with emotions are analyzed, and the correlation between different brain areas and emotions is discussed. Finally, we compare different ML and deep learning algorithms for emotion recognition and suggest several open problems and future research directions in this exciting and fast-growing area of AI.	en_US
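As a concrete illustration of the standard pipeline the abstract describes (wavelet-based feature extraction, feature reduction, then a conventional ML classifier), here is a minimal sketch in Python using PyWavelets and scikit-learn. It is not the paper's implementation: the channel count, trial length, db4 wavelet, PCA reduction to 20 components, RBF SVM, and the synthetic data with binary valence labels are all illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(trial, wavelet="db4", level=4):
    """Log-energy of each wavelet sub-band for every channel.

    trial: array of shape (channels, samples).
    """
    feats = []
    for channel in trial:
        # wavedec returns [approximation, detail_level, ..., detail_1];
        # the sub-band energies summarize the rhythm content of the channel.
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
    return np.asarray(feats)

# Synthetic stand-in data: 100 trials, 32 EEG channels, 512 samples each,
# with random binary valence labels. Real work would load a benchmark
# corpus instead of random numbers.
rng = np.random.default_rng(0)
trials = rng.standard_normal((100, 32, 512))
labels = rng.integers(0, 2, size=100)

X = np.stack([wavelet_features(t) for t in trials])  # (100, 32 * 5) features

# Standardize, reduce dimensionality with PCA, classify with an RBF SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, labels, cv=5).mean())
```

With a typical EEG sampling rate, the sub-bands produced by a level-4 decomposition correspond roughly to the delta-through-gamma rhythms the review discusses; swapping the SVM for KNN, NB, or RF, or the PCA step for another reducer, changes only the last pipeline line.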
dc.description.sponsorship	This work was supported in part by OsloMet Faculty TKD Lighthouse Project [grant no. 201369-100]. Z. Yin's work was funded by the National Natural Science Foundation of China [grant no. 61703277] and the Shanghai Sailing Program [grant no. 17YF1427000].	en_US
dc.language.iso	eng	en_US
dc.publisher	Elsevier	en_US
dc.relation.ispartofseries	Information Fusion;Volume 59, July 2020
dc.rights	Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.subject	Emotion recognition	en_US
dc.subject	Affective computing	en_US
dc.subject	Physiological signals	en_US
dc.subject	Feature dimensionality reduction	en_US
dc.subject	Data fusion	en_US
dc.subject	Machine learning	en_US
dc.subject	Deep learning	en_US
dc.title	Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review	en_US
dc.type	Peer reviewed	en_US
dc.type	Journal article	en_US
dc.description.version	acceptedVersion	en_US
cristin.ispublished	true
cristin.fulltext	original
cristin.fulltext	postprint
cristin.qualitycode	1
dc.identifier.doi	https://doi.org/10.1016/j.inffus.2020.01.011
dc.identifier.cristin	1838924
dc.source.journal	Information Fusion	en_US
dc.source.volume	59	en_US
dc.source.pagenumber	103-126	en_US


