dc.contributor.author | Mohan, Karnati | |
dc.contributor.author | Seal, Ayan | |
dc.contributor.author | Krejcar, Ondrej | |
dc.contributor.author | Yazidi, Anis | |
dc.date.accessioned | 2021-02-01T22:16:18Z | |
dc.date.accessioned | 2021-03-11T11:22:49Z | |
dc.date.available | 2021-02-01T22:16:18Z | |
dc.date.available | 2021-03-11T11:22:49Z | |
dc.date.issued | 2020-10-16 | |
dc.identifier.citation | Mohan K, Seal A, Krejcar O, Yazidi A. Facial Expression Recognition Using Local Gravitational Force Descriptor-Based Deep Convolution Neural Networks. IEEE Transactions on Instrumentation and Measurement. 2020 | en |
dc.identifier.issn | 0018-9456 | |
dc.identifier.issn | 1557-9662 | |
dc.identifier.uri | https://hdl.handle.net/10642/10003 | |
dc.description.abstract | An image is worth a thousand words; hence, a face image illustrates extensive details about the specification, gender, age, and emotional state of mind. Facial expressions play an important role in community-based interactions and are often used in the behavioral analysis of emotions. Automatic recognition of facial expressions from a face image is a challenging task in the computer vision community and admits a large set of applications, such as driver safety, human–computer interaction, health care, behavioral science, video conferencing, cognitive science, and others. In this work, a deep-learning-based scheme is proposed for identifying the facial expression of a person. The proposed method consists of two parts. The former finds local features in face images using a local gravitational force descriptor, while, in the latter, the descriptor is fed into a novel deep convolution neural network (DCNN) model. The proposed DCNN has two branches. The first branch explores geometric features, such as edges, curves, and lines, whereas holistic features are extracted by the second branch. Finally, a score-level fusion technique is adopted to compute the final classification score. The proposed method, along with 25 state-of-the-art methods, is implemented on five benchmark databases, namely, Facial Expression Recognition 2013, Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, and Real-world Affective Faces. The databases consist of seven basic emotions: neutral, happiness, anger, sadness, fear, disgust, and surprise. The proposed method is compared with existing approaches using four evaluation metrics, namely, accuracy, precision, recall, and F1-score. The obtained results demonstrate that the proposed method outperforms all state-of-the-art methods on all the databases. | en |
dc.description.sponsorship | This work was supported in part by the project "Prediction of Diseases Through Computer Assisted Diagnosis System Using Images Captured by Minimally Invasive and Noninvasive Modalities," Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing Jabalpur, Jabalpur, India, under Grant ID SPARC-MHRD-231; in part by the project IT4Neuro(degeneration) under Grant CZ.02.1.01/0.0/0.0/18 069/0010054; in part by the project "Smart Solutions in Ubiquitous Computing Environments," Grant Agency of Excellence, University of Hradec Kralove, Faculty of Informatics and Management, Czech Republic, under Grant ID UHK-FIM-GE-2020; in part by a project at Universiti Teknologi Malaysia (UTM) under Research University Grant Vot-20H04; in part by the Malaysia Research University Network (MRUN) under Grant Vot 4L876; and in part by the Fundamental Research Grant Scheme (FRGS) of the Ministry of Education Malaysia under Grant Vot 5F073. | en |
dc.language.iso | en | en |
dc.publisher | Institute of Electrical and Electronics Engineers | en |
dc.relation.ispartofseries | IEEE Transactions on Instrumentation and Measurement;Volume 70 | |
dc.subject | Deep convolution neural networks | en |
dc.subject | Facial expression recognition | en |
dc.subject | Local gravitational forces | en |
dc.subject | Descriptors | en |
dc.subject | Score-level fusions | en |
dc.subject | Softmax classifications | en |
dc.title | Facial Expression Recognition Using Local Gravitational Force Descriptor-Based Deep Convolution Neural Networks | en |
dc.type | Journal article | en |
dc.type | Peer reviewed | en |
dc.date.updated | 2021-02-01T22:16:17Z | |
dc.description.version | publishedVersion | en |
dc.identifier.doi | https://doi.org/10.1109/TIM.2020.3031835 | |
dc.identifier.cristin | 1885547 | |
dc.source.journal | IEEE Transactions on Instrumentation and Measurement | |