ECG data classification and explainability with machine learning and deep learning algorithms
Abstract
This thesis investigates the application of explainable artificial intelligence (XAI) algorithms to electrocardiogram (ECG) data classification, focusing on the performance of the enhanced ST-CNN-5 model compared to established models. Results indicate a slight improvement in accuracy, suggesting enhanced predictive capability. However, lower specificity and area-under-curve metrics underscore the need for further comprehensive evaluation. For explainability, Shapley Additive Explanations (SHAP), Gradient-weighted Class Activation Mapping (GradCAM), and Local Interpretable Model-agnostic Explanations (LIME) were applied, revealing SHAP's superior ability to highlight key ECG characteristics. The thesis also evaluates Support Vector Machine (SVM) and Random Forest (RF) algorithms, finding that the enhanced ST-CNN-5 model outperforms both in predictive accuracy. Minimal bias by sex is observed, although demographic variables warrant continued scrutiny. This research provides valuable insights into algorithm effectiveness and model interpretability, thereby aiding the development of AI-driven diagnostic tools for clinical settings.
In connection with the outcomes of this thesis, we contributed a paper to the ACM Workshop on Intelligent Cross-Data Analysis and Retrieval (ICDAR '24), which was accepted for publication in April 2024. Appendix A of this thesis contains a copy of the paper for reference.
Accepted paper:
• Jaya Ojha, Hårek Haugerud, Anis Yazidi and Pedro G. Lind. Exploring Interpretable AI Methods for ECG Data Classification. ICDAR '24: The 5th ACM Workshop on Intelligent Cross-Data Analysis and Retrieval.
Furthermore, we are currently preparing a scientific report based on the findings of this research.