Exploring Interpretable AI Methods for ECG Data Classification
Abstract
We address ECG data classification using methods from explainable
artificial intelligence (XAI). In particular, we focus on the performance
of the extended ST-CNN-5 model compared to established models. The
model shows a slight improvement in accuracy, suggesting the potential
of this new model to provide more reliable predictions than other models.
However, lower values of the specificity and area-under-curve (AUC)
metrics highlight the need to thoroughly evaluate the strengths and
weaknesses of the extended model relative to other models. For the
interpretability analysis,
we use Shapley Additive Explanations (SHAP), Gradient-weighted
Class Activation Mapping (GradCAM), and Local Interpretable
Model-agnostic Explanations (LIME). We show that the new model
exhibits improved explainability in its GradCAM explanations compared
to the former model. SHAP highlights crucial ECG features more
effectively than GradCAM and LIME, which perform worse, particularly
in capturing nuanced patterns associated with certain cardiac
conditions. By using these distinct methods in the interpretability
analysis, we provide a systematic discussion of which ECG features are
uncovered well, or poorly, by each method.
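
As a minimal, hypothetical sketch (not the paper's pipeline and not the
ST-CNN-5 architecture), the snippet below shows how SHAP attributions of
the kind discussed above can be computed for a 1-D convolutional ECG
classifier; the toy network, the 187-sample input length, and the five
classes are illustrative assumptions.

import numpy as np
import shap
from tensorflow.keras import layers, models

# Synthetic stand-ins for single-lead ECG beats: 100 traces of 187 time
# steps with one channel, each labelled with one of 5 classes. All
# dimensions here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 187, 1)).astype("float32")
y = rng.integers(0, 5, size=100)

# A deliberately small 1-D CNN classifier (not the ST-CNN-5 model).
model = models.Sequential([
    layers.Input(shape=(187, 1)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, verbose=0)

# GradientExplainer attributes each prediction to individual time steps,
# yielding a per-sample importance profile that can be overlaid on the
# ECG trace to inspect which segments drive the classification.
explainer = shap.GradientExplainer(model, X[:50])
shap_values = explainer.shap_values(X[50:55])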