XMask Clustering: Leveraging eXplainable AI and Clustering for Medical Knowledge Discovery
Deep learning (DL), a subset of machine learning, has shown great ability in supervised medical image classification tasks. Despite these advantages, DL models have low interpretability and are widely considered black boxes, which undermines trust and hinders adoption in critical domains. The field of eXplainable AI (XAI) aims to address these problems by creating human-centered explanations that give insight into a model and its predictions. This thesis investigates whether aggregated explanations extracted from black-box models can be leveraged for medical knowledge discovery. Explanations are used not only to account for the model's predictions but also as a tool to reveal previously unknown properties of the data, in the context of medical imaging and with the goal of extracting new medical knowledge. A novel methodology, eXplanation-masked clustering (XMask Clustering), is proposed for this purpose. Explanations extracted from black-box classifiers are applied as masks, revealing only the image regions that contributed to a prediction and thereby exposing the model's learned knowledge. The masked images are then clustered to uncover subclasses existing within a labeled class. Experiments with the proposed methodology produced explanations that accurately locate real and pseudo-real pathological identifiers, and further show that XMask Clustering yields higher-quality clusters on a combination of real and pseudo-real gastrointestinal images.
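The two-step pipeline described in the abstract (mask each image by its explanation, then cluster the masked images) can be sketched as follows. This is a minimal, self-contained illustration, not the thesis implementation: the saliency maps here are synthetic stand-ins for whatever explanation method the classifier uses (e.g. a Grad-CAM-style heatmap), the `keep` threshold and the plain k-means routine are assumed choices, and the toy data is constructed so that two "subclasses" differ only in which region the explanations highlight.

```python
import numpy as np

def xmask(image, saliency, keep=0.2):
    """Explanation-masking step: keep only the pixels whose saliency is in
    the top `keep` fraction, zeroing out everything the model ignored."""
    thresh = np.quantile(saliency, 1.0 - keep)
    return np.where(saliency >= thresh, image, 0.0)

def kmeans(X, k, iters=20):
    """Tiny k-means on flattened masked images (illustrative only).
    Centers are seeded deterministically from the first and middle rows."""
    centers = X[[0, len(X) // 2]].astype(float).copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute each non-empty cluster's center.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: 20 images sharing one class label, but with two hidden
# subclasses whose explanations highlight different regions.
rng = np.random.default_rng(1)
images, saliencies = [], []
for i in range(20):
    img = rng.random((8, 8))
    sal = np.zeros((8, 8))
    if i < 10:
        sal[:4, :4] = 1.0   # subclass A: top-left region is salient
    else:
        sal[4:, 4:] = 1.0   # subclass B: bottom-right region is salient
    images.append(img)
    saliencies.append(sal)

# Mask, flatten, and cluster; the recovered labels split the two subclasses.
masked = np.stack([xmask(im, sa).ravel() for im, sa in zip(images, saliencies)])
labels = kmeans(masked, k=2)
```

Because the non-salient pixels are zeroed before clustering, the distance between two masked images depends only on the regions the model actually used, which is what lets the clustering surface subclasses that plain pixel-space clustering would miss.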