Multi-Modal Explainability Evaluation for Brain Tumor Segmentation: Metrics MSFI

Authors

  • Maria Nancy A., Research Scholar, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
  • K. Sathyarajasekaran, Associate Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India

Keywords:

Explainable AI, Multi-Modal Specific Feature Importance (MSFI), Modality Importance (MI)

Abstract

The importance of interpretability in artificial intelligence (AI) models is growing in healthcare, driven by advances in medical imaging that improve our ability to recognize and understand complex biomedical phenomena. As imaging technology progresses, interpretable AI models become increasingly critical for ensuring trust, accountability, and acceptance among healthcare professionals. In this context, the Multi-Modal Specific Feature Importance (MSFI) metric emerges as a crucial tool for evaluating the effectiveness of eXplainable Artificial Intelligence (XAI) methods, specifically Grad-CAM, on multi-modal medical imaging tasks. MSFI addresses the difficulty of interpreting model decisions on multi-modal medical images, where each modality conveys different clinical information about the same underlying biomedical reality, so clear and detailed explanations are essential for a thorough understanding of, and trust in, the decision-making process. The metric assesses how well heat-maps or feature attribution maps explain these decisions. Evaluation with MSFI is a comprehensive approach that combines computational methods with clinician user studies. For the clinically challenging brain tumor segmentation task, MSFI gauges the agreement between the model prediction and the plausibility of explanations produced by various XAI approaches, making it a valuable resource for selecting and developing XAI algorithms that meet clinical requirements for multi-modal explanation. By focusing on the interpretability of modality-specific features, MSFI provides a framework for refining and advancing XAI models in medical imaging, and offers a robust evaluation framework for understanding AI model performance in the complex setting of multi-modal medical imaging, particularly brain tumor segmentation and diagnosis.
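For readers who want a concrete sense of how such a score can be computed, the minimal sketch below follows the MSFI definition given by Jin et al. (2023, Medical Image Analysis, cited in the references): for each modality, the fraction of positive saliency mass that falls inside that modality's ground-truth feature mask is weighted by a modality importance (MI) value, and the weighted sum is normalized by the sum of the MI values. The function name, parameter names, the toy two-modality data, and the idea of estimating MI from the performance drop under modality ablation are illustrative assumptions, not part of the published article.

```python
import numpy as np

def msfi(saliency_maps, roi_masks, modality_importance, eps=1e-8):
    """Multi-Modal Specific Feature Importance (MSFI) score.

    Sketch following Jin et al. (2023): per modality, take the share of
    non-negative saliency mass inside the ground-truth feature mask,
    weight it by the modality importance (MI), and normalize by the
    total MI. Returns a score in [0, 1]; higher means the explanation
    concentrates on clinically relevant, modality-specific regions.
    """
    weighted_portion = 0.0
    total_mi = 0.0
    for m, s in saliency_maps.items():
        s = np.clip(np.asarray(s, dtype=float), 0.0, None)   # keep positive attributions only
        mask = np.asarray(roi_masks[m], dtype=bool)
        mi = max(float(modality_importance[m]), 0.0)
        portion = s[mask].sum() / (s.sum() + eps)             # saliency mass inside the ROI
        weighted_portion += mi * portion
        total_mi += mi
    return weighted_portion / (total_mi + eps)


# Hypothetical usage with two MRI modalities (names and values are illustrative)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (64, 64)
    sal = {"T1c": rng.random(shape), "FLAIR": rng.random(shape)}
    roi = {"T1c": np.zeros(shape, bool), "FLAIR": np.zeros(shape, bool)}
    roi["T1c"][20:40, 20:40] = True
    roi["FLAIR"][10:30, 30:50] = True
    mi = {"T1c": 0.7, "FLAIR": 0.3}   # e.g., performance drop when each modality is ablated
    print(f"MSFI = {msfi(sal, roi, mi):.3f}")
```

A value near 1 indicates that the heat-map places most of its attribution on the modality-specific regions a clinician would expect; the resulting MSFI values can then be correlated (for example with Spearman's r or Kendall's tau) against model performance or clinician plausibility ratings, as the abstract describes.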

References

M. Förster, P. Hühn, M. Klier, and K. Kluge, 2021. Capturing users’ reality: A novel approach to generate coherent counterfactual explanations.

A.M. Antoniadi, Y. Du, Y. Guendouz, L. Wei, C. Mazo, B.A. Becker, and C. Mooney, 2021. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: a systematic review. Applied Sciences, 11(11), p.5088.

M. Ebers, 2020. Regulating explainable AI in the European Union: An overview of the current legal framework(s) (August 9, 2021). In Liane Colonna and Stanley Greenstein (eds.), Nordic Yearbook of Law and Informatics.

K. Fiok, F.V. Farahani, W. Karwowski, and T. Ahram, 2022. Explainable artificial intelligence for education and training. The Journal of Defense Modeling and Simulation, 19(2), pp.133-144.

A. Sheth, M. Gaur, K. Roy, and K. Faldu, 2021. Knowledge-intensive language understanding for explainable AI. IEEE Internet Computing, 25(5), pp.19-24.

G. Elkhawaga, O. Elzeki, M. Abuelkheir, and M. Reichert, 2023. Evaluating Explainable Artificial Intelligence Methods Based on Feature Elimination: A Functionality-Grounded Approach. Electronics, 12(7), p.1670.

G. Vilone, and L. Longo, 2021. Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76, pp.89-106.

A.J. Johs, D.E. Agosto, and R.O. Weber, 2020. Qualitative investigation in explainable artificial intelligence: A bit more insight from social science. arXiv preprint arXiv:2011.07130.

W. Jin, X. Li, M. Fatehi, and G. Hamarneh, 2023. Guidelines and evaluation of clinical explainable AI in medical image analysis. Medical Image Analysis, 84, p.102684.

C. Patrício, J.C. Neves, and L.F. Teixeira, 2023. Explainable Deep Learning Methods in Medical Image Classification: A Survey. ACM Computing Surveys, 56(4), pp.1-41.

A. Chaddad, J. Peng, J. Xu, and A. Bouridane, 2023. Survey of explainable AI techniques in healthcare. Sensors, 23(2), p.634.

B.H. Van der Velden, H.J. Kuijf, K.G. Gilhuijs, and M.A. Viergever, 2022. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79, p.102470.

D. Nie, J. Lu, H. Zhang, E. Adeli, J. Wang, Z. Yu, L. Liu, Q. Wang, J. Wu, and D. Shen, 2019. Multi-channel 3D deep feature learning for survival time prediction of brain tumor patients using multi-modal neuroimages. Scientific Reports, 9(1), p.1103.

X. Hou, D. Yang, D. Li, M. Liu, Y. Zhou, and M. Shi, 2020. A new simple brain segmentation method for extracerebral intracranial tumors. PLoS One, 15(4), p.e0230754.

S. Arndt, C. Turvey, and N.C. Andreasen, 1999. Correlating and predicting psychiatric symptom ratings: Spearman's r versus Kendall's tau correlation. Journal of Psychiatric Research, 33(2), pp.97-104.

H.W. Loh, C.P. Ooi, S. Seoni, P.D. Barua, F. Molinari, and U.R. Acharya, 2022. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). Computer Methods and Programs in Biomedicine, p.107161.

P.N. Srinivasu, N. Sandhya, R.H. Jhaveri, and R. Raut, 2022. From blackbox to explainable AI in healthcare: existing tools and case studies. Mobile Information Systems, 2022, pp.1-20.

A. Adadi and M. Berrada, 2020. Explainable AI for healthcare: from black box to interpretable models. In Embedded Systems and Artificial Intelligence: Proceedings of ESAI 2019, Fez, Morocco (pp. 327-337). Springer Singapore.

Published

23.02.2024

How to Cite

Nancy A., M., & Sathyarajasekaran, K. (2024). Multi-Modal Explainability Evaluation for Brain Tumor Segmentation: Metrics MSFI. International Journal of Intelligent Systems and Applications in Engineering, 12(16s), 341–347. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4830

Issue

Section

Research Article