An Explainable Deep Learning Framework for Medical Image Diagnosis Using Attention Mechanisms

Authors

  • Haoran Zhang School of Astronautics, Northwestern Polytechnical University, Xi’an
  • Jun Wang School of Artificial Intelligence, Beijing Institute of Technology, Beijing

DOI:

https://doi.org/10.9999/ijair.v1i1.6

Keywords:

medical imaging; explainable AI; attention; weakly supervised localization; uncertainty.

Abstract

Attention mechanisms are widely used to improve the performance of deep neural networks and to provide spatial cues that are often interpreted as explanations. In medical image diagnosis, however, reliable explanations require more than visually appealing heatmaps: they must be stable under perturbations, aligned with clinically meaningful regions, and accompanied by uncertainty-aware decision outputs.
This paper presents an explainable deep learning framework for medical image diagnosis that integrates (i) an attention-based diagnostic backbone, (ii) multi-scale attention aggregation for lesion localization, (iii) calibration and uncertainty reporting for risk-aware triage, and (iv) a set of quantitative explainability checks that go beyond qualitative visualization.
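Component (ii), multi-scale attention aggregation, typically combines attention maps produced at several spatial resolutions into a single lesion-localization heatmap. A minimal illustrative sketch follows; the function name, the nearest-neighbour upsampling, and the simple averaging are our own assumptions, not the paper's implementation:

```python
import numpy as np

def aggregate_attention(maps, out_size=(28, 28)):
    """Combine attention maps from multiple scales into one heatmap.

    maps: list of 2-D arrays at different resolutions, each evenly
    dividing out_size (an assumption made for this sketch).
    """
    resized = []
    for m in maps:
        # Nearest-neighbour upsample by replicating rows and columns.
        ry = out_size[0] // m.shape[0]
        rx = out_size[1] // m.shape[1]
        resized.append(np.repeat(np.repeat(m, ry, axis=0), rx, axis=1))
    # Uniform average across scales (weighted schemes are also common).
    agg = np.mean(resized, axis=0)
    # Min-max normalize to [0, 1] for visualization or thresholding.
    return (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)
```

In practice a learned per-scale weighting or bilinear interpolation would replace the uniform average and nearest-neighbour resize used here.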
The framework is designed as a practical template that can be instantiated for common diagnostic tasks (classification, weakly supervised localization, and segmentation-assisted classification). We describe the modeling choices, training objectives, evaluation protocol, and ablation studies, and we discuss failure modes and deployment considerations.
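Component (iii), calibration for risk-aware triage, is commonly realized with temperature scaling fitted on a held-out validation set. The sketch below is illustrative only: it uses a grid search in place of the usual LBFGS fit, and all names are our own rather than the paper's:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T flattens the distribution.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimizing validation NLL (grid search
    stands in for the LBFGS optimization usually used)."""
    return min(grid, key=lambda T: nll(logits, labels, T))
```

For an overconfident model (high-confidence logits, imperfect accuracy) the fitted temperature exceeds 1, softening the predicted probabilities before they feed a triage threshold.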

Published

2026-01-30 — Updated on 2026-01-30

How to Cite

Zhang, H., & Wang, J. (2026). An Explainable Deep Learning Framework for Medical Image Diagnosis Using Attention Mechanisms. International Journal of Artificial Intelligence Research, 1(1). https://doi.org/10.9999/ijair.v1i1.6