Title: An Explainable Agentic AI Framework for Uncertainty-Aware and Abstention-Enabled Acute Ischemic Stroke Imaging Decisions
ArXiv ID: 2601.01008
Date: 2026-01-03
Authors: Md Rashadul Islam
📝 Abstract
Artificial intelligence (AI) models have demonstrated considerable potential in the imaging of acute ischemic stroke, especially in the detection and segmentation of lesions via computed tomography (CT) and magnetic resonance imaging (MRI). Nevertheless, the majority of existing approaches operate as black-box predictors, providing deterministic outputs without transparency regarding predictive uncertainty or explicit protocols for rejecting a decision when predictions are ambiguous. This deficiency presents considerable safety and trust issues in high-stakes emergency radiology, where inaccuracies in automated decision-making could lead to negative clinical consequences [1], [2]. In this paper, we introduce an explainable agentic AI framework targeted at uncertainty-aware and abstention-based decision-making in acute ischemic stroke (AIS) imaging, built on a multistage agentic pipeline. In this framework, a perception agent performs lesion-aware image analysis, an uncertainty estimation agent estimates predictive confidence at the slice level, and a decision agent dynamically decides whether to issue or withhold a prediction based on prescribed uncertainty thresholds. This approach differs from previous stroke imaging frameworks, which have primarily aimed to improve the accuracy of segmentation or classification [3], [4]. Our framework explicitly emphasizes clinical safety, transparency, and decision-making processes that are congruent with human values. We validate the practicality and interpretability of our framework through qualitative, case-based examinations of typical stroke imaging scenarios. These examinations demonstrate a natural correlation between uncertainty-driven abstention and the existence of lesions, fluctuations in image quality, and the specific anatomical definition being analyzed. Furthermore, the system integrates an explanation mode, offering visual and structural justifications to bolster decision-making, thereby addressing a crucial limitation observed in existing uncertainty-aware medical imaging systems: the absence of actionable interpretability [5], [6]. This research does not claim to establish a high-performance benchmark; instead, it presents agentic control, uncertainty awareness, and selective abstention as essential design principles for the creation of safe and reliable medical imaging AI. Our results support the idea that incorporating explicit abstention behavior within agentic architectures could accelerate the development of clinically deployable AI systems for acute stroke intervention.
📄 Full Content
An Explainable Agentic AI Framework for
Uncertainty-Aware and Abstention-Enabled Acute
Ischemic Stroke Imaging Decisions
Md Rashadul Islam
Department of Computer Science and Engineering
Daffodil International University
Dhaka, Bangladesh
islam15-6062@s.diu.edu.bd
Abstract—Artificial intelligence (AI) models have demonstrated considerable potential in the imaging of acute ischemic stroke, especially in the detection and segmentation of lesions via computed tomography (CT) and magnetic resonance imaging (MRI). Nevertheless, the majority of existing approaches operate as black-box predictors, providing deterministic outputs without transparency regarding predictive uncertainty or explicit protocols for rejecting a decision when predictions are ambiguous. This deficiency presents considerable safety and trust issues within the context of high-stakes emergency radiology, where inaccuracies in automated decision-making could lead to negative consequences in clinical settings [1], [2].
In this paper, we introduce an explainable agentic AI framework targeted at uncertainty-aware and abstention-based decision-making in AIS imaging. It is based on a multistage agentic pipeline. In this framework, a perception agent performs lesion-aware image analysis, an uncertainty estimation agent estimates the predictive confidence at the slice level, and a decision agent dynamically decides whether to make or withhold the prediction based on prescribed uncertainty thresholds. This approach is different from previous stroke imaging frameworks, which have primarily aimed to improve the accuracy of segmentation or classification [3], [4]. Our framework explicitly emphasizes clinical safety, transparency, and decision-making processes that are congruent with human values.
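To make the three-agent control flow concrete, the following minimal sketch shows how such a pipeline could be wired together. All names (perception_agent, uncertainty_agent, decision_agent) and the threshold value tau are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the three-agent pipeline described above.
# Names, the placeholder model, and the threshold are assumptions
# for illustration, not the authors' implementation.
from dataclasses import dataclass

import numpy as np


@dataclass
class Decision:
    accepted: bool          # False means the system abstains
    label: str | None       # predicted label when accepted
    uncertainty: float      # slice-level uncertainty score
    rationale: str          # human-readable justification


def perception_agent(slice_img: np.ndarray) -> np.ndarray:
    """Lesion-aware analysis: returns class probabilities per slice.
    Stand-in for a trained detection/segmentation model."""
    probs = np.array([0.1, 0.9])  # [no-lesion, lesion], placeholder
    return probs / probs.sum()


def uncertainty_agent(probs: np.ndarray) -> float:
    """Slice-level predictive uncertainty via normalized entropy."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return float(entropy / np.log(len(probs)))  # scaled to [0, 1]


def decision_agent(probs: np.ndarray, u: float, tau: float = 0.5) -> Decision:
    """Issue a prediction when uncertainty is below tau; otherwise abstain."""
    if u >= tau:
        return Decision(False, None, u,
                        f"abstained: uncertainty {u:.2f} >= threshold {tau}")
    label = "lesion" if probs[1] > probs[0] else "no-lesion"
    return Decision(True, label, u,
                    f"accepted: uncertainty {u:.2f} < threshold {tau}")


def run_pipeline(slice_img: np.ndarray) -> Decision:
    probs = perception_agent(slice_img)
    u = uncertainty_agent(probs)
    return decision_agent(probs, u)


print(run_pipeline(np.zeros((256, 256))))
```

In this reading, the threshold tau governs the coverage-risk trade-off: lowering it makes the system abstain more often, deferring more slices to the radiologist.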
We validate the practicality and interpretability of our framework through qualitative and case-based examinations of typical stroke imaging scenarios. This examination demonstrates a natural correlation between uncertainty-driven abstention and the existence of lesions, fluctuations in image quality, and the specific anatomical definition being analyzed. Furthermore, the system integrates an explanation mode, offering visual and structural justifications to bolster decision-making, thereby addressing a crucial limitation observed in existing uncertainty-aware medical imaging systems: the absence of actionable interpretability [5], [6].
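As one possible reading of such an explanation mode, the sketch below pairs a pixel-wise uncertainty overlay with a structured textual rationale. The overlay recipe, the field names, and the binary-entropy choice are assumptions for illustration, not the paper's actual output format.

```python
# Illustrative "explanation mode": a pixel-wise uncertainty heatmap
# blended onto the slice, plus a structured justification record.
# All details here are assumptions, not the paper's format.
import numpy as np


def explain(slice_img: np.ndarray, pixel_probs: np.ndarray,
            decision_label: str, slice_uncertainty: float) -> dict:
    """Return a visual overlay and a structured justification.

    slice_img:   (H, W) grayscale slice, values in [0, 1]
    pixel_probs: (H, W) per-pixel lesion probability map
    """
    # Pixel-wise binary entropy highlights ambiguous regions.
    p = np.clip(pixel_probs, 1e-6, 1 - 1e-6)
    pixel_entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p)) / np.log(2)

    # Simple alpha blend of the slice and its uncertainty map.
    overlay = 0.6 * slice_img + 0.4 * pixel_entropy

    return {
        "decision": decision_label,
        "slice_uncertainty": round(slice_uncertainty, 3),
        "most_uncertain_region": np.unravel_index(
            int(np.argmax(pixel_entropy)), pixel_entropy.shape),
        "overlay": overlay,  # rendered for the radiologist
    }
```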
This research does not claim to establish a high-performance benchmark; instead, it presents agentic control, uncertainty awareness, and selective abstention as essential design principles for the creation of safe and reliable medical imaging AI. Our results support the idea that incorporating explicit abstention behavior within agentic architectures could accelerate the development of clinically deployable AI systems for acute stroke intervention.
Keywords: Acute Ischemic Stroke; Medical Imaging AI; Agentic Artificial Intelligence; Uncertainty Estimation; Abstention Mechanisms; Explainable AI; Clinical Decision Support; Safety-Critical AI
I. Introduction
Acute ischemic stroke continues to be one of the leading causes of mortality and long-term disability worldwide, placing a significant burden on patients, caregivers, and healthcare systems. Accurate and timely interpretation of medical imaging, particularly computed tomography (CT), CT angiography (CTA), and magnetic resonance imaging (MRI), is therefore critical for guiding treatment decisions. Artificial intelligence (AI) has emerged as a promising tool to support radiological assessment by enabling automated lesion detection, segmentation, and triage from stroke imaging data [3], [4].
While deep learning has advanced significantly in the domain of medical imaging, the majority of current AI systems for stroke diagnosis function as deterministic predictors, yielding consistent results without accounting for variables such as image quality, lesion ambiguity, or shifts in data distribution. This approach contrasts sharply with the practical realm of clinical radiology, where experienced radiologists often delay decision-making until additional imaging is obtained or process complex cases through multiple stages to accommodate substantial diagnostic uncertainty. In the absence of explicit mechanisms to identify uncertain images and abstain from decision-making, issues of safety and trust become increasingly critical, particularly in emergency situations where erroneous automated decisions could have profound clinical consequences [1], [2].
Recent work has emphasized the importance of uncertainty quantification in medical AI and the necessity of distinguishing reliable from non-reliable predictions [7]–[9]. In medical imaging, uncertainty-aware approaches such as Bayesian neural networks and deep ensembles (DE) have been studied. However, they usually assume that the results can only be expressed as a numerical confidence level and would no
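To illustrate the ensemble-based confidence estimation referenced above, the sketch below derives a slice-level uncertainty score from the predictive entropy of an averaged ensemble distribution. The random stand-in members, the two-class setup, and the ensemble size are assumptions for illustration only.

```python
# Sketch of deep-ensemble uncertainty estimation: average the
# members' predictive distributions and take the entropy of the
# mean as a confidence score. Members here are random stand-ins
# for trained networks.
import numpy as np

rng = np.random.default_rng(0)


def member_predict(slice_img: np.ndarray) -> np.ndarray:
    """Stand-in for one trained network's softmax output."""
    logits = rng.normal(size=2)
    e = np.exp(logits - logits.max())
    return e / e.sum()


def ensemble_uncertainty(slice_img: np.ndarray, n_members: int = 5) -> float:
    """Predictive entropy of the mean ensemble distribution, in [0, 1]."""
    mean_probs = np.mean(
        [member_predict(slice_img) for _ in range(n_members)], axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return float(entropy / np.log(len(mean_probs)))


u = ensemble_uncertainty(np.zeros((256, 256)))
print(f"slice-level uncertainty: {u:.2f}")
```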