📝 Original Info
- Title: Bridging Efficiency and Safety: Formal Verification of Neural Networks with Early Exits
- ArXiv ID: 2512.20755
- Date: 2025-12-23
- Authors: Yizhak Yisrael Elboher, Avraham Raviv, Amihay Elboher, Zhouxing Shi, Omri Azencot, Hillel Kugler, Guy Katz
📄 Full Content
Bridging Efficiency and Safety: Formal Verification of
Neural Networks with Early Exits
Yizhak Yisrael Elboher1⋆, Avraham Raviv2⋆, Amihay Elboher3⋆, Zhouxing Shi4, Omri
Azencot3, Hillel Kugler2, and Guy Katz1
1 The Hebrew University of Jerusalem, Israel
yizhak.elboher@mail.huji.ac.il, guy.katz@cs.huji.ac.il
2 Bar Ilan University, Israel
avraham.raviv@biu.ac.il, hillel.kugler@biu.ac.il
3 Ben-Gurion University of the Negev, Israel
amihay@bgu.ac.il, omria@bgu.ac.il
4 University of California, Riverside, USA
zhouxing.shi@ucr.edu
Abstract. Ensuring the safety and efficiency of AI systems is a central goal
of modern research. Formal verification provides guarantees of neural network
robustness, while early exits improve inference efficiency by enabling intermediate
predictions. Yet verifying networks with early exits introduces new challenges due
to their conditional execution paths. In this work, we define a robustness property
tailored to early exit architectures and show how off-the-shelf solvers can be used
to assess it. We present a baseline algorithm, enhanced with an early stopping
strategy and heuristic optimizations that maintain soundness and completeness.
Experiments on multiple benchmarks validate our framework’s effectiveness and
demonstrate the performance gains of the improved algorithm. Alongside the
natural inference acceleration provided by early exits, we show that they also
enhance verifiability, enabling more queries to be solved in less time compared to
standard networks. Together with a robustness analysis, we show how these metrics
can help users navigate the inherent trade-off between accuracy and efficiency.
1 Introduction
Deep Neural Networks (DNNs) are increasingly deployed in critical domains such as
virtual assistants [1] and medical diagnostics [2], making their reliability essential. Yet,
they are vulnerable to adversarial perturbations: small input modifications that can cause
incorrect predictions [3]. This vulnerability has driven extensive research on adversarial
attacks and defenses [4], highlighting the need for robust and trustworthy AI systems.
Formal verification has emerged as an effective approach for ensuring DNN correctness
with respect to specified properties [5, 6, 7, 8]. It rigorously analyzes a network's
behavior to guarantee compliance with critical requirements across all possible inputs
within a defined domain [9]. By providing mathematical guarantees for properties like
robustness and safety, it offers a valuable tool for building reliable AI systems and
supports adoption in high-stakes domains where reliability is crucial [10, 11].
⋆Equal contribution.
In addition to robustness and safety issues, another limitation of DNNs lies in their
high computational cost, which makes both training and inference energy-intensive [12,
13, 14] and limits their use in low-resource systems [15, 16, 17]. Even for relatively
simple inputs, the inference process of a DNN can be unnecessarily complex and time-
consuming. A promising avenue for addressing this computational burden is the use
of dynamic inference techniques, such as early exit (EE) [18, 19]. EE mechanisms
allow a network to terminate computation prematurely once a sufficiently confident
prediction is reached at an intermediate stage, thereby reducing computational overhead
without compromising accuracy. EE has been adopted in a wide range of domains,
including natural language processing [12], vision [13], and speech recognition [14],
and is increasingly recognized as a powerful tool for optimizing DNN performance in
resource-constrained environments [15, 20, 21, 22, 23].
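The exit mechanism just described can be sketched in a few lines of plain Python. The max-softmax confidence test, the per-layer head placement, and the threshold value are illustrative assumptions (EE variants differ in both the gating criterion and where heads are attached), not the specific architecture studied in this paper:

```python
import math

def relu(v):
    return [max(x, 0.0) for x in v]

def affine(W, b, v):
    # W is a list of rows; returns W @ v + b
    return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def early_exit_forward(x, backbone, exit_heads, threshold=0.9):
    """Forward pass with early exits.

    backbone:   list of (W, b) layers, each applied as relu(W @ h + b)
    exit_heads: dict {layer index: (W_e, b_e)} classifier heads; the
                last backbone layer must carry the final head
    Returns (predicted class, index of the exit that fired)."""
    h = list(x)
    last = len(backbone) - 1
    for i, (W, b) in enumerate(backbone):
        h = relu(affine(W, b, h))
        if i in exit_heads:
            probs = softmax(affine(*exit_heads[i], h))
            conf = max(probs)
            # Fire this exit if it is confident enough -- or if it is
            # the mandatory final exit of the network.
            if conf >= threshold or i == last:
                return probs.index(conf), i
    raise ValueError("no exit head attached to the final layer")
```

Raising `threshold` trades compute for confidence: fewer inputs exit early, but those that do carry a stronger prediction score.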
Although EE strategies have demonstrated their potential to enhance runtime efficiency,
their implications for formal verification remain largely unexplored. The architectural
modification of adding intermediate exits introduces two key challenges. First, the
execution flow can vary from input to input, posing technical difficulties for classical
verification techniques that assume a fixed output layer. Second, the conditional decision
logic governing the exits must itself be captured by the verification procedure.
We address this gap by introducing the formal verification of DNNs with EEs. Our
focus is on local robustness, a property that ensures the network’s predictions remain
consistent within a small neighborhood around a given input. To this end, we propose
an algorithm tailored to verify local robustness in DNNs with early exits, enhanced
with heuristics that effectively reuse partial results to minimize redundancy and improve
scalability. These advances provide a robust framework for verifying DNNs with EEs,
contributing to both their reliability and their practical usability in real-world applications.
We further leverage our algorithm to enable early verification of standard networks by
augmenting them with early exits.
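To make the verified property concrete, local robustness of a classifier f around a reference input x_0 with perturbation radius ε is standardly written as the first implication below; the early-exit analogue after it, in which e(x) denotes the index of the exit taken on input x and g_j the j-th exit head, is an illustrative formulation and not necessarily the exact property defined later in the paper:

```latex
% Standard local robustness of f at x_0 with radius \epsilon:
\forall x \in \mathbb{R}^n:\quad
\|x - x_0\|_\infty \le \epsilon
\;\Longrightarrow\;
\operatorname*{argmax}_i f(x)_i = \operatorname*{argmax}_i f(x_0)_i

% One natural early-exit analogue: whichever exit fires on a
% perturbed input, its label must agree with the label at x_0:
\forall x \in \mathbb{R}^n:\quad
\|x - x_0\|_\infty \le \epsilon
\;\Longrightarrow\;
\operatorname*{argmax}_i \, g_{e(x)}(x)_i = \operatorname*{argmax}_i f(x_0)_i
```

The second form is strictly harder to check than the first, since the exit index e(x) itself depends on the perturbed input and can differ across the neighborhood.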
In this work, we contribute to the formal ver