Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse
Advanced AI systems offer substantial benefits but also introduce risks. In 2025, AI-enabled cyber offense has emerged as a concrete example. This technical report applies a quantitative risk modeling methodology (described in full in a companion paper) to this domain. We develop nine detailed cyber risk models that allow AI uplift to be analyzed as a function of AI benchmark performance. Each model decomposes attacks into steps using the MITRE ATT&CK framework and estimates how AI affects the number of attackers, attack frequency, probability of success, and resulting harm, thereby distinguishing different types of uplift. To produce these estimates with associated uncertainty, we employ both human experts, via a Delphi study, and LLM-based simulated experts, each mapping benchmark scores (from Cybench and BountyBench) to risk model factors. Individual estimates are aggregated through Monte Carlo simulation. The results indicate systematic uplift in attack efficacy, speed, and target reach, with different mechanisms of uplift across risk models. We intend our quantitative risk modeling to serve several purposes: helping cybersecurity teams prioritize mitigations, AI evaluators design benchmarks, AI developers make more informed deployment decisions, and policymakers obtain the information needed to set risk thresholds. Similar goals drove the shift from qualitative to quantitative assessment in other high-risk industries, such as nuclear power. We propose this methodology and initial application as a step in that direction for AI risk management. While our estimates carry significant uncertainty, publishing detailed quantified results enables experts to pinpoint exactly where they disagree, which helps collectively refine estimates, something that cannot be done with qualitative assessments alone.
💡 Research Summary
The paper presents a pioneering effort to quantitatively model cybersecurity risks arising from the misuse of advanced AI systems. Recognizing that AI‑enabled cyber offense became a concrete threat in 2025, the authors develop a structured methodology (detailed in full in a companion paper) and apply it to nine representative attack scenarios. Each scenario is decomposed into steps using the MITRE ATT&CK framework and characterized by four core risk factors: the number of attackers, attack frequency, probability of success, and damage per successful attack.
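To make the factorization concrete, here is a minimal sketch of how expected harm could be computed from the four factors. All values are hypothetical placeholders, not numbers from the paper:

```python
# Minimal sketch of the four-factor risk decomposition.
# Expected harm = attackers x attempts per attacker x P(success) x damage.
def expected_harm(n_attackers, attempts_per_attacker, p_success, harm_per_success):
    return n_attackers * attempts_per_attacker * p_success * harm_per_success

# Hypothetical placeholder values (not from the paper):
baseline = expected_harm(500, 12, 0.05, 2e5)    # negligible AI assistance
with_ai  = expected_harm(800, 20, 0.08, 2.5e5)  # AI-assisted scenario

print(f"Uplift factor: {with_ai / baseline:.1f}x")
```

Because the four factors multiply, modest AI-driven changes in each one can compound into a large overall uplift, which is why the later findings distinguish "quantity" from "quality" effects.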
Baseline risk models (i.e., with negligible AI assistance) are built first, drawing on threat‑intelligence data, historical incidents, and expert review. These baselines are encoded as Bayesian networks, providing a probabilistic foundation for later uplift calculations.
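As a rough illustration of how such a network supports probabilistic reasoning, the toy sketch below forward-samples a single attack chain in plain Python rather than a dedicated Bayesian-network library. The stage names and probabilities are invented for illustration; the paper's baselines are far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-tactic success probabilities for one attack chain;
# a real baseline model would condition these on attacker skill and tooling.
stage_p = {"initial-access": 0.30, "execution": 0.60,
           "lateral-movement": 0.40, "impact": 0.70}

def chain_success_rate(n=100_000):
    """Forward-sample n attacks; a chain succeeds only if every stage does."""
    ok = np.ones(n, dtype=bool)
    for p in stage_p.values():
        ok &= rng.random(n) < p
    return ok.mean()

print(f"End-to-end success probability ~ {chain_success_rate():.3f}")
```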
To capture AI‑driven “uplift,” the authors identify two key risk indicators (KRIs): benchmark performance scores from Cybench and BountyBench. They then map these scores to changes in each risk factor. This mapping is obtained through a Delphi process that combines two sources of expertise: (1) a panel of nine human cybersecurity experts who provide point estimates and confidence intervals over two rounds, and (2) simulated experts generated by large language models (LLMs) using carefully crafted prompts. Human experts discuss their estimates between rounds, while LLM estimators produce scalable, repeatable assessments. The study finds that LLMs closely match human judgments for probability‑type factors but tend to be more conservative for quantity‑type factors (e.g., number of actors, attempts per actor).
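The precise score-to-factor mapping is elicited from the experts rather than fixed by formula. The sketch below assumes a simple linear interpolation between a factor's baseline value and its elicited value at benchmark saturation, purely to illustrate the shape such a mapping could take:

```python
def factor_at_score(score, baseline_value, saturation_value):
    """Interpolate a risk factor between its baseline (score = 0) and its
    expert-elicited value at benchmark saturation (score = 1).
    The linear form is an assumption for illustration only."""
    return baseline_value + score * (saturation_value - baseline_value)

# e.g., a hypothetical probability of success at a Cybench solve rate of 0.45:
p_success = factor_at_score(0.45, baseline_value=0.05, saturation_value=0.20)
print(f"P(success) at score 0.45: {p_success:.2f}")  # roughly 0.12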
All individual estimates are fitted to appropriate probability distributions (beta, gamma, normal, etc.) and fed into the Bayesian networks. Monte Carlo simulation (≈100,000 draws) then propagates uncertainty through each network, yielding overall risk distributions and the contribution of each factor.
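A compressed sketch of that fitting-and-propagation step is shown below. The full models propagate through Bayesian networks; this sketch uses the simple four-factor product from above, with hypothetical distribution parameters standing in for the paper's fitted values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 100_000  # draw count comparable to the paper's simulation size

# Hypothetical fitted distributions for one risk model's four factors:
n_attackers = stats.gamma(a=4.0, scale=150.0).rvs(N, random_state=rng)
attempts    = stats.gamma(a=3.0, scale=5.0).rvs(N, random_state=rng)
p_success   = stats.beta(a=2.0, b=30.0).rvs(N, random_state=rng)
harm_usd    = stats.lognorm(s=1.0, scale=2e5).rvs(N, random_state=rng)

# Propagate uncertainty by combining the factor draws elementwise:
total_harm = n_attackers * attempts * p_success * harm_usd
lo, med, hi = np.percentile(total_harm, [5, 50, 95])
print(f"Median harm {med:.2e} USD, 90% interval [{lo:.2e}, {hi:.2e}]")
```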
Key findings include:
• Even at current state‑of‑the‑art (SOTA) AI capabilities, total expected harm exceeds the baseline, indicating a measurable uplift.
• At a "saturation" point, when AI can reliably perform all benchmarked tasks, the uplift becomes substantially larger.
• The uplift is not uniform across attacker skill levels; AI benefits both low‑skill and high‑skill actors, though the pattern varies by scenario.
• No single risk factor dominates; instead, AI simultaneously boosts the number of actors, the frequency of attacks, success probabilities, and per‑attack damage, reflecting both "quantity" and "quality" effects.
• Among the MITRE ATT&CK tactics, Execution, Impact, and Initial Access show the greatest AI‑driven uplift, but all 14 tactics exhibit some increase.
Uncertainty analysis reveals that human experts exhibit higher variance, especially for quantity‑type factors and for harder benchmark tasks. LLM estimators report narrower confidence intervals, suggesting a possible underestimation of real‑world variability. The authors discuss the implications of these differences for the reliability of automated expert elicitation.
The paper also candidly addresses limitations: limited empirical data on AI‑augmented attacks, potential subjectivity in Bayesian network design, dependence on the quality of benchmark scores, and the nascent nature of LLM‑based elicitation. Consequently, the authors caution against using the specific numerical outputs for operational decision‑making at this stage.
In conclusion, the study demonstrates that a systematic, probabilistic risk‑modeling pipeline can translate AI capability benchmarks into concrete, forward‑looking cyber‑risk estimates. This approach promises multiple benefits: better prioritization of defensive measures, guidance for benchmark developers on where new tests would most reduce uncertainty, more informed deployment decisions for AI developers, and a quantitative basis for regulators to set risk thresholds. Future work should focus on enriching data sources, refining LLM elicitation techniques, expanding the scenario set, and integrating real‑time threat intelligence to create dynamic, adaptive risk models.