Vulnerability Assessment Combining CVSS Temporal Metrics and Bayesian Networks
Vulnerability assessment is a critical challenge in cybersecurity, particularly in industrial environments. This work incorporates the temporal dimension into vulnerability assessment, an aspect largely neglected in the existing literature. Specifically, the paper refines vulnerability assessment and prioritization by integrating Common Vulnerability Scoring System (CVSS) Temporal Metrics with Bayesian Networks to account for exploit availability, remediation efforts, and confidence in reported vulnerabilities. Through probabilistic modeling, Bayesian Networks enable a structured and adaptive evaluation of vulnerabilities, supporting more accurate prioritization and decision-making. The proposed approach dynamically computes the Temporal Score and updates the CVSS Base Score by processing exploit and fix data from vulnerability databases.
💡 Research Summary
The paper addresses a notable gap in vulnerability assessment for industrial environments by integrating the temporal dimension of the Common Vulnerability Scoring System (CVSS) with Bayesian Networks (BN). While CVSS version 3.1 already defines three metric groups—Base, Temporal, and Environmental—the Temporal group (Exploit Code Maturity, Remediation Level, Report Confidence) is rarely used in practice, and existing literature seldom combines it with probabilistic risk models.
The authors propose a two‑stage methodology. First, they develop an automated script that ingests a list of CVE identifiers, queries the National Vulnerability Database (NVD) for each Base Score, and then interrogates three exploit repositories (Metasploit, Exploit‑DB, PacketStorm) to determine the Exploit Code Maturity (ECM) value. The script also checks OpenVAS for remediation information to assign a Remediation Level (RL) and sets the Report Confidence (RC) value. The Temporal Score is then calculated as:
Temporal Score = RoundUp(Base Score × ECM × RL × RC)
where RoundUp, as defined in the CVSS v3.1 specification, returns the smallest value, specified to one decimal place, that is greater than or equal to its input. This pipeline yields a dynamic, up‑to‑date severity figure that reflects the current state of exploit availability, patch deployment, and reporting confidence.
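A minimal sketch of this computation is shown below. The multiplier values and the RoundUp function follow the CVSS v3.1 specification (metric levels abbreviated with their standard CVSS letters, e.g. F = Functional exploit, O = Official fix); the example inputs are illustrative, not taken from the paper.

```python
# CVSS v3.1 Temporal Score sketch. Multiplier values and RoundUp come from
# the CVSS v3.1 specification; the example CVE inputs are illustrative.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def round_up(value: float) -> float:
    """CVSS v3.1 RoundUp: smallest number, to one decimal, >= value."""
    scaled = int(round(value * 100000))
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (scaled // 10000 + 1) / 10.0

def temporal_score(base: float, ecm: str, rl: str, rc: str) -> float:
    """Temporal Score = RoundUp(Base Score × ECM × RL × RC)."""
    return round_up(base
                    * EXPLOIT_CODE_MATURITY[ecm]
                    * REMEDIATION_LEVEL[rl]
                    * REPORT_CONFIDENCE[rc])

# Base 9.8, Functional exploit (F), Official fix (O), Confirmed report (C)
print(temporal_score(9.8, "F", "O", "C"))  # 9.1
```

Because all Temporal multipliers are at most 1, the Temporal Score can only equal or lower the Base Score, which matches the paper's observation that most adjusted scores decrease.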
Second, the authors embed these scores into a Bayesian Attack Graph (BAG), a directed acyclic graph where each node represents a specific vulnerability and edges encode causal attack progression. Conditional Probability Tables (CPTs) are constructed by normalizing the CVSS score (Base or Temporal) to a 0‑1 range (P = Score/10). Bayesian inference then propagates evidence (e.g., a patch being applied) throughout the graph, updating the compromise probability of all downstream nodes.
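The score-to-probability mapping and forward inference can be sketched as follows. The normalization P = Score/10 is the paper's; the noisy-OR combination of multiple parents and the three-node example network are illustrative assumptions, since the paper does not fully specify its CPT structure.

```python
import math
from graphlib import TopologicalSorter  # Python 3.9+

def propagate(scores, edges):
    """Forward-propagate compromise probabilities through a BAG.

    scores: {node: CVSS score}; edges: {node: set of parent nodes}.
    A root is compromised with P = Score/10; an inner node is reachable
    if at least one parent is compromised (noisy-OR assumption), then
    exploited with its own P = Score/10.
    """
    prob = {}
    for node in TopologicalSorter(edges).static_order():
        p_exploit = scores[node] / 10.0
        parents = edges[node]
        if not parents:
            prob[node] = p_exploit
        else:
            p_reach = 1.0 - math.prod(1.0 - prob[p] for p in parents)
            prob[node] = p_reach * p_exploit
    return prob

# Illustrative chain: perimeter firewall -> workstation -> PLC
edges = {"fw": set(), "ws": {"fw"}, "plc": {"ws"}}
base     = {"fw": 9.8, "ws": 8.8, "plc": 9.1}   # Base Scores
temporal = {"fw": 9.1, "ws": 7.9, "plc": 8.2}   # computed Temporal Scores
print(round(propagate(base, edges)["plc"], 3))      # 0.785
print(round(propagate(temporal, edges)["plc"], 3))  # 0.589
```

Even with these made-up scores, the drop in the PLC's inferred compromise probability when Temporal Scores replace Base Scores mirrors the effect the paper reports for its critical nodes.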
The approach is demonstrated on a simulated industrial control system (ICS) modeled after the Purdue Enterprise Reference Architecture. The network is divided into five layers—from perimeter firewalls down to programmable logic controllers (PLCs). Twelve representative CVEs are identified across the layers, and both their Base Scores and computed Temporal Scores are listed. In most cases the Temporal Score is lower, reflecting mitigation actions or the lack of publicly available exploits. When the BAG is evaluated using Temporal Scores, the inferred exploitation probabilities for critical PLC nodes drop significantly compared to using only Base Scores, illustrating the practical benefit of incorporating time‑sensitive information.
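A toy illustration of this effect, with hypothetical scores rather than the paper's twelve CVEs: applying an official patch on a workstation sets its Remediation Level multiplier to 0.95, so a Temporal Score of 8.8 × 0.95 rounds up to 8.4, and the compromise probability of the downstream PLC falls accordingly.

```python
# Toy evidence-update illustration (hypothetical scores, not the paper's
# CVEs). Along a linear attack chain, the compromise probability of the
# last node is the product of the per-node probabilities Score/10.
def chain_probability(scores):
    """P(last node compromised) for a linear chain of CVSS scores."""
    p = 1.0
    for s in scores:
        p *= s / 10.0
    return p

# Before: scores for firewall -> workstation -> PLC
print(round(chain_probability([9.8, 8.8, 9.1]), 3))  # 0.785
# After an official workstation patch: 8.8 * 0.95 (RL = O) rounds up to 8.4
print(round(chain_probability([9.8, 8.4, 9.1]), 3))  # 0.749
```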
Key contributions include: (1) an end‑to‑end, reproducible pipeline for extracting and weighting CVSS Temporal metrics from multiple public data sources; (2) a novel mapping of CVSS scores to Bayesian CPTs, enabling dynamic risk assessment; (3) a concrete case study that shows how temporal adjustments can materially affect prioritization decisions in an industrial context.
However, the study has limitations. The CPT construction is overly simplistic—using a linear scaling of the score—so it does not capture complex inter‑vulnerability dependencies or multi‑stage exploit chains. The reliance on a fixed set of external databases raises concerns about data freshness; missing or delayed exploit disclosures could skew ECM values. The evaluation is confined to a simulated environment; real‑world deployment would need to address integration with existing security information and event management (SIEM) systems and handle noisy evidence. Finally, the paper provides limited detail on the exact BN structure learning process, which hampers reproducibility.
Future work could explore richer Bayesian models that incorporate multi‑state variables, learn CPT parameters from historical incident data, and validate the framework in live industrial settings. Extending the approach to incorporate Environmental metrics and to automate evidence ingestion from threat‑intel feeds would further enhance its utility as a decision‑support tool for security operations centers managing critical infrastructure.