Numerical Sensitivity and Efficiency in the Treatment of Epistemic and Aleatory Uncertainty
The treatment of both aleatory and epistemic uncertainty by recent methods often requires a high computational effort. In this abstract, we propose a numerical sampling method that lightens the computational burden of treating information by means of so-called fuzzy random variables.
💡 Research Summary
The paper addresses the well‑known computational burden of jointly treating aleatory (random) and epistemic (knowledge‑based) uncertainties with fuzzy random variables. Traditional hybrid approaches first propagate the aleatory variables through Monte‑Carlo simulation and then, for each Monte‑Carlo sample, evaluate the epistemic variables by applying the fuzzy extension principle over a set of α‑cuts. Even a modest discretization quickly escalates: with 21 α‑levels (α = 0, 0.05, …, 1), 100 Monte‑Carlo samples already require 100 × 21 = 2,100 interval evaluations, which is often prohibitive for complex models such as those used in nuclear safety.
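To make the cost concrete, here is a minimal Python sketch of the traditional two‑loop hybrid scheme described above. The toy model, the triangular fuzzy number, and the 21‑level α discretization are illustrative assumptions, not the paper's actual test case.

```python
import random

def model(x, e):
    """Toy model, monotone in both the aleatory input x and the epistemic input e."""
    return x + 2.0 * e

def alpha_cut(alpha, core=1.0, support=(0.0, 2.0)):
    """Interval of a triangular fuzzy number at level alpha (hypothetical example)."""
    lo = support[0] + alpha * (core - support[0])
    hi = support[1] - alpha * (support[1] - core)
    return lo, hi

random.seed(0)
n_mc, n_cuts = 100, 21            # 21 alpha-levels: alpha = 0, 0.05, ..., 1
evaluations = 0
results = []
for _ in range(n_mc):             # outer aleatory loop (Monte-Carlo)
    x = random.gauss(0.0, 1.0)
    cuts = []
    for k in range(n_cuts):       # inner epistemic loop (extension principle)
        lo, hi = alpha_cut(k / (n_cuts - 1))
        # for a monotone model the interval image is given by its endpoints
        cuts.append((model(x, lo), model(x, hi)))
        evaluations += 1          # one interval computation per alpha-cut
    results.append(cuts)

print(evaluations)                # 100 x 21 = 2100 interval computations
```

For a non‑monotone model, each interval computation would itself require an optimization over the α‑cut, making the nested loop even more expensive.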
To overcome this limitation, the authors propose the Random Fuzzy (RaFu) method. The key innovation is to move a decision step before uncertainty propagation, allowing a decision maker (DM) to specify a triplet of parameters (γ_S, γ_E, γ_A):
- γ_S – the statistical quantity of interest for the aleatory part (e.g., a specific percentile).
- γ_E – the fuzzy quantity that defines how epistemic uncertainty is represented (e.g., all α‑cuts, a single random α‑cut, or the extreme α = 0 and α = 1 cuts).
- γ_A – the desired numerical accuracy or confidence level for the final result (e.g., 99 % coverage).
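The DM's triplet can be pictured as a small specification object handed to the sampler before any propagation starts. The class name, field names, and strategy labels below are hypothetical illustrations of the (γ_S, γ_E, γ_A) choices listed above, not the SUNSET interface.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class RaFuSpec:
    """Decision-maker triplet fixed before propagation (illustrative sketch)."""
    gamma_s: float                                 # statistical quantity, e.g. 0.95 for the 95th percentile
    gamma_e: Literal["all", "random", "extreme"]   # epistemic representation: all cuts, one random cut, or {0, 1}
    gamma_a: float                                 # required confidence level on numerical accuracy, e.g. 0.99

# e.g. a 95th percentile, bounded by the optimistic/pessimistic pair, at 99 % confidence
spec = RaFuSpec(gamma_s=0.95, gamma_e="extreme", gamma_a=0.99)
print(spec)
```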
Given these specifications, RaFu uses order‑statistics results, notably Wilks’ formula, to compute the minimal Monte‑Carlo sample size required to achieve the requested γ_A. The method then samples the epistemic variables according to γ_E: if the DM only needs the mean result, a single random α‑cut per Monte‑Carlo run suffices (converging to the mean of the cumulated distributions); if γ_E = {0, 1}, two runs per sample are performed for the most optimistic and most pessimistic cases. Consequently, the number of model evaluations drops dramatically: from 2,100 to 100 for the mean‑based approach, and from 2,100 to 200 for the optimistic/pessimistic pair.
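The sample‑size step can be sketched with the standard one‑sided Wilks formula: the smallest n such that 1 − γⁿ ≥ β, where γ is the percentile of interest and β the required confidence. The function name and interface below are assumptions for illustration; the paper only states that order‑statistics results of this kind are used.

```python
import math

def wilks_sample_size(gamma: float, beta: float) -> int:
    """Smallest n such that the maximum of n i.i.d. runs is a one-sided upper
    tolerance bound for the gamma-percentile with confidence beta,
    i.e. the smallest n satisfying 1 - gamma**n >= beta (one-sided Wilks formula)."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

print(wilks_sample_size(0.95, 0.95))   # 59: the classic 95/95 result
print(wilks_sample_size(0.95, 0.99))   # 90 runs for 99 % confidence on the 95th percentile
```

Because n depends only on (γ, β) and not on the model, the required Monte‑Carlo budget is known before a single simulation is run, which is precisely what lets RaFu fix its effort up front.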
RaFu also integrates numerical accuracy directly into the propagation process, a feature rarely discussed in prior hybrid uncertainty literature. By explicitly controlling γ_A, practitioners can quantify the sampling error and decide whether additional simulations are needed, thereby aligning computational effort with the required confidence in the results.
The authors note that existing post‑processing schemes (Baudrit et al.’s mean of cumulated distributions and Ferson & Ginzburg’s optimistic/pessimistic pair) can be recovered simply by appropriate choices of γ_E within the RaFu framework. Thus, RaFu serves as a unifying, more efficient umbrella for hybrid uncertainty analysis.
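How the two post‑processing schemes fall out of the γ_E choice can be sketched as follows; the strategy labels and helper function are illustrative assumptions, not the SUNSET implementation.

```python
import random

def sample_alphas(strategy: str):
    """Alpha-levels drawn per Monte-Carlo run for a given gamma_E choice (sketch).
    'random'  -> one random alpha per run; averaging over runs recovers
                 Baudrit et al.'s mean of cumulated distributions.
    'extreme' -> the two bounding cuts alpha = 0 and alpha = 1, i.e.
                 Ferson & Ginzburg's optimistic/pessimistic pair."""
    if strategy == "random":
        return [random.random()]
    if strategy == "extreme":
        return [0.0, 1.0]
    raise ValueError(f"unknown gamma_E strategy: {strategy}")

random.seed(0)
n_mc = 100
runs_random = sum(len(sample_alphas("random")) for _ in range(n_mc))    # 100 model runs
runs_extreme = sum(len(sample_alphas("extreme")) for _ in range(n_mc))  # 200 model runs
print(runs_random, runs_extreme)
```

Either way, the epistemic dimension costs one or two evaluations per sample instead of one per α‑cut, which is the source of the reduction from 2,100 evaluations quoted above.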
Implemented in the SUNSET software at the French Institute for Radiological Protection and Nuclear Safety (IRSN), the RaFu method has already been applied to realistic nuclear safety models. The paper promises a full algorithmic description and a convergence proof in an extended version.
In summary, the RaFu method offers: (1) a pre‑propagation decision‑making step that tailors sampling effort to the analyst’s objectives; (2) a rigorous, order‑statistics‑based determination of minimal sample size to meet prescribed accuracy; (3) a substantial reduction in computational cost while preserving the ability to reconstruct traditional fuzzy‑probabilistic results; and (4) the first systematic incorporation of numerical accuracy into hybrid aleatory‑epistemic uncertainty propagation. This contribution is poised to make hybrid uncertainty quantification more practical for high‑stakes engineering domains.