On the elicitation of continuous, symmetric, unimodal distributions


In this brief note, we highlight some difficulties that can arise when fitting a continuous, symmetric, unimodal distribution to a set of an expert’s judgements. A simple analysis shows it is possible to fit a Cauchy distribution to an expert’s beliefs when their beliefs actually follow a normal distribution. This example stresses the need for careful distribution fitting and for feedback to the expert about what the fitted distribution implies about their beliefs.


💡 Research Summary

The paper by John Paul Gosling (June 2021) addresses a subtle but important problem in the elicitation of expert opinion for Bayesian prior specification: when an expert’s beliefs are summarized only by a few quantiles (median, quartiles, perhaps additional percentiles), many different continuous, symmetric, unimodal distributions can satisfy those constraints. The author demonstrates, with a concrete example, that an expert who truly holds a standard normal belief can be represented equally well by a Cauchy distribution—a heavy‑tailed alternative—if the fitting procedure relies solely on the reported quantiles.

The exposition begins by defining the elicitation context: an expert is asked to provide probabilities for the continuous variable X falling below certain thresholds. In the example, the expert supplies P(X < −0.6745)=0.25, P(X < 0)=0.5, and P(X < 0.6745)=0.75, which are exactly the quartiles of a standard normal distribution. Assuming symmetry and unimodality, the author notes that an infinite family of distributions can interpolate these three points. Figure 1 (as described) shows three such cumulative distribution functions (CDFs): a normal, a t‑distribution with 5 degrees of freedom, and a Cauchy. Their central portions coincide, but their tails differ dramatically.
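The quartile matching described above is easy to verify numerically. The sketch below is a minimal illustration using SciPy's standard location–scale parameterisations (not code from the paper): it rescales a t-distribution with 5 degrees of freedom and a Cauchy so that all three distributions share the quartiles at 0 and ±0.6745, then compares an upper-tail probability.

```python
from scipy import stats

q75 = stats.norm.ppf(0.75)  # ≈ 0.6745, the upper quartile of N(0, 1)

# Rescale t(5) and Cauchy so their upper quartiles also sit at q75.
t5 = stats.t(df=5, scale=q75 / stats.t.ppf(0.75, df=5))
cauchy = stats.cauchy(scale=q75 / stats.cauchy.ppf(0.75))

# All three CDFs agree at the three elicited points...
for dist in (stats.norm, t5, cauchy):
    assert abs(dist.cdf(0.0) - 0.5) < 1e-8
    assert abs(dist.cdf(q75) - 0.75) < 1e-8
    assert abs(dist.cdf(-q75) - 0.25) < 1e-8

# ...but their tails are orders of magnitude apart, e.g. P(X > 3):
print(stats.norm.sf(3), t5.sf(3), cauchy.sf(3))
```

The three CDFs are indistinguishable at the elicited points, yet the Cauchy puts far more probability beyond 3 than the normal does, which is exactly the tail discrepancy the note warns about.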

To capture the inevitable imprecision in expert judgements, the paper introduces a second pair of percentiles (10% and 90%) and models each reported probability as lying in a uniform interval of width 0.05 (i.e., ±0.025) around the nominal value, reflecting realistic imprecision in the stated numbers. By treating the expert’s statements as random variables with uniform uncertainty, the author propagates this uncertainty through the fitting process, producing Figure 2. Even under this uncertainty, a wide family of symmetric, unimodal distributions (including all three candidates) can still be fitted, underscoring the identifiability problem.
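This uncertainty propagation can be mimicked with a short Monte Carlo sketch. The ±0.025 jitter, the chosen percentiles, and the least-squares normal fit below are illustrative assumptions rather than the paper's exact procedure: each reported probability is perturbed within its interval, a zero-centred normal is refitted, and the spread of the fitted scale shows how loosely the judgements pin it down.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Elicited points: values x_i with reported probabilities P(X < x_i),
# here the 10/25/50/75/90 percentiles of N(0, 1).
xs = np.array([-1.2816, -0.6745, 0.0, 0.6745, 1.2816])
ps = np.array([0.10, 0.25, 0.50, 0.75, 0.90])

sigmas = []
for _ in range(1000):
    # Each reported probability is only accurate to within +/-0.025.
    p_jit = np.clip(ps + rng.uniform(-0.025, 0.025, size=ps.size), 0.01, 0.99)
    # Least-squares fit of N(0, sigma^2): x_i ~ sigma * Phi^{-1}(p_i).
    z = stats.norm.ppf(p_jit)
    sigmas.append(np.dot(z, xs) / np.dot(z, z))

# Range of fitted sigma values consistent with the imprecise judgements.
print(np.percentile(sigmas, [2.5, 97.5]))
```

Every draw in this range is a normal distribution compatible with the stated judgements; repeating the exercise for t or Cauchy families gives analogous bands, so the interval-valued judgements still fail to single out one tail behaviour.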

The key insight is that without additional information about tail behavior, the choice of prior can be essentially arbitrary, and the consequences can be severe when data are scarce or when the prior dominates the posterior. The paper reviews existing methods for extracting tail information—such as eliciting tail‑ratios (Kadane et al., 1980) or using hypothetical data (Garthwaite & Al‑Awadhi, 2001)—and acknowledges their limitations.

To mitigate the risk of mis‑specifying the prior, the author recommends two practical strategies. First, collect more quantile judgements that probe deeper into the tails (e.g., the 95th or 99th percentile) or ask directly about tail ratios. Second, incorporate a feedback loop: after fitting a candidate distribution, present the expert with derived quantities (e.g., the implied tertiles or tail probabilities) and ask whether these align with their intuition. If not, the expert can revise the original judgements or provide additional ones. This iterative approach helps align the mathematical representation with the expert’s true beliefs.
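The feedback step can be made concrete: after fitting, compute quantities the expert never stated and ask whether they ring true. As a hedged illustration (again using SciPy, with the Cauchy scaled to match the standard normal's quartiles), the implied 98% central interval differs starkly between the two fits:

```python
from scipy import stats

q75 = stats.norm.ppf(0.75)                       # upper quartile shared by both fits
cauchy = stats.cauchy(scale=q75 / stats.cauchy.ppf(0.75))

# Feedback quantities: what each fitted distribution implies about the tails.
for name, dist in [("normal", stats.norm), ("cauchy", cauchy)]:
    lo, hi = dist.ppf(0.01), dist.ppf(0.99)
    print(f"{name}: 98% of probability lies in [{lo:.2f}, {hi:.2f}]")
```

The normal fit implies a 98% interval of roughly ±2.3, while the quartile-matched Cauchy implies one wider than ±20; an expert shown both intervals can usually say immediately which misrepresents their beliefs.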

The paper also emphasizes the importance of sensitivity analysis. By fitting several plausible distributions (normal, t, Cauchy) and examining how posterior inferences change, analysts can assess the robustness of conclusions to prior tail assumptions. When the analysis is insensitive, the heavy‑tailed choice may be harmless; when sensitivity is high, the analyst must either gather more expert information or adopt a more conservative prior specification.
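A minimal sensitivity check along these lines can be sketched under assumed ingredients not taken from the paper: a single surprising observation x = 4, a N(θ, 1) likelihood, and posterior means computed by grid integration under each of the three quartile-matched priors.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

q75 = stats.norm.ppf(0.75)
priors = {
    "normal": stats.norm,
    "t5": stats.t(df=5, scale=q75 / stats.t.ppf(0.75, df=5)),
    "cauchy": stats.cauchy(scale=q75 / stats.cauchy.ppf(0.75)),
}

theta = np.linspace(-20.0, 20.0, 4001)      # grid over the unknown mean
x_obs = 4.0                                 # one observation far in the tail
lik = stats.norm.pdf(x_obs, loc=theta)      # N(theta, 1) likelihood

means = {}
for name, prior in priors.items():
    post = lik * prior.pdf(theta)           # unnormalised posterior on the grid
    post /= trapezoid(post, theta)
    means[name] = trapezoid(theta * post, theta)
    print(name, round(means[name], 2))
```

With the conjugate normal prior the posterior mean is pulled halfway back toward zero, whereas the Cauchy prior, matched to exactly the same quartiles, largely lets the observation stand; the gap between the two is a direct measure of the prior-tail sensitivity the paper advocates checking.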

In summary, Gosling’s note serves as a cautionary illustration that expert elicitation is inherently imprecise, especially regarding tail behavior. It calls for explicit modeling of judgment uncertainty, systematic feedback to experts, and thorough sensitivity checks. These recommendations aim to prevent the inadvertent adoption of priors—such as a Cauchy distribution—that misrepresent the expert’s actual beliefs and could lead to misleading Bayesian inferences.

