Updating with incomplete observations


Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem, and of particular interest for Bayesian networks. Recently, Grunwald and Halpern have shown that commonly used updating strategies fail here, except under very special assumptions. We propose a new rule for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no or weak assumptions about the so-called incompleteness mechanism that produces incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we derive a new updating rule using coherence arguments. In general, our rule produces lower posterior probabilities, as well as partially determinate decisions. This is a logical consequence of the ignorance about the incompleteness mechanism. We show how the new rule can properly address the apparent paradox in the ‘Monty Hall’ puzzle. In addition, we apply it to the classification of new evidence in Bayesian networks constructed using expert knowledge. We provide an exact algorithm for this task with linear-time complexity, also for multiply connected nets.


💡 Research Summary

The paper revisits the long‑standing problem of updating probabilities when observations are incomplete, a question originally raised by Shafer in 1985. In many real‑world situations, data are not observed as single, precise outcomes but as sets of possible outcomes (e.g., missing sensor readings, ambiguous survey responses). Traditional Bayesian updating assumes that observations are exact and complete; under this assumption the posterior distribution is obtained by conditioning the prior on the observed event. Recent work by Grunwald and Halpern demonstrated that most commonly used updating strategies break down in the presence of incomplete observations unless very restrictive assumptions about the “incompleteness mechanism” are made (for example, that the mechanism is independent of the underlying state and always yields a single outcome).
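To see why naive conditioning can fail, consider a hypothetical two-state example (not from the paper): the state is `a` or `b` with equal prior probability, and the reporting mechanism returns the precise observation `{a}` when the state is `a`, but the coarse observation `{a, b}` when the state is `b`. Conditioning the prior on the event "the state lies in `{a, b}`" ignores the mechanism and assigns probability 1/2 to `a`, while the correct posterior given that the report `{a, b}` was produced is 0. A minimal sketch:

```python
# Hypothetical example: naive conditioning vs. the true posterior
# once the incompleteness mechanism is taken into account.

prior = {"a": 0.5, "b": 0.5}

# Reporting mechanism: state "a" is always reported precisely as {"a"},
# state "b" is always reported coarsely as {"a", "b"}.
def report_prob(report, state):
    if state == "a":
        return 1.0 if report == frozenset({"a"}) else 0.0
    return 1.0 if report == frozenset({"a", "b"}) else 0.0

report = frozenset({"a", "b"})

# Naive rule: condition the prior on the event "state lies in the report".
mass_in_report = sum(prior[s] for s in report)
naive_posterior_a = prior["a"] / mass_in_report

# Correct rule: condition on the event "this report was produced",
# which requires the mechanism's likelihoods.
joint = {s: prior[s] * report_prob(report, s) for s in prior}
true_posterior_a = joint["a"] / sum(joint.values())

print(naive_posterior_a)  # 0.5
print(true_posterior_a)   # 0.0
```

The two rules agree only under restrictive conditions on the mechanism (such as coarsening at random); here they disagree maximally.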

To address this gap, the authors propose a new, deliberately conservative updating rule that makes no, or only weak, assumptions about the incompleteness mechanism. Their approach is grounded in the theory of imprecise probabilities, specifically the concept of a vacuous lower prevision. A vacuous lower prevision represents complete ignorance about the mechanism: it is the lower envelope of all probability models the mechanism could conceivably follow. By combining the prior with this vacuous model of the mechanism through the principle of coherence, the authors derive a conditional lower prevision (a lower posterior probability) for any event of interest. For a prior \(P\) and an observed set \(O\), the lower posterior of an event \(A\) takes the form

\[
\underline{P}(A \mid O) \;=\; \min_{x \in O} P(A \mid \{x\}),
\]

where the minimum runs over all complete observations \(x\) compatible with \(O\), and \(P(\cdot \mid \{x\})\) is obtained by standard conditioning on the precise observation \(x\). The upper posterior \(\overline{P}(A \mid O)\) is obtained by replacing the minimum with a maximum. In other words, the rule conditions on every complete observation that could have produced \(O\) and retains the most pessimistic (respectively, most optimistic) result.
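The Monty Hall puzzle illustrates the effect of an unknown mechanism. Suppose the player picks door 1 and the host, who knows where the car is, opens door 3 to reveal a goat. The host's behaviour when the car is behind door 1 (he could open door 2 or door 3) plays the role of the incompleteness mechanism. A sketch (not the paper's algorithm) that sweeps over all host strategies, parameterised by the probability `q` of opening door 3 in that case, recovers the well-known interval [1/2, 1] for the posterior probability of winning by switching:

```python
# Monty Hall under an unknown host strategy: the player picks door 1,
# the host opens door 3. q = P(host opens door 3 | car behind door 1).
# Sweeping q over [0, 1] bounds the posterior probability that
# switching (to door 2) wins.

def p_switch_wins(q):
    # Joint probabilities of "host opens door 3" with each car position:
    #   car behind door 1: prior 1/3, host opens door 3 with probability q
    #   car behind door 2: prior 1/3, host must open door 3
    #   car behind door 3: impossible, the host never reveals the car
    p_open3_car1 = (1 / 3) * q
    p_open3_car2 = (1 / 3) * 1.0
    # Switching wins exactly when the car is behind door 2.
    return p_open3_car2 / (p_open3_car1 + p_open3_car2)

qs = [i / 100 for i in range(101)]
values = [p_switch_wins(q) for q in qs]
lower, upper = min(values), max(values)
print(lower, upper)  # lower 0.5 (at q = 1), upper 1.0 (at q = 0)
```

Since the lower probability of winning by switching is 1/2 and the probability of winning by keeping is at most 1/2, switching is never worse; the apparent paradox dissolves once the ignorance about the host's strategy is modelled explicitly.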

