Conservative Inference Rule for Uncertain Reasoning under Incompleteness

In this paper we formulate the problem of inference under incomplete information in very general terms. This includes modelling the process responsible for the incompleteness, which we call the incompleteness process. We allow the process behaviour to be partly unknown. Then we use Walley's theory of coherent lower previsions, a generalisation of the Bayesian theory to imprecision, to derive the rule to update beliefs under incompleteness that logically follows from our assumptions, and that we call the conservative inference rule. This rule has some remarkable properties: it is an abstract rule to update beliefs that can be applied in any situation or domain; it gives us the opportunity to be neither too optimistic nor too pessimistic about the incompleteness process, which is a necessary condition for drawing conclusions that are both reliable and strong enough; and it is a coherent rule, in the sense that it cannot lead to inconsistencies. We give examples to show how the new rule can be applied in expert systems, in parametric statistical inference, and in pattern classification, and discuss more generally the view of incompleteness processes defended here as well as some of its consequences.


💡 Research Summary

The paper tackles the pervasive problem of inference when the data available for analysis are incomplete or coarsened. Traditional approaches either assume that the incompleteness process (IP) is “missing at random” (MAR) or “coarsening at random” (CAR), which allows the analyst to ignore the missingness and apply standard Bayesian updating, or they adopt a fully conservative updating rule (CUR) that treats every possible completion of the missing data as equally plausible. The former is often unrealistic because in many real‑world situations the mechanism that generates missingness is selective and informative; the latter, while safe, is overly pessimistic and discards useful information when parts of the IP are known to be random.
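To make the contrast concrete, here is a small illustration of our own (not taken from the paper) of how Bayes' rule behaves under a random versus a selective missingness mechanism. All numbers are invented for the example:

```python
# Toy illustration (ours, not the paper's) of why the MAR assumption matters.
# Latent Y is uniform on {1, 2, 3}; the observation either reveals Y exactly
# or reports "missing".

prior = {1: 1/3, 2: 1/3, 3: 1/3}

def posterior_given_missing(p_missing_given_y):
    """Bayes' rule: P(Y=y | missing) is proportional to P(missing | Y=y) P(Y=y)."""
    joint = {y: p_missing_given_y[y] * prior[y] for y in prior}
    total = sum(joint.values())
    return {y: v / total for y, v in joint.items()}

# MAR: missingness does not depend on the latent value, so the posterior
# equals the prior and the missingness can safely be ignored.
print(posterior_given_missing({1: 0.5, 2: 0.5, 3: 0.5}))
# -> {1: 0.333..., 2: 0.333..., 3: 0.333...}

# Selective IP: the value is hidden far more often when Y = 1, so ignoring
# the missingness badly understates P(Y = 1 | missing).
print(posterior_given_missing({1: 0.9, 2: 0.1, 3: 0.1}))
# -> {1: 0.818..., 2: 0.0909..., 3: 0.0909...}
```

In the second case, treating the missing observation as ignorable would report 1/3 where the correct posterior probability is about 0.82, which is exactly the failure mode the paper attributes to an unwarranted MAR/CAR assumption.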

The authors propose a more nuanced model of the IP: it is decomposed into two components. The first component is a CAR process, for which the usual random‑missingness assumptions hold. The second component is “unknown”: it may be selective, may coarsen observations into sets, or may behave in any way that is not captured by CAR. By separating the IP in this way, the model can represent a wide variety of practical situations, from sensor failures that are random to communication protocols that deliberately hide certain attributes.
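A purely hypothetical sketch of such a mixed IP follows; the attribute names, probabilities, and the specific hiding rule are ours, chosen only to show one CAR-like component alongside one selective component:

```python
# Hypothetical mixed incompleteness process: attribute A is dropped at random
# (CAR-like component), attribute B is hidden selectively (unknown component).
import random

def observe(record, rng=random):
    obs = dict(record)
    # CAR-like component: a "random sensor failure" hides A with a fixed
    # probability, regardless of the underlying values.
    if rng.random() < 0.2:
        obs["A"] = None
    # Selective component: B is deliberately hidden whenever it is large,
    # a mechanism that CAR cannot represent.
    if record["B"] > 5:
        obs["B"] = None
    return obs

print(observe({"A": 3, "B": 7}))  # e.g. {'A': 3, 'B': None}
```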

To reason under this mixed IP, the authors employ Peter Walley’s theory of coherent lower previsions, a formalism for imprecise probabilities. Instead of a single probability distribution, beliefs are represented by a closed convex set of distributions (a credal set). A lower prevision is the infimum of expected values over this set, and coherence guarantees that the assessments avoid sure loss and satisfy rationality axioms. Using this framework, the authors derive the Conservative Inference Rule (CIR).
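For a finitely generated credal set, the infimum defining the lower prevision is attained at an extreme point, since expectation is linear in the distribution. A minimal sketch of this definition, with our own toy numbers (not from the paper):

```python
# Minimal sketch: a credal set over a finite space, represented by the extreme
# points of a closed convex set of distributions. The lower prevision of a
# gamble f is the infimum of expectations over the set; for a finitely
# generated set, a minimum over the extreme points suffices.

def expectation(p, f):
    return sum(p[x] * f[x] for x in p)

def lower_prevision(credal_set, f):
    """Infimum of E_p[f] over the (finitely generated) credal set."""
    return min(expectation(p, f) for p in credal_set)

def upper_prevision(credal_set, f):
    # By conjugacy, upper(f) = -lower(-f), i.e. the supremum of expectations.
    return max(expectation(p, f) for p in credal_set)

# Extreme points of a credal set over {a, b, c} (invented for the example).
credal_set = [
    {"a": 0.5, "b": 0.3, "c": 0.2},
    {"a": 0.2, "b": 0.5, "c": 0.3},
]
f = {"a": 1.0, "b": 0.0, "c": -1.0}  # a gamble: a bounded real-valued function

print(lower_prevision(credal_set, f))  # -0.1
print(upper_prevision(credal_set, f))  # 0.3
```

The gap between the lower and upper values is what expresses imprecision: a precise Bayesian model is the special case where the credal set contains a single distribution and the two values coincide.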

CIR works as follows. Let $Y$ be the latent (true) data, $W$ the observed (possibly coarsened) data, and $Z$ the quantity of interest. For the CAR part of the IP, standard Bayesian conditioning on the observed components of $W$ is performed. For the unknown part, the observation $W = w$ defines a set $\mathcal{C}$ of all latent values $y$ that are compatible with it (i.e., all completions of the coarsened data). The posterior lower and upper probabilities of any event $A$ concerning $Z$ are then obtained by taking the minimum and maximum, respectively, of the conditional probabilities $P(A \mid y)$ over all $y \in \mathcal{C}$:

$$\underline{P}(A \mid w) = \min_{y \in \mathcal{C}} P(A \mid y), \qquad \overline{P}(A \mid w) = \max_{y \in \mathcal{C}} P(A \mid y).$$
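On a discrete toy model this min/max step is a one-liner. The sketch below uses our own names and numbers (not the paper's) and also shows that a complete observation collapses the interval back to ordinary Bayesian conditioning:

```python
# Hedged sketch of the CIR min/max step on a toy discrete model.
# Z is the quantity of interest, Y the latent datum; the unknown part of the
# IP coarsens Y into a set C of compatible values.

# P(Z = 1 | Y = y) for each latent value y (invented numbers).
p_z_given_y = {"y1": 0.9, "y2": 0.4, "y3": 0.1}

def cir_posterior(compatible):
    """Lower/upper posterior probability of Z = 1 via min/max over completions."""
    values = [p_z_given_y[y] for y in compatible]
    return min(values), max(values)

# A coarse observation compatible with y1 and y2 only:
print(cir_posterior({"y1", "y2"}))  # (0.4, 0.9) -- an interval, not a number

# A complete observation recovers precise Bayesian conditioning:
print(cir_posterior({"y2"}))        # (0.4, 0.4)
```

The width of the resulting interval reflects how informative the coarse observation is: the larger the compatibility set $\mathcal{C}$, the wider (more conservative) the posterior bounds.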

