Updating Probabilities

As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a “naive space”, which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (coarsening at random) in the statistical literature characterizes when “naive” conditioning in a naive space works. We show that the CAR condition holds rather infrequently. We then consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, but show that there are no such conditions for MRE. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.


💡 Research Summary

This paper provides a rigorous investigation into the foundational issues of updating probability distributions in light of new information. It begins by highlighting a critical problem: applying standard conditionalization in a “naive” probability space—one that ignores the protocol or mechanism by which the information was generated—can often lead to paradoxical or counterintuitive results, as famously illustrated by puzzles like Monty Hall.
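
To see the failure concretely, the following is a minimal Python sketch (ours, not the paper's; the modeling choices follow the standard Monty Hall setup) contrasting naive conditioning with conditioning in a space that models the host's protocol.

```python
from fractions import Fraction

# Sample space: pairs (car, opened). The contestant has picked door 1, and
# the host's protocol is to open a goat door other than door 1, choosing
# uniformly at random when both unpicked doors hide goats.
outcomes = {}
for car in (1, 2, 3):
    goat_doors = [d for d in (2, 3) if d != car]
    for opened in goat_doors:
        outcomes[(car, opened)] = Fraction(1, 3) / len(goat_doors)

# Naive conditioning: treat "door 3 was opened" as the event "the car is
# not behind door 3" and condition the uniform prior on that event.
naive = {car: Fraction(1, 3) / Fraction(2, 3) for car in (1, 2)}
print("naive posterior:", naive)  # {1: 1/2, 2: 1/2} -- the paradoxical answer

# Protocol-aware conditioning: condition on the finer event "opened == 3"
# in the space that includes the host's behavior.
evidence = {k: p for k, p in outcomes.items() if k[1] == 3}
z = sum(evidence.values())
posterior = {car: p / z for (car, _), p in evidence.items()}
print("protocol-aware posterior:", posterior)  # {1: 1/3, 2: 2/3} -- switch
```

The discrepancy arises because "door 3 was opened" carries more information than "the car is not behind door 3": the host is forced to open door 3 when the car is behind door 2, but has a choice when it is behind door 1.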

The core analysis centers on a known criterion from the statistical literature called “Coarsening at Random” (CAR). The CAR condition characterizes precisely when naive conditionalization on a naive space yields correct results. A key contribution of the paper is demonstrating that the CAR condition holds rather infrequently. This finding underscores a significant limitation of the standard Bayesian update rule when the data-generation process is not properly accounted for, emphasizing that the context of information acquisition is not merely a subtlety but often a necessity for correct inference.
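
For reference, one standard formulation of the CAR condition in the coarse-data setting reads as follows (notation ours, not the paper's): the underlying quantity is X, and the observation Y is a set of values guaranteed to contain X.

```latex
% CAR: the probability of reporting the coarse observation y must not
% depend on which consistent value of X actually obtains:
\Pr(Y = y \mid X = x) \;=\; \Pr(Y = y \mid X = x')
\qquad \text{for all } x, x' \in y \text{ with } \Pr(X = x)\,\Pr(X = x') > 0.
% Exactly when CAR holds, naive conditioning is safe:
\Pr(X = x \mid Y = y) \;=\; \Pr(X = x \mid X \in y).
```

In the Monty Hall sketch above, CAR fails: the host opens door 3 with probability 1/2 when the car is behind door 1 but with probability 1 when it is behind door 2, which is exactly why naive conditioning misfires there.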

To address this limitation, the paper explores more generalized update rules. First, it examines Jeffrey conditioning, a rule that updates a probability distribution when the new evidence specifies probabilities for a partition of the space rather than the certainty of a single event. The authors develop a generalized CAR condition that delineates when Jeffrey conditioning provides appropriate answers that are insensitive to the underlying protocol. This offers a formal justification for using Jeffrey conditioning in certain classes of problems where naive conditioning fails.
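
Jeffrey's rule itself is simple to state and implement: given a partition (E_1, ..., E_n) whose cells receive new probabilities (q_1, ..., q_n), the updated probability of an event A is the mixture of conditional priors, P_new(A) = Σ_i q_i · P(A | E_i). A small Python sketch (illustrative, not from the paper):

```python
from fractions import Fraction

def jeffrey_update(prior, partition, new_probs):
    """Jeffrey's rule: rescale each cell E_i of the partition so its total
    mass becomes q_i, keeping probabilities within a cell proportional to
    the prior. Ordinary conditioning is the special case q = (1, 0, ..., 0)."""
    posterior = {}
    for cell, q in zip(partition, new_probs):
        mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / mass  # q_i * P(w | E_i)
    return posterior

# Toy example (numbers illustrative): outcomes are color x shade, and a
# glimpse in dim light leaves us 70% confident the draw was red.
prior = {("red", "light"): Fraction(1, 4), ("red", "dark"): Fraction(1, 4),
         ("blue", "light"): Fraction(1, 4), ("blue", "dark"): Fraction(1, 4)}
red = [("red", "light"), ("red", "dark")]
blue = [("blue", "light"), ("blue", "dark")]
post = jeffrey_update(prior, [red, blue], [Fraction(7, 10), Fraction(3, 10)])
print(post)  # red outcomes: 7/20 each; blue outcomes: 3/20 each
```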

Second, the paper analyzes the method of minimizing relative entropy (MRE), which updates the prior to the distribution that satisfies the new constraints while minimizing the Kullback-Leibler divergence to the prior; it generalizes the maximum entropy principle, which corresponds to the special case of a uniform prior. This is a powerful and widely used technique for incorporating new constraints while staying as “close” as possible to prior beliefs. A crucial negative result of the paper is that no condition analogous to CAR exists for MRE: there is no general, protocol-insensitive guarantee that MRE will produce the intuitively correct update. This is an important caveat for the application of entropy-based methods in statistical reasoning and machine learning.
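
For concreteness, the MRE update can be stated as follows (a standard formulation; notation ours). Note that when the constraints merely fix the probabilities of a partition, the MRE update is known to coincide with Jeffrey conditioning, so the paper's negative result concerns the method in its full generality.

```latex
% MRE: given prior P and a set \mathcal{Q} of distributions compatible with
% the new information, choose the consistent distribution closest to the
% prior in relative entropy (Kullback-Leibler divergence):
P^{*} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{Q}} D(Q \,\|\, P),
\qquad
D(Q \,\|\, P) \;=\; \sum_{w} Q(w) \log \frac{Q(w)}{P(w)}.
```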

The paper synthesizes and interconnects previous results on CAR and MRE within a unified framework. It moves from the specific failure of simple conditioning (governed by CAR) to the more robust but still conditional success of Jeffrey conditioning (governed by a generalized CAR), and finally to the lack of any such universal guarantee for MRE. This progression paints a nuanced picture of probabilistic updating, stressing that no single rule is universally optimal or context-free. The choice of update rule must be informed by the nature of the evidence and the process that produced it.

In summary, the work offers a deep philosophical and mathematical critique of standard probability updating, establishes the rarity of the CAR condition, provides a generalized criterion for Jeffrey conditioning, and reveals a fundamental limitation of the MRE approach. Its insights are essential for foundational work in statistics, artificial intelligence, decision theory, and any field relying on coherent reasoning under uncertainty.

