Choice via AI

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

This paper proposes a model of choice via agentic artificial intelligence (AI). A key feature is that the AI may misinterpret a menu before recommending what to choose. A single acyclicity condition guarantees the existence of a monotonic interpretation and a strict preference relation that together rationalize the AI's recommendations. Because this preference is in general not unique, there is no safeguard against its misaligning with that of the decision maker. Verifying such AI alignment becomes possible when interpretations satisfy double monotonicity, which ensures full identifiability and internal consistency. An additional idempotence property is then required to guarantee that recommendations are fully rational and remain grounded in the original feasible set.


💡 Research Summary

The paper develops a formal model of choice when a decision‑maker (DM) consults an artificial‑intelligence (AI) agent that may misinterpret the menu of available alternatives. The authors introduce an "interpretation operator" I that maps each observed choice problem S (a non‑empty subset of a finite set X) into a possibly distorted set I(S). The only requirement on I is monotonicity (IM): if S ⊆ T then I(S) ⊆ I(T). The AI then selects the most preferred element of I(S) according to a strict preference relation ≻ on X. A choice function c is an Agentic AI Choice (AIC) if there exist such I and ≻ with c(S) = max_{≻} I(S) for every S.
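As a concrete illustration, the representation can be sketched in a few lines of Python. The universe, the ranking, and the interpretation operator below are toy assumptions of ours, not objects from the paper:

```python
# Toy sketch of the AIC representation: c(S) is the most-preferred
# element of the (possibly distorted) interpreted menu I(S).

RANKING = ["a", "b", "c"]          # strict preference: a ≻ b ≻ c

def interpret(S):
    """A monotone toy interpretation: the AI overlooks 'a' unless 'b'
    is also on the menu.  One can check that S ⊆ T implies
    interpret(S) ⊆ interpret(T), so monotonicity (IM) holds."""
    S = frozenset(S)
    return S if "b" in S else S - {"a"}

def choose(S):
    """c(S): the ≻-maximal element of I(S) (assumes I(S) is non-empty)."""
    I = interpret(S)
    return next(x for x in RANKING if x in I)

print(choose({"a", "b", "c"}))   # menu read correctly -> "a"
print(choose({"a", "c"}))        # 'a' is overlooked   -> "c"
```

The second call shows the model's key feature: the recommendation changes not because preferences change, but because the menu is misread.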

The first major result shows that a single behavioral axiom, No Shifted Cycles (NSC), exactly characterizes AICs. NSC is a variant of classic acyclicity: for any finite chain of nested menus S_i ⊂ T_i with c(S_i) = x_i and c(T_i) = x_{i+1}, the chain must never close into a cycle with x_{n+1} = x_1. Theorem 3 proves that any choice function satisfying NSC can be represented by some monotone I and strict ≻, and conversely that any such representation yields a choice function obeying NSC. Thus, even when the AI misinterprets menus, the observed choices are rationalizable as long as they avoid shifted cycles.
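On one natural reading, NSC amounts to acyclicity of a "shifted" revealed relation: draw an edge y → x whenever c(S) = y and c(T) = x for some S ⊂ T with distinct choices. A toy checker under that reading (the encoding and the example choice functions are ours, not the paper's):

```python
from itertools import combinations

X = ["a", "b", "c"]
MENUS = [frozenset(m) for r in range(1, 4) for m in combinations(X, r)]

def shifted_edges(c):
    """Edge y -> x whenever c(S) = y and c(T) = x for some S ⊂ T (x ≠ y)."""
    return {(c[S], c[T]) for S in MENUS for T in MENUS
            if S < T and c[S] != c[T]}

def acyclic(edges):
    """Depth-first search; returns True iff the graph has no directed cycle."""
    adj = {}
    for y, x in edges:
        adj.setdefault(y, set()).add(x)
    state = {}                         # 1 = on current path, 2 = finished

    def dfs(v):
        state[v] = 1
        for w in adj.get(v, ()):
            if state.get(w) == 1:
                return False           # back edge: shifted cycle found
            if state.get(w) is None and not dfs(w):
                return False
        state[v] = 2
        return True

    return all(state.get(v) == 2 or dfs(v) for v in adj)

def nsc(c):
    return acyclic(shifted_edges(c))

# A rational choice function (maximize a ≻ b ≻ c) passes NSC:
rational = {S: next(x for x in X if x in S) for S in MENUS}
# Overriding the full-menu choice creates a shifted cycle a -> b -> a:
cyclic = dict(rational)
cyclic[frozenset(X)] = "b"

print(nsc(rational), nsc(cyclic))   # True False
```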

Next, the authors examine identifiability. From NSC alone, the underlying preference ≻ is not unique. They define a revealed‑preference relation: x ≻ y whenever there exist menus S ⊂ T with c(T) = x and c(S) = y. Its transitive closure ≻* captures exactly the pairs that are ranked the same way under every possible (I, ≻) representation (Proposition 5). Hence, the data reveal an acyclic partial order, but many extensions of this order may still rationalize the same data.
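A minimal sketch of computing the revealed relation and its transitive closure ≻* for a toy choice function. The example data are ours, and pairs with identical choices are skipped since ≻ is strict:

```python
from itertools import combinations

X = ["a", "b", "c"]
MENUS = [frozenset(m) for r in range(1, 4) for m in combinations(X, r)]

# Example data: maximize a ≻ b ≻ c with full attention (toy input).
c = {S: next(x for x in X if x in S) for S in MENUS}

# Revealed preference: x over y whenever c(T) = x and c(S) = y for S ⊂ T.
R = {(c[T], c[S]) for S in MENUS for T in MENUS if S < T and c[S] != c[T]}

# Transitive closure ≻* via Warshall's algorithm.
closure = set(R)
for k in X:
    for i in X:
        for j in X:
            if (i, k) in closure and (k, j) in closure:
                closure.add((i, j))

print(sorted(closure))   # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

For this fully rational toy input the closure recovers the true ranking; with distorted interpretations it would in general be only a partial order.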

Similarly, the interpretation operator is not uniquely pinned down by NSC. The authors construct a "revealed consideration" operator I*, where x ∈ I*(T) iff x is chosen from some proper subset S ⊂ T. Proposition 7 shows that I* is consistent with every representation of the data; nonetheless, multiple operators I can generate the same observed choices.
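The revealed consideration operator is straightforward to compute on toy data. This sketch follows the proper-subset definition quoted above; the example choice function is ours:

```python
from itertools import combinations

X = ["a", "b", "c"]
MENUS = [frozenset(m) for r in range(1, 4) for m in combinations(X, r)]

# Example choice function: maximize a ≻ b ≻ c with full attention.
c = {S: next(x for x in X if x in S) for S in MENUS}

def revealed_consideration(T):
    """I*(T): everything chosen from some proper subset S ⊂ T."""
    T = frozenset(T)
    return {c[S] for S in MENUS if S < T}

print(revealed_consideration({"a", "b", "c"}))   # {'a', 'b', 'c'}
print(revealed_consideration({"a", "c"}))        # {'a', 'c'}
```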

To achieve full identification, the paper introduces Double Monotonicity (TDM). TDM requires that I preserve subset order in both directions: I(S) ⊆ I(T) ⇔ S ⊆ T, making I an order‑embedding of the lattice of menus. This stronger condition forces a unique interpretation operator, eliminating the ambiguity left by simple monotonicity. When a choice function satisfies NSC and its associated I meets TDM, it is called a Rational AI Agent's Choice (RAIC). RAIC thus combines the classic rationality conditions (no cycles, no choice reversals) with a faithfully ordered interpretation of the menu.
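TDM can be tested exhaustively on a small universe. The operators below are illustrative: the identity satisfies TDM, while a merely monotone operator that drops 'a' whenever 'b' is absent does not:

```python
from itertools import combinations

X = ["a", "b", "c"]
MENUS = [frozenset(m) for r in range(1, 4) for m in combinations(X, r)]

def doubly_monotone(I):
    """TDM as stated above: I(S) ⊆ I(T) if and only if S ⊆ T."""
    return all((I(S) <= I(T)) == (S <= T) for S in MENUS for T in MENUS)

# The identity interpretation trivially satisfies TDM:
assert doubly_monotone(lambda S: S)

# A monotone-but-not-doubly-monotone operator fails:
# here I({'a'}) = ∅ ⊆ I({'b'}) = {'b'} although {'a'} ⊄ {'b'}.
drop_a = lambda S: S if "b" in S else S - {"a"}
assert not doubly_monotone(drop_a)
```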

Finally, the authors argue that rationality alone is insufficient for alignment, because the AI could still recommend an alternative outside the original feasible set. They therefore add an idempotence condition: I(I(S)) = I(S) for all S. Idempotence guarantees that the interpreted set is already "grounded," so the AI's recommendation never leaves the feasible universe. Together with NSC and TDM, it implies that the choice function satisfies the Weak Axiom of Revealed Preference (WARP), the standard test of rational choice in economics. Consequently, the AI's recommendations are both internally consistent and externally aligned with the DM's true preferences and real‑world constraints.
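Both conditions can be checked mechanically on toy data. The WARP formulation below is the standard pairwise one for choice functions, and all names and examples are ours:

```python
from itertools import combinations

X = ["a", "b", "c"]
MENUS = [frozenset(m) for r in range(1, 4) for m in combinations(X, r)]

def idempotent(I):
    """I(I(S)) = I(S) for every menu: the interpreted set is 'grounded'."""
    return all(I(I(S)) == I(S) for S in MENUS)

def satisfies_warp(c):
    """WARP: if x is chosen when y is available, y is never chosen
    from another menu in which x is also available."""
    return not any(c[S] in T and c[T] in S and c[S] != c[T]
                   for S in MENUS for T in MENUS)

# A fully rational choice (identity interpretation, maximize a ≻ b ≻ c)
# passes both checks:
identity = lambda S: S
rational = {S: next(x for x in X if x in S) for S in MENUS}
assert idempotent(identity) and satisfies_warp(rational)

# A choice reversal violates WARP:
reversal = dict(rational)
reversal[frozenset({"a", "b"})] = "b"   # b over a here, a over b in the full menu
assert not satisfies_warp(reversal)
```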

The paper situates its contributions within the broader literature on consideration sets, bounded rationality, sequential maximization, and AI alignment. It highlights that the proposed framework applies not only to LLM‑based recommendation systems but also to any advisory setting where the advisor's interpretation of the problem may be distorted. The authors conclude by suggesting extensions to stochastic interpretation operators, dynamic menus, and empirical validation of the axioms in human‑AI interaction experiments.

