Accelerated Preference Elicitation with LLM-Based Proxies

Notice: This research summary and analysis were generated automatically using AI technology. For authoritative details, please refer to the original arXiv paper.

Bidders in combinatorial auctions face significant challenges when describing their preferences to an auctioneer. Classical work on preference elicitation focuses on query-based techniques inspired by proper learning, often via proxies that interface between bidders and an auction mechanism, to incrementally learn bidder preferences as needed to compute efficient allocations. Although such elicitation mechanisms enjoy theoretical query efficiency, the amount of communication required may still be too cognitively taxing in practice. We propose a family of efficient LLM-based proxy designs for eliciting preferences from bidders using natural language. Our proposed mechanism combines LLM pipelines and DNF-proper-learning techniques to quickly approximate preferences when communication is limited. To validate our approach, we create a testing sandbox for elicitation mechanisms that communicate in natural language. In our experiments, our most promising LLM proxy design reaches approximately efficient outcomes with five times fewer queries than classical proper-learning-based elicitation mechanisms.


💡 Research Summary

In combinatorial auctions, bidders must express valuations over exponentially many bundles, a task that quickly becomes cognitively infeasible as the number of items grows. Classical preference‑elicitation mechanisms address this by using structured value and demand queries and applying proper‑learning techniques—most notably DNF‑learning proxies—to incrementally reconstruct each bidder’s valuation function. While these methods enjoy polynomial query complexity in theory, each query still imposes a substantial mental load on human participants, limiting practical deployment.
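To make the learned representation concrete: under an XOR-bid, a bidder's value for a bundle is the single best atomic bid whose bundle is contained in it. A minimal Python sketch, with invented example data:

```python
def xor_bid_value(atomic_bids, bundle):
    """Value of a bundle under an XOR-bid: the best single atomic bid
    whose bundle is fully contained in the queried bundle (0 if none)."""
    bundle = frozenset(bundle)
    return max((value for atom, value in atomic_bids if atom <= bundle), default=0)

# Invented example: a bidder who wants A and B together, or C alone.
bids = [(frozenset("AB"), 10), (frozenset("C"), 4)]
print(xor_bid_value(bids, "ABC"))  # contained atoms are {A,B}->10 and {C}->4, so 10
```

Each value or demand query a proxy answers ultimately resolves to computations like this over its current candidate XOR-bid.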

The authors propose a new family of proxies that embed large language models (LLMs) into the elicitation loop, allowing communication with bidders in natural language. An LLM‑powered proxy serves two purposes: (1) it interacts with the bidder through conversational value/demand queries, interpreting free‑form textual answers; (2) it assists the underlying DNF‑learning algorithm by generating candidate atomic bundles and estimating their values, thereby reducing the number of explicit valuation queries required. In this way, the LLM acts as a “smart” interface that translates human‑friendly dialogue into the structured information needed for proper learning.
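The proxy's two roles can be sketched as a single round trip; the prompt wording, the `llm` callable, and the JSON format here are assumptions for illustration, not the paper's actual interface:

```python
import json

def propose_atom(llm, transcript):
    """Sketch of the proxy's two roles: pose a conversational question to the
    bidder (role 1) and use the LLM to parse the free-form reply into a
    structured (bundle, value) atom for the learner (role 2)."""
    prompt = ('From this conversation, output JSON {"items": [...], "value": ...} '
              "for one bundle the bidder wants:\n" + transcript)
    reply = json.loads(llm(prompt))  # a real proxy would validate and retry on bad JSON
    return frozenset(reply["items"]), float(reply["value"])

# Stub standing in for a chat-model call, purely for illustration.
stub = lambda prompt: '{"items": ["A", "B"], "value": 10}'
print(propose_atom(stub, "Bidder: I mostly care about A and B together, worth ~10."))
```

The structured atom feeds directly into the DNF learner's XOR-bid, replacing an explicit numeric valuation query.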

Three proxy designs are introduced. “Drop‑in” LLM proxies retain the exact query interface of the baseline DNF proxy (value and demand queries) but replace the exhaustive search for new atomic bundles with LLM‑driven suggestions, cutting the O(n) valuation queries per atomic bundle to a constant number of LLM inferences. “Hybrid” proxies augment the interaction with open‑ended natural‑language questions, enabling bidders to describe preferences more richly; the LLM parses these descriptions to refine the XOR‑bid representation. Finally, a “Simulation‑LLM” is employed in experiments to model human bidders, ensuring reproducible benchmarking while preserving the ambiguity of natural‑language preference articulation.
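The drop-in design's query savings can be sketched as follows. The greedy per-item minimization is one simple way a classical proxy might find a new atom, not necessarily the paper's exact algorithm, and `suggest` stands in for the LLM call:

```python
def classical_new_atom(value_query, items):
    """One simple classical minimization: greedily try to drop each item,
    spending value queries on every item -- O(n) queries overall."""
    bundle = set(items)
    for item in items:
        if value_query(bundle - {item}) == value_query(bundle):
            bundle.discard(item)  # item was not needed for the value
    return bundle

def drop_in_new_atom(suggest, value_query):
    """Drop-in LLM proxy: one LLM suggestion plus a single confirming
    value query replaces the per-item search above."""
    bundle = suggest()  # LLM-proposed candidate atom
    return bundle, value_query(bundle)
```

The bidder answers one query per new atom instead of a number growing with the item count, which is where the reported query reduction comes from.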

Experiments are conducted within the Competitive Equilibrium Combinatorial Auction (CECA) framework. The auctioneer repeatedly collects candidate XOR‑bids from each proxy, computes a competitive equilibrium via integer linear programming, and asks unsatisfied bidders for additional information. The LLM proxies maintain a transcript of all prior exchanges and use it, together with the current candidate XOR‑bid, to decide whether to accept the current allocation or to update the bid with a newly inferred atomic bundle.
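The auctioneer's winner-determination step over collected XOR-bids can be illustrated with a brute-force search; the paper solves this with an integer linear program, so exhaustive enumeration here is only an illustrative stand-in, and the two-bidder data is invented:

```python
from itertools import product

def best_allocation(xor_bids):
    """Winner determination over XOR-bids: each bidder wins at most one of
    their atomic bundles, winning bundles must be pairwise disjoint, and we
    maximize total welfare (by exhaustive search in this sketch)."""
    best, best_welfare = None, -1
    # Each bidder picks one of their atoms, or None (wins nothing).
    for choice in product(*[bids + [None] for bids in xor_bids]):
        won = [c for c in choice if c is not None]
        bundles = [b for b, _ in won]
        if sum(map(len, bundles)) == len(frozenset().union(*bundles)):  # disjoint?
            welfare = sum(v for _, v in won)
            if welfare > best_welfare:
                best, best_welfare = choice, welfare
    return best, best_welfare

# Invented two-bidder example over items {A, B, C}.
b1 = [(frozenset("AB"), 10), (frozenset("C"), 4)]
b2 = [(frozenset("C"), 5), (frozenset("A"), 3)]
print(best_allocation([b1, b2]))  # bidder 1 wins {A,B}, bidder 2 wins {C}
```

After each such solve, proxies whose bidders are unsatisfied at the resulting prices contribute a new atom, and the loop repeats.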

Results show that the best LLM‑based proxy achieves approximately the same social welfare as the classical DNF proxy while using up to five times fewer queries across a range of query budgets. The reduction stems from the LLM’s ability to infer bundle values from conversational context, thereby sparing bidders from repeatedly providing numeric valuations. Moreover, the natural‑language interface markedly lowers the perceived cognitive burden, suggesting that LLMs can make preference elicitation more user‑friendly without sacrificing allocative efficiency.

The paper contributes (1) a novel integration of LLMs with proper‑learning‑based elicitation, (2) a reproducible simulation pipeline that models human bidders via LLMs, and (3) empirical evidence that language‑driven inference can dramatically improve query efficiency in combinatorial auctions. Limitations include reliance on simulated bidders rather than real users, the unexamined impact of LLM response variability or bias on valuation accuracy, and the absence of mechanisms to guard against strategic misreporting. Future work should involve user studies with actual participants, systematic prompt engineering to improve consistency, and incentive‑compatible designs that mitigate strategic manipulation while preserving the benefits of natural‑language interaction.

