Nonparametric LLM Evaluation from Preference Data
Evaluating the performance of large language models (LLMs) from human preference data is crucial for building LLM leaderboards. However, many existing approaches either rely on restrictive parametric assumptions or lack valid uncertainty quantification when flexible machine learning methods are used. In this paper, we propose a nonparametric statistical framework, DMLEval, for comparing and ranking LLMs from preference data using debiased machine learning (DML). For this, we introduce generalized average ranking scores (GARS), which generalize commonly used ranking models, including the Bradley-Terry model and PageRank/rank centrality, to accommodate complex human responses such as ties. DMLEval comes with the following advantages: (i) It produces statistically efficient estimates of GARS ranking scores. (ii) It naturally allows the incorporation of black-box machine learning methods for estimation. (iii) It can be combined with pre-trained LLM evaluators (e.g., using LLM-as-a-judge). (iv) It suggests optimal policies for collecting preference data under budget constraints. We demonstrate these advantages both theoretically and empirically on synthetic and real-world preference datasets. In summary, our framework provides practitioners with powerful, state-of-the-art methods for comparing and ranking LLMs.
💡 Research Summary
The paper introduces DMLEval, a nonparametric statistical framework for evaluating and ranking large language models (LLMs) using pairwise preference data collected from humans or automated judges. Existing leaderboard methods typically rely on the parametric Bradley-Terry (BT) model, which can be biased when its link function is misspecified and does not readily accommodate flexible machine-learning estimators, leading to invalid confidence intervals. DMLEval addresses these shortcomings by defining the ranking target directly as a functional of the underlying preference probabilities, thereby avoiding any parametric assumptions about how scores generate preferences.
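To make the parametric baseline concrete, the following is a minimal sketch (not from the paper) of the standard Bradley-Terry approach that leaderboards typically use: latent scores s_j are fit by maximizing the log-likelihood of pairwise win counts under P(j beats k) = sigmoid(s_j - s_k). The function name and the plain gradient-ascent solver are illustrative choices, not the paper's method.

```python
import numpy as np

def fit_bradley_terry(wins, n_models, lr=0.1, n_iter=2000):
    """Fit Bradley-Terry scores by gradient ascent on the log-likelihood.

    wins[j, k] = number of times model j beat model k.
    Model: P(j beats k) = sigmoid(s_j - s_k).
    """
    s = np.zeros(n_models)
    total = wins.sum()
    for _ in range(n_iter):
        # p[j, k] = predicted probability that j beats k
        p = 1.0 / (1.0 + np.exp(s[None, :] - s[:, None]))
        # d(log-lik)/ds_j = sum_k wins[j,k]*(1 - p_jk) - sum_k wins[k,j]*p_jk
        grad = (wins * (1.0 - p)).sum(axis=1) - (wins.T * p).sum(axis=1)
        s += lr * grad / total   # normalized step for stability
        s -= s.mean()            # identifiability: scores sum to zero
    return s
```

Note that this model has no natural slot for ties or context-dependent preferences, which is exactly the limitation GARS is designed to remove.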
The authors formalize the data‑generating process with context X, selection indicators S_{jk} (whether a comparison between models j and k is observed), and categorical preference labels Y_{jk}^c (e.g., “j preferred”, “k preferred”, “tie”). The conditional preference probabilities μ_{jk}^c(x)=P(Y_{jk}^c=1|X=x) and selection probabilities π_{jk}(x)=P(S_{jk}=1|X=x) are treated as nuisance functions. The central estimand is the Generalized Average Ranking Score (GARS): θ = E
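The debiased-machine-learning recipe behind this kind of estimand can be sketched generically: estimate the nuisance functions mu and pi with flexible learners, then combine them through a cross-fitted augmented-inverse-propensity (AIPW) score. The sketch below illustrates this for a single model pair and a binary preference label; it is a generic DML construction under assumed names (dml_preference_score, random-forest nuisances), not the paper's exact estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

def dml_preference_score(X, S, Y, n_folds=2, seed=0):
    """Cross-fitted AIPW estimate of theta = E[mu(X)] for one model pair.

    mu(x) = P(Y = 1 | X = x) is the conditional preference probability,
    pi(x) = P(S = 1 | X = x) is the selection probability, and S indicates
    whether the comparison was actually observed. Returns the point
    estimate and an influence-function-based standard error.
    """
    n = len(X)
    psi = np.zeros(n)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        # Nuisance 1: outcome model mu(x), fit on observed comparisons only.
        obs = train[S[train] == 1]
        mu_model = RandomForestClassifier(random_state=seed).fit(X[obs], Y[obs])
        mu_hat = mu_model.predict_proba(X[test])[:, 1]
        # Nuisance 2: selection model pi(x), fit on the full training fold.
        pi_model = RandomForestClassifier(random_state=seed).fit(X[train], S[train])
        pi_hat = np.clip(pi_model.predict_proba(X[test])[:, 1], 0.01, 1.0)
        # AIPW score: plug-in plus inverse-propensity-weighted correction.
        psi[test] = mu_hat + S[test] / pi_hat * (Y[test] - mu_hat)
    theta = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(n)
    return theta, se
```

Cross-fitting (fitting nuisances on one fold, evaluating the score on the other) is what lets black-box learners be plugged in while keeping the resulting confidence intervals valid, which is advantage (ii) claimed in the abstract.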