Preference-based Conditional Treatment Effects and Policy Learning
We introduce a new preference-based framework for conditional treatment effect estimation and policy learning, built on the Conditional Preference-based Treatment Effect (CPTE). CPTE requires only that outcomes be ranked under a preference rule, enabling flexible modeling of heterogeneous effects with multivariate, ordinal, or preference-driven outcomes. This unifies applications such as the conditional probability of necessity and sufficiency, the conditional Win Ratio, and Generalized Pairwise Comparisons. Despite the intrinsic non-identifiability of comparison-based estimands, CPTE provides interpretable targets and delivers new identifiability conditions for previously unidentifiable estimands. We present estimation strategies via matching, quantile, and distributional regression, and further design efficient influence-function estimators to correct plug-in bias and maximize policy value. Synthetic and semi-synthetic experiments demonstrate clear performance gains and practical impact.
💡 Research Summary
The paper introduces a novel preference‑based framework for conditional treatment‑effect estimation and policy learning, centered on the Conditional Preference‑based Treatment Effect (CPTE). Traditional causal inference often relies on the Conditional Average Treatment Effect (CATE), which assumes outcomes can be compared via simple differences. However, many real‑world problems involve multivariate, ordinal, or hierarchical outcomes where a scalar difference is inadequate. To address this, the authors define a general preference function w(y, y′) that maps a pair of outcomes to a real‑valued preference score. This function subsumes several existing metrics: the probability of necessity and sufficiency (PNS), the Win Ratio for hierarchical outcomes, and Generalized Pairwise Comparisons.
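To make the preference function concrete, here is a minimal sketch of a Win Ratio-style hierarchical comparison. The outcome encoding (a `(survived, score)` tuple) and the tie-breaking order are invented for illustration and are not taken from the paper; the authors' actual w may be defined differently.

```python
# Hypothetical hierarchical preference function w(y, y') in the Win Ratio
# style: a primary endpoint (survival) decides first, and a secondary
# score breaks ties. Encoding and endpoints are illustrative assumptions.

def w(y, y_prime):
    """Return +1 if y is preferred to y', -1 if y' is preferred, 0 if tied.

    Each outcome is a tuple (survived, score).
    """
    survived, score = y
    survived_p, score_p = y_prime
    if survived != survived_p:          # primary endpoint decides first
        return 1 if survived > survived_p else -1
    if score != score_p:                # secondary endpoint breaks ties
        return 1 if score > score_p else -1
    return 0                            # tied at every level

# A survivor with a worse secondary score is still preferred over a
# non-survivor, which a scalar difference cannot express.
print(w((1, 2.0), (0, 9.0)))  # 1
print(w((1, 2.0), (1, 5.0)))  # -1
```

Because w returns a real-valued score for any outcome pair, the same template covers ordinal and multivariate outcomes without requiring that differences of outcomes be meaningful.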
The key obstacle is that the conditional individual treatment effect (ITE), defined as E[w(Y(1), Y(0)) | X = x], depends on the joint distribution of the potential outcomes (Y(1), Y(0)). Since no unit is ever observed under both treatment and control, this joint distribution, and hence the estimand itself, is not identifiable from data without additional assumptions.
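The non-identifiability of comparison-based estimands can be illustrated with a small numerical example (constructed for this sketch, not drawn from the paper): two joint distributions of (Y(1), Y(0)) share the same marginals, so they are indistinguishable from experimental data, yet give different values of E[w(Y(1), Y(0))] for the sign preference w(y, y') = sign(y − y').

```python
# Two couplings of the potential outcomes with identical marginals
# (Y(1) uniform on {1, 3}, Y(0) uniform on {0, 2}) but different values
# of E[w(Y(1), Y(0))]. Numbers are illustrative assumptions.

def w(y, y_prime):
    """Sign preference: +1 if y > y', -1 if y < y', 0 if equal."""
    return (y > y_prime) - (y < y_prime)

# Each coupling is a list of ((y1, y0), probability) pairs.
coupling_a = [((1, 0), 0.5), ((3, 2), 0.5)]   # comonotone pairing
coupling_b = [((1, 2), 0.5), ((3, 0), 0.5)]   # crossed pairing

def expected_w(coupling):
    return sum(p * w(y1, y0) for (y1, y0), p in coupling)

print(expected_w(coupling_a))  # 1.0  -> treatment always wins
print(expected_w(coupling_b))  # 0.0  -> wins and losses cancel
```

Both couplings induce the same observable distributions of Y(1) and Y(0), so the comparison-based effect cannot be pinned down without extra structure, which is exactly why the framework's interpretable targets and identifiability conditions matter.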