Causally-Guided Diffusion for Stable Feature Selection
Feature selection is fundamental to robust data-centric AI, but most existing methods optimize predictive performance under a single data distribution and therefore often select spurious features that fail under distribution shifts. Motivated by principles of causal invariance, we study feature selection from a stability perspective and introduce Causally-Guided Diffusion for Stable Feature Selection (CGDFS). CGDFS formalizes feature selection as approximate posterior inference over feature subsets, where posterior mass favors subsets with low prediction error and low cross-environment variance. Our framework rests on three key ideas. First, we formulate feature selection as stability-aware posterior sampling, in which causal invariance serves as a soft inductive bias rather than explicit causal discovery. Second, we train a diffusion model as a learned prior over plausible continuous selection masks, combined with a stability-aware likelihood that rewards invariance across environments; this prior captures structural dependencies among features and enables scalable exploration of the combinatorially large selection space. Third, we perform guided annealed Langevin sampling that combines the diffusion prior with the stability objective, yielding tractable, uncertainty-aware posterior inference that avoids discrete optimization and produces robust feature selections. We evaluate CGDFS on open-source real-world datasets exhibiting distribution shifts. Across both classification and regression tasks, CGDFS consistently selects more stable and transferable feature subsets, leading to improved out-of-distribution performance and greater selection robustness than sparsity-based, tree-based, and stability-selection baselines.
💡 Research Summary
The paper addresses a fundamental weakness of most feature‑selection techniques: they optimize predictive performance on a single training distribution and therefore often select spurious features that break under distribution shift. Inspired by causal invariance, the authors propose Causally‑Guided Diffusion for Stable Feature Selection (CGDFS), a framework that treats feature selection as approximate Bayesian posterior inference over feature subsets, where the posterior favors subsets with low average loss and low variance of loss across multiple environments.
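The "low average loss, low cross-environment variance" criterion can be sketched as a single scalar score. The function and variable names below (`stability_score`, `lam`) are illustrative placeholders, not from the paper, and the trade-off weight is an assumption:

```python
import numpy as np

def stability_score(losses_per_env, lam=1.0):
    """Stability-aware objective: mean held-out loss plus a penalized
    cross-environment variance (lower is better).

    `losses_per_env` holds one loss per training environment; `lam`
    trades average accuracy against invariance. Both names and the
    default weight are illustrative assumptions.
    """
    losses = np.asarray(losses_per_env, dtype=float)
    return losses.mean() + lam * losses.var()

# A feature subset whose loss is flat across environments can beat one
# with the same average loss but high environment-to-environment spread:
stable   = stability_score([0.30, 0.31, 0.29])   # low mean, low variance
unstable = stability_score([0.10, 0.55, 0.25])   # same mean, high variance
```

Under this score the flat-loss subset wins even though the unstable one is better in its best environment, which is the intuition behind preferring invariant features.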
Key components:
- Continuous mask representation – The binary selection vector z ∈ {0,1}^p is relaxed to a soft mask s ∈ [0,1]^p, which makes the selection space amenable to diffusion modeling and gradient-based sampling.
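The guided annealed Langevin procedure over the relaxed mask can be sketched as follows. Here `score_prior` stands in for the score of the trained diffusion prior and `grad_loglik` for the gradient of the stability-aware log-likelihood; both, along with the geometric annealing schedule and step size, are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_langevin(score_prior, grad_loglik, p, steps=200, eps0=0.1):
    """Annealed Langevin sampling over a continuous mask s in [0,1]^p.

    The posterior score is the sum of the prior score and the
    likelihood gradient; Gaussian noise keeps the chain exploratory,
    and the step size is annealed geometrically (schedule is an
    illustrative choice).
    """
    s = rng.uniform(0.0, 1.0, size=p)                 # random initial mask
    for t in range(steps):
        eps = eps0 * (0.99 ** t)                      # annealing schedule
        grad = score_prior(s) + grad_loglik(s)        # posterior score = prior + likelihood
        s = s + 0.5 * eps * grad + np.sqrt(eps) * rng.normal(size=p)
        s = np.clip(s, 0.0, 1.0)                      # keep the relaxed mask in [0,1]^p
    return s
```

A final mask could then be thresholded (e.g. s > 0.5) to recover a discrete feature subset; because the sampler is stochastic, repeated runs also give a rough measure of selection uncertainty.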