Asymptotics of constrained $M$-estimation under convexity
M-estimation, aka empirical risk minimization, is at the heart of statistics and machine learning: Classification, regression, location estimation, etc. Asymptotic theory is well understood when the loss satisfies some smoothness assumptions and its derivatives are dominated locally. However, these conditions are typically technical and can be too restrictive or heavy to check. Here, we consider the case of a convex loss function, which may not even be differentiable: We establish an asymptotic theory for M-estimation with convex loss (which need not be differentiable) under convex constraints. We show that the asymptotic distributions of the corresponding M-estimators depend on an interplay between the loss function and the boundary structure of the set of constraints. We extend our results to U-estimators, building on the asymptotic theory of U-statistics. Applications of our work include, among others, robust location/scatter estimation and estimation of deepest points relative to depth functions such as Oja's depth.
💡 Research Summary
This paper, “Asymptotics of constrained M-estimation under convexity” by Victor-Emmanuel Brunel, develops a comprehensive asymptotic theory for M-estimators (empirical risk minimizers) where the loss function is convex but not necessarily differentiable, and the optimization is subject to convex constraints. The core achievement is the establishment of limit distributions for these estimators while bypassing traditional technical assumptions like smoothness and local dominance of derivatives, which are often restrictive and hard to verify.
The introduction frames M-estimation as minimizing an empirical risk Φ_n(θ) = (1/n)Σϕ(X_i, θ) over a convex constraint set Θ, in order to approximate the minimizer of the population risk Φ(θ) = E[ϕ(X, θ)] over Θ.
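To make this setup concrete, here is a minimal, hypothetical sketch (not from the paper): constrained M-estimation with the non-differentiable convex loss ϕ(x, θ) = ‖x − θ‖₂ (the geometric median loss), minimized over the convex constraint set Θ = unit Euclidean ball by projected subgradient descent. The function names, the constraint set, and the optimization method are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical illustration (not from the paper): constrained M-estimation with
# the non-differentiable convex loss phi(x, theta) = ||x - theta||_2 (geometric
# median), minimized over Theta = unit Euclidean ball by projected subgradient descent.

def project_unit_ball(theta):
    """Euclidean projection onto Theta = {theta : ||theta||_2 <= 1}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= 1.0 else theta / norm

def constrained_m_estimator(X, n_iter=5000, step0=1.0):
    """Minimize Phi_n(theta) = (1/n) sum_i ||X_i - theta||_2 over the unit ball."""
    n, d = X.shape
    theta = np.zeros(d)
    best, best_val = theta.copy(), np.inf
    for t in range(1, n_iter + 1):
        diffs = X - theta                       # shape (n, d)
        norms = np.linalg.norm(diffs, axis=1)   # ||X_i - theta|| for each i
        val = norms.mean()                      # Phi_n(theta)
        if val < best_val:
            best, best_val = theta.copy(), val
        # A subgradient of Phi_n at theta: average of (theta - X_i)/||theta - X_i||,
        # taking 0 at the kink where X_i == theta.
        mask = norms > 1e-12
        g = -(diffs[mask] / norms[mask, None]).sum(axis=0) / n
        theta = project_unit_ball(theta - (step0 / np.sqrt(t)) * g)
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) + np.array([2.0, 0.0])  # population center outside Theta
theta_hat = constrained_m_estimator(X)
print(theta_hat, np.linalg.norm(theta_hat))  # estimate sits on the boundary of Theta
```

In this toy example the unconstrained minimizer lies outside Θ, so the estimator converges to a point on the boundary of Θ, which is precisely the regime where, per the abstract, the boundary structure of the constraint set shapes the limiting distribution.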