Surrogate to Poincaré inequalities on manifolds for dimension reduction in nonlinear feature spaces
We aim to approximate a continuously differentiable function $u:\mathbb{R}^d \rightarrow \mathbb{R}$ by a composition of functions $f\circ g$, where $g:\mathbb{R}^d \rightarrow \mathbb{R}^m$, $m\leq d$, and $f : \mathbb{R}^m \rightarrow \mathbb{R}$ are built in a two-stage procedure. For a fixed $g$, we build $f$ using classical regression methods, involving evaluations of $u$. Recent works proposed to build a nonlinear $g$ by minimizing a loss function $\mathcal{J}(g)$ derived from Poincaré inequalities on manifolds, involving evaluations of the gradient of $u$. However, minimizing $\mathcal{J}$ can be a challenging task. In this work, we therefore introduce new convex surrogates for $\mathcal{J}$. Leveraging concentration inequalities, we provide suboptimality results for a class of functions $g$, including polynomials, and for a wide class of input probability measures. We investigate performance on different benchmarks for various training sample sizes. We show that our approach outperforms standard iterative methods for minimizing the training Poincaré-inequality-based loss, often yielding smaller approximation errors, especially for small training sets and $m=1$.
💡 Research Summary
The paper addresses the problem of approximating a smooth high‑dimensional function u:ℝᵈ→ℝ by a composition f∘g, where g:ℝᵈ→ℝᵐ (m ≤ d) is a feature map and f:ℝᵐ→ℝ is a regression function. The authors adopt a two‑stage strategy: first learn g, then fit f to the data. In recent work, the feature map g has been chosen by minimizing a loss J(g) derived from Poincaré inequalities on manifolds. This loss measures how well the gradient of u aligns with the subspace spanned by the Jacobian of g:
$$\mathcal{J}(g) \;=\; \mathbb{E}\left[\,\big\|\big(I_d - \Pi_{\nabla g(X)}\big)\,\nabla u(X)\big\|_2^2\,\right],$$
where $\Pi_{\nabla g(X)}$ denotes the orthogonal projector onto the span of the rows of the Jacobian $\nabla g(X)$ (the gradients of the features $g_1,\dots,g_m$), so that $\mathcal{J}(g)=0$ exactly when $\nabla u(X)$ lies in that span almost surely.
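To make the loss concrete, here is a minimal NumPy sketch of a Monte Carlo estimator of $\mathcal{J}(g)$ from sampled gradients of $u$ and Jacobians of $g$. The function name `poincare_loss`, the array shapes, and the toy example are assumptions for illustration; this is not the authors' implementation, and it assumes the Jacobians have full row rank.

```python
import numpy as np

def poincare_loss(grad_u, jac_g):
    """Monte Carlo estimate of J(g) = E[ ||(I - Pi_{grad g(X)}) grad u(X)||^2 ].

    grad_u : (n, d) array, row i is the gradient of u at sample x_i.
    jac_g  : (n, m, d) array, jac_g[i] is the Jacobian of g at sample x_i
             (assumed to have full row rank m).
    """
    n = grad_u.shape[0]
    loss = 0.0
    for i in range(n):
        J = jac_g[i]                                  # (m, d) Jacobian at x_i
        # Orthonormal basis of the row space of J (span of the feature gradients)
        Q, _ = np.linalg.qr(J.T)                      # (d, m) columns span range(J^T)
        residual = grad_u[i] - Q @ (Q.T @ grad_u[i])  # part of grad u orthogonal to that span
        loss += residual @ residual
    return loss / n

# Hypothetical usage: u(x) = sin(a·x) depends on one linear feature, so the loss
# should be ~0 for g(x) = a·x, whose Jacobian is the constant row vector a.
rng = np.random.default_rng(0)
d, n = 5, 200
a = rng.normal(size=d)
X = rng.normal(size=(n, d))
grad_u = np.cos(X @ a)[:, None] * a           # rows: grad u(x_i) = cos(a·x_i) a
jac_g = np.tile(a[None, None, :], (n, 1, 1))  # (n, 1, d)
print(poincare_loss(grad_u, jac_g))           # close to 0 up to round-off
```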