A GPU-accelerated Nonlinear Branch-and-Bound Framework for Sparse Linear Models
We study exact sparse linear regression with an $\ell_0-\ell_2$ penalty and develop a branch-and-bound (BnB) algorithm explicitly designed for GPU execution. Starting from a perspective reformulation, we derive an interval relaxation that can be solved by ADMM with closed-form, coordinate-wise updates. We structure these updates so that the main work at each BnB node reduces to batched matrix-vector operations with a shared data matrix, enabling fine-grained parallelism across coordinates and coarse-grained parallelism across many BnB nodes on a single GPU. Feasible solutions (upper bounds) are generated by a projected gradient method on the active support, implemented in a batched fashion so that many candidate supports are updated in parallel on the GPU. We discuss practical design choices such as memory layout, batching strategies, and load balancing across nodes that are crucial for obtaining good utilization on modern GPUs. On synthetic and real high-dimensional datasets, our GPU-based approach achieves clear runtime improvements over a CPU implementation of our method, an existing specialized BnB method, and commercial MIP solvers.
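The upper-bounding step described in the abstract — a projected gradient method run on many candidate supports at once, sharing a single data matrix — can be sketched as follows. This is a minimal NumPy illustration of the batching idea, not the authors' implementation; the function name, the fixed step size, and the ridge weight `lam2` are assumptions for the sketch.

```python
import numpy as np

def batched_projected_gradient(X, y, supports, lam2, steps=500):
    """Batched projected gradient for ridge regression on fixed supports.

    X: (n, p) shared data matrix; y: (n,) response.
    supports: (B, p) 0/1 masks, one candidate support per batch member.
    Minimizes 0.5*||X b - y||^2 + lam2*||b||^2 restricted to each support.
    """
    B, p = supports.shape
    beta = np.zeros((B, p))
    # Lipschitz constant of the gradient of the ridge objective.
    L = np.linalg.norm(X, 2) ** 2 + 2 * lam2
    for _ in range(steps):
        resid = beta @ X.T - y              # (B, n): batched residuals, shared X
        grad = resid @ X + 2 * lam2 * beta  # (B, p): batched gradients
        beta = beta - grad / L              # gradient step
        beta *= supports                    # project back onto each fixed support
    return beta
```

On a GPU, the two batched matrix products (`beta @ X.T` and `resid @ X`) are exactly the shared-matrix matvec batches the abstract refers to, so many candidate supports are refined in parallel for the cost of a few large GEMMs.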
💡 Research Summary
This paper tackles the computationally hard problem of exact sparse linear regression with an ℓ₀‑ℓ₂ penalty, formulating it as a mixed‑integer second‑order cone program (MISOCP) and solving it with a novel branch‑and‑bound (BnB) framework that is explicitly engineered for modern GPUs. The authors first apply a perspective reformulation: binary variables z indicate whether a coefficient β_i is active, a large constant M bounds the magnitude of β, and auxiliary continuous variables s enforce the ridge term through rotated cone constraints β_i² ≤ s_i z_i. When the binary variables are relaxed to the interval [0, 1], the problem becomes a convex relaxation whose optimal value supplies the lower bounds used at each node of the BnB tree.
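The reformulation described above can be written out explicitly. This is a sketch consistent with the summary; the penalty weights λ₀ and λ₂ are assumed notation for the ℓ₀ and ℓ₂ terms.

```latex
\min_{\beta,\, s,\, z}\;\; \tfrac{1}{2}\,\|y - X\beta\|_2^2
  \;+\; \lambda_0 \sum_{i=1}^{p} z_i \;+\; \lambda_2 \sum_{i=1}^{p} s_i
\quad \text{s.t.} \quad
  \beta_i^2 \le s_i z_i,\;\;
  |\beta_i| \le M z_i,\;\;
  z_i \in \{0,1\},\;\; i = 1,\dots,p.
```

Replacing $z_i \in \{0,1\}$ with $z_i \in [0,1]$ yields the convex interval relaxation mentioned in the abstract, whose rotated-cone constraints keep the lower bound tighter than the naive big-$M$ relaxation alone.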