Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression
Covering numbers of (deep) ReLU networks have been used to characterize approximation-theoretic performance, to upper-bound prediction error in nonparametric regression, and to quantify classification capacity. These results rely on covering number upper bounds obtained via explicit constructions of coverings. Lower bounds on covering numbers do not appear to be available in the literature. The present paper fills this gap by deriving tight (up to multiplicative constants) lower and upper bounds on the metric entropy (i.e., the logarithm of the covering numbers) of fully connected networks with bounded weights, sparse networks with bounded weights, and fully connected networks with quantized weights. The tightness of these bounds yields a fundamental understanding of the impact of sparsity, quantization, bounded versus unbounded weights, and network output truncation. Moreover, the bounds allow one to characterize fundamental limits of neural network transformation, including network compression, and lead to sharp upper bounds on the prediction error in nonparametric regression through deep networks. In particular, we remove a $\log^6(n)$-factor from the best known sample complexity rate for estimating Lipschitz functions via deep networks, thereby establishing optimality. Finally, we identify a systematic relation between optimal nonparametric regression and optimal approximation through deep networks, unifying numerous results in the literature and revealing underlying general principles.
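For reference, the standard definitions behind these quantities (stated here in generic notation, not necessarily the paper's): given a function class $\mathcal{F}$ and a norm $\|\cdot\|$, an $\varepsilon$-covering is a finite set $\{f_1,\dots,f_N\}\subset\mathcal{F}$ such that every $f\in\mathcal{F}$ satisfies $\|f-f_i\|\le\varepsilon$ for some $i$. The covering number $N(\varepsilon;\mathcal{F},\|\cdot\|)$ is the smallest such $N$, and the metric entropy is its logarithm:

$$
H(\varepsilon;\mathcal{F},\|\cdot\|) \;=\; \log N(\varepsilon;\mathcal{F},\|\cdot\|).
$$

For context, the classical minimax rate for estimating Lipschitz functions on $[0,1]^d$ in nonparametric regression is of order $n^{-2/(2+d)}$ (up to constants); the $\log^6(n)$ factor mentioned above is an excess factor relative to such a rate in earlier deep-network results.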
💡 Research Summary
The paper addresses a fundamental gap in the theory of deep ReLU neural networks by establishing tight (up to constant factors) lower and upper bounds on the metric entropy—i.e., the logarithm of covering numbers—of several important network classes. While covering‑number upper bounds have been widely used to derive approximation rates, generalization error bounds, and capacity measures, corresponding lower bounds were missing. The authors fill this void for (i) fully‑connected networks with uniformly bounded weights, (ii) sparse networks with bounded weights, (iii) fully‑connected networks with quantized weights, and (iv) fully‑connected networks with unbounded weights but truncated outputs.
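As a purely illustrative sketch (not the paper's construction; all names and parameter choices below are hypothetical), the snippet realizes a fully connected ReLU network and applies the three constraints that distinguish the classes above: weight clipping (bounded weights), magnitude pruning (sparsity), and uniform weight quantization.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    """Forward pass of a fully connected ReLU network (ReLU on hidden layers, linear output)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    return weights[-1] @ a + biases[-1]

def clip_weights(weights, bound=1.0):
    """Bounded-weight class: project every entry into [-bound, bound]."""
    return [np.clip(W, -bound, bound) for W in weights]

def sparsify(weights, keep_fraction=0.1):
    """Sparse class: zero out all but the largest-magnitude fraction of entries per layer."""
    out = []
    for W in weights:
        k = max(1, int(keep_fraction * W.size))
        thresh = np.sort(np.abs(W), axis=None)[-k]
        out.append(np.where(np.abs(W) >= thresh, W, 0.0))
    return out

def quantize(weights, levels=2**4, bound=1.0):
    """Quantized class: round each entry to one of `levels` uniform values in [-bound, bound]."""
    step = 2 * bound / (levels - 1)
    return [np.round(np.clip(W, -bound, bound) / step) * step for W in weights]

# Example: a random 3-layer network on R^2 with all three constraints applied.
rng = np.random.default_rng(0)
dims = [2, 16, 16, 1]
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]
y = forward(np.array([0.3, -0.7]), quantize(sparsify(clip_weights(weights))), biases)
```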
The central technical result (Theorem 2.1) shows that for any $L^{p}$-norm with $p \in$ …