Accuracy of Uniform Inference on Fine Grid Points

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

Uniform confidence bands for functions are widely used in empirical analysis. A variety of simple implementation methods (most notably multiplier bootstrap) have been proposed and theoretically justified. However, an implementation over a literally continuous index set is generally computationally infeasible, and practitioners therefore compute the critical value by evaluating the statistic on a finite evaluation grid. This paper quantifies how fine the evaluation grid must be for a multiplier bootstrap procedure over finite grid points to deliver valid uniform confidence bands. We derive an explicit bound on the resulting coverage error that separates discretization effects from the intrinsic high-dimensional bootstrap approximation error on the grid. The bound yields a transparent workflow for choosing the grid size in practice, and we illustrate the implementation through an example of kernel density estimation.


💡 Research Summary

The paper addresses a practical problem that arises whenever uniform confidence bands for an unknown function are constructed using multiplier bootstrap methods. While the theory guarantees that the conditional (1 – α) quantile of the supremum of the studentized bootstrap process over a continuous index set yields asymptotically correct coverage, evaluating this supremum is computationally infeasible. Practitioners therefore replace the continuous supremum by the maximum over a finite evaluation grid and compute the critical value from the grid‑based bootstrap statistics. The authors ask: how fine must the grid be for the resulting confidence band to retain its theoretical validity?
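The grid-based procedure described above can be sketched in a few lines. This is an illustrative implementation, not the authors' code: the function name, the Gaussian multipliers, and the `(n, m)` layout of the studentized influence-function values are our own assumptions about how such a routine would typically be organized.

```python
import numpy as np

def grid_bootstrap_critical_value(influence, alpha=0.05, n_boot=2000, rng=None):
    """Multiplier-bootstrap critical value over a finite evaluation grid.

    influence : (n, m) array whose row i holds the centered, studentized
    influence-function values for observation i at each of the m grid
    points x_1, ..., x_m.  (Names and shapes are illustrative.)
    """
    rng = np.random.default_rng(rng)
    n, m = influence.shape
    sup_stats = np.empty(n_boot)
    for b in range(n_boot):
        xi = rng.standard_normal(n)            # i.i.d. Gaussian multipliers
        proc = xi @ influence / np.sqrt(n)     # bootstrap process on the grid
        sup_stats[b] = np.max(np.abs(proc))    # sup taken over grid points only
    # conditional (1 - alpha) quantile of the grid supremum
    return np.quantile(sup_stats, 1 - alpha)
```

The key point, which the paper's error bound addresses, is that the maximum in the loop runs over the m grid points rather than the full continuous index set, so the returned quantile is only an approximation to the ideal critical value.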

The authors consider a generic non-parametric estimator of the form
\(\hat f_{h_n}(x) = n^{-1}\sum_{i=1}^n \psi_{h_n}(X_i, x)\)
with i.i.d. data \(\{X_i\}_{i=1}^n\) and a smoothing parameter \(h_n\). The studentized statistic is
\(T_n(x) = \sqrt{n}\,\{\hat f_{h_n}(x) - E[\hat f_{h_n}(x)]\}\,/\,\hat\sigma_n(x),\)
where \(\hat\sigma_n(x)\) denotes a pointwise standard-error estimate.
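For the paper's kernel density illustration, the generic estimator specializes to \(\psi_{h_n}(X_i, x) = h_n^{-1} K((X_i - x)/h_n)\). The following sketch puts the pieces together for a Gaussian kernel; the function name, bandwidth handling, and studentization via the sample standard deviation of \(\psi\) are our own illustrative choices, not the paper's exact construction.

```python
import numpy as np

def kde_uniform_band(data, grid, h, alpha=0.05, n_boot=2000, seed=0):
    """Grid-based multiplier-bootstrap uniform band for a kernel density
    estimate.  Illustrative sketch with a Gaussian kernel K, so that
    psi_h(X_i, x) = K((X_i - x)/h) / h."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # psi[i, j] = K((X_i - x_j)/h) / h  for each observation i, grid point x_j
    u = (data[:, None] - grid[None, :]) / h
    psi = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * h)
    f_hat = psi.mean(axis=0)                 # \hat f_h(x) on the grid
    sigma = psi.std(axis=0, ddof=1)          # pointwise standard deviation of psi
    studentized = (psi - f_hat) / sigma      # centered, studentized influence values
    # multiplier bootstrap: sup over the finite grid of the studentized process
    xi = rng.standard_normal((n_boot, n))
    sup_stats = np.abs(xi @ studentized / np.sqrt(n)).max(axis=1)
    cv = np.quantile(sup_stats, 1 - alpha)   # grid-based critical value
    half_width = cv * sigma / np.sqrt(n)
    return f_hat, f_hat - half_width, f_hat + half_width
```

The paper's bound quantifies how the coverage of such a band degrades as the grid coarsens, which in turn suggests how many grid points to use for a target accuracy.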

