$μ$pscaling small models: Principled warm starts and hyperparameter transfer

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

Modern large-scale neural networks are often trained and released in multiple sizes to accommodate diverse inference budgets. To improve efficiency, recent work has explored model upscaling: initializing larger models from trained smaller ones in order to transfer knowledge and accelerate convergence. However, this method can be sensitive to hyperparameters that need to be tuned at the target upscaled model size, which is prohibitively costly to do directly. It remains unclear whether the most common workaround – tuning on smaller models and extrapolating via hyperparameter scaling laws – is still sound when using upscaling. We address this with principled approaches to upscaling with respect to model widths and efficiently tuning hyperparameters in this setting. First, motivated by $μ$P and any-dimensional architectures, we introduce a general upscaling method applicable to a broad range of architectures and optimizers, backed by theory guaranteeing that models are equivalent to their widened versions and allowing for rigorous analysis of infinite-width limits. Second, we extend the theory of $μ$Transfer to a hyperparameter transfer technique for models upscaled using our method and empirically demonstrate that this method is effective on realistic datasets and architectures.


💡 Research Summary

Modern deep learning pipelines often release families of models at multiple scales to accommodate different inference budgets. Smaller models enable rapid prototyping and scaling‑law analysis, while larger models are needed for final deployment. Training each size from scratch is wasteful, prompting interest in “model upscaling”: initializing a larger network from a trained smaller one so that the larger model inherits the smaller model’s knowledge. However, upscaled models are highly sensitive to hyperparameters such as learning rate, weight decay, and momentum. Practitioners typically tune these hyperparameters on the small model and extrapolate using scaling laws, but it is unclear whether this practice remains valid when upscaling is involved.

The paper provides a principled solution grounded in the μ‑Parameterization (μP) framework and any‑dimensional architecture theory. The first contribution is a general upscaling method that guarantees both static and dynamic equivalence between networks of different widths for a broad class of architectures and optimizers. Static equivalence means that a widened network, obtained by duplicating each weight matrix k times and scaling by 1/k, computes exactly the same function as the original network at initialization. Dynamic equivalence extends this guarantee throughout training: by appropriately rescaling per‑layer learning rates, weight‑decay coefficients, and any additional optimizer hyperparameters (e.g., Adam’s ε), the widened network follows an identical trajectory in function space. The authors prove these statements for vanilla SGD (Propositions 2.1 and 2.2) and then generalize to entrywise optimizers such as Adam and AdamW (Proposition 2.4), assuming the optimizer’s update rule is homogeneous of degree m. The required scaling rules are γ↑ = k^m · k^{‑1} γ for learning rates, ε↑ = k^{‑1} ε for auxiliary parameters, and λ↑ = k^{‑1} λ (or k^{‑1} k^{‑m} λ) for weight decay, where k is the width‑multiplication factor for each layer.
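To make the static-equivalence claim concrete, here is a minimal numpy sketch of width-k duplication for a two-layer ReLU MLP. The layout (tiling the fan-out dimension of the first matrix, and tiling plus 1/k-rescaling the widened fan-in dimension of the second) is one standard way to realize "duplicate and scale by 1/k"; variable names and the specific architecture are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                          # width-multiplication factor
d_in, d_h, d_out = 4, 8, 2

# Small model: y = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(d_h, d_in))
W2 = rng.normal(size=(d_out, d_h))

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Widen: duplicate each hidden unit k times. W1's fan-in is unchanged,
# so it is tiled without rescaling; W2's fan-in grows by k, so its tiled
# copy is scaled by 1/k to preserve the computed function.
W1_up = np.tile(W1, (k, 1))        # shape (k*d_h, d_in)
W2_up = np.tile(W2, (1, k)) / k    # shape (d_out, k*d_h)

x = rng.normal(size=d_in)
y_small = forward(W1, W2, x)
y_wide = forward(W1_up, W2_up, x)
assert np.allclose(y_small, y_wide)  # widened net computes the same function
```

The hidden activations of the widened network are exactly k stacked copies of the small network's activations, which is what makes the 1/k correction on the downstream matrix sufficient for any entrywise nonlinearity.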

Building on this theory, the authors propose an upscaling algorithm. After training a small model, they construct a widened model by applying the weight‑duplication and scaling transformation. To avoid the widened network being trapped in the low‑dimensional subspace spanned by the duplicated parameters, they inject a small amount of Gaussian noise into each layer. The noise magnitude is chosen according to μP’s “optimal feature learning” condition, ensuring that the enlarged network can escape the subspace and exploit its additional capacity. This step is crucial: without noise, the training dynamics of the upscaled model mirror those of the small model and gain no benefit from the extra parameters.
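The widen-then-perturb step might be sketched as follows. The function name and the particular noise scale (sigma / sqrt(fan-in), a common μP-style choice) are assumptions for illustration; the paper derives the exact magnitude from μP's optimal-feature-learning condition.

```python
import numpy as np

rng = np.random.default_rng(1)

def widen_and_perturb(W, k, widen_rows, widen_cols, sigma=0.01, rng=rng):
    """Duplicate W along its widened dimensions, rescale to preserve the
    function, then add symmetry-breaking Gaussian noise. The noise scale
    sigma / sqrt(fan_in) is an illustrative μP-style assumption, not the
    paper's exact prescription."""
    reps = (k if widen_rows else 1, k if widen_cols else 1)
    W_up = np.tile(W, reps)
    if widen_cols:                 # fan-in grew by k: rescale by 1/k
        W_up = W_up / k
    fan_in = W_up.shape[1]
    W_up = W_up + (sigma / np.sqrt(fan_in)) * rng.normal(size=W_up.shape)
    return W_up

# Usage: widen an 8x4 input-side matrix to 24x4 with symmetry-breaking noise
W = rng.normal(size=(8, 4))
W_up = widen_and_perturb(W, k=3, widen_rows=True, widen_cols=False)
assert W_up.shape == (24, 4)
```

Without the noise term the k duplicated blocks receive identical gradients forever, so the perturbation is what lets the widened model leave the duplicated-parameter subspace.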

The second major contribution is an extension of μ‑Transfer (μTransfer). Because the hyperparameter scaling rules are derived analytically, the optimal hyperparameters found on the small model can be transferred directly to the upscaled model without any additional tuning—a “zero‑shot hyperparameter transfer.” The authors validate this claim empirically across a variety of architectures (MLPs, ResNets, GPT‑2 transformers) and datasets (CIFAR‑10, ImageNet, WikiText‑103). In all cases, the upscaled models achieve comparable or better final performance than models trained from scratch, while requiring substantially fewer training steps and FLOPs. Notably, for large transformers the method reduces training cost by 30–40% and retains the same validation perplexity.
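The transfer rules quoted above (γ↑ = k^m · k^{−1} γ, ε↑ = k^{−1} ε, λ↑ = k^{−1} λ) are simple enough to apply in a few lines. The helper below is a hypothetical sketch; in practice these rescalings are applied per layer, using each layer's own width factor k, and the degree m depends on the optimizer's update rule (m = 1 for plain SGD).

```python
def transfer_hparams(gamma, eps, lam, k, m):
    """Rescale small-model hyperparameters for a layer widened by factor k,
    assuming an entrywise optimizer whose update is homogeneous of degree m.
    Follows the summary's stated rules; treat this as a sketch, not the
    paper's reference implementation."""
    return {
        "lr": gamma * k**m / k,    # learning rate:  gamma_up = k^m * k^-1 * gamma
        "eps": eps / k,            # auxiliary eps:  eps_up   = k^-1 * eps
        "weight_decay": lam / k,   # weight decay:   lam_up   = k^-1 * lam
    }

# For SGD (m = 1) the learning rate is unchanged, while eps and weight
# decay shrink by the width factor k.
hp = transfer_hparams(gamma=0.1, eps=1e-8, lam=0.01, k=4, m=1)
```

Note that the summary also mentions an alternative weight-decay rule, λ↑ = k^{−1} k^{−m} λ, whose applicability depends on how decay enters the update; the variant used here is the simpler of the two.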

To provide a rigorous infinite‑width perspective, the paper leverages Tensor Programs machinery. By formulating the upscaled training dynamics as a modified tensor program, the authors prove that the equivalence results hold in the infinite‑width limit, and they characterize the limiting dynamics for common architectures. This opens the door to further theoretical analysis of upscaling, including potential extensions to depth‑wise scaling (which the authors leave for future work).

The experimental section details implementation specifics, including how the μP‑based widening is integrated into popular deep‑learning libraries, the choice of noise variance, and ablation studies that confirm the necessity of both weight scaling and noise injection. The authors also discuss limitations: the theory assumes homogeneous activation functions and may be less accurate for highly non‑linear or non‑smooth activations; extremely large width multipliers can introduce numerical instability; and the current framework focuses on width scaling rather than depth or architectural modifications such as mixture‑of‑experts.

In summary, the paper delivers a mathematically grounded, broadly applicable framework for scaling up neural networks from smaller pretrained checkpoints. By coupling weight‑duplication with analytically derived hyperparameter scaling and a modest symmetry‑breaking perturbation, it enables efficient warm‑starts and zero‑shot hyperparameter transfer, offering both practical speed‑ups for large‑scale training and a solid foundation for future theoretical work on model scaling.

