Calibration and Transformation-Free Weight-Only LLMs Quantization via Dynamic Grouping

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Large Language Models (LLMs) deliver strong performance but are difficult to deploy under tight memory and compute constraints. Low-bit post-training quantization (PTQ) is a promising direction; however, it typically relies on calibration data, auxiliary transformations, and GPU tools. To address these limitations, we propose MSB (Multi Scale Binary), a calibration-free and transformation-free PTQ method that generalizes binary quantization to multi-bit settings. MSB optimizes a dynamic grouping criterion that minimizes within-group variance, yielding group-wise multi-scale levels that can be applied consistently across granularities, from per-tensor to block-wise configurations with 64-element groups per row, without calibration or intermediate transforms. We implement the optimization in a CPU-based solver for the quantization step and evaluate using standard bfloat16 execution without low-bit packing. On Llama 3.2 3B, MSB achieves 8.43 perplexity on WikiText-2 under 4-bit weight-only block-wise quantization, compared to 7.81 in full precision and 12.23 with GPTQ under its default setup. Overall, MSB provides a new optimization perspective for low-bit PTQ while simplifying the pipeline by removing calibration and transformations.


💡 Research Summary

This paper introduces MSB (Multi Scale Binary), a novel post-training quantization (PTQ) method for Large Language Models that operates without calibration data or auxiliary transformations. The core challenge addressed is the efficient deployment of LLMs under tight memory constraints, specifically focusing on 4-bit weight-only quantization. Existing low-bit PTQ methods typically rely on calibration datasets to estimate layer-wise sensitivity or employ function-preserving reparameterizations (like rotations) to mitigate outliers, adding complexity and dependencies to the deployment pipeline.

MSB circumvents these requirements by generalizing the objective of 1-bit binary quantization to multi-bit settings. Instead of approximating an entire weight matrix with a single scale and binary values ({±α}), MSB dynamically partitions the matrix into multiple groups. Each group is approximated by its own binary codebook with a group-specific scale factor ({±α_i}). The optimization objective is to find a partition that minimizes the sum of within-group quantization errors (each equivalent to the variance of absolute values within the group) plus a regularization term that penalizes the inverse of group size to prevent excessive fragmentation.
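The objective described above can be sketched concretely. For a group with scale α, the best single-scale binary approximation uses α equal to the mean of the absolute values, so the residual error reduces to the within-group (absolute-value) variance times the group size. The helper names and the regularization weight `lam` below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def group_cost(abs_w: np.ndarray, lam: float = 1.0) -> float:
    """Regularized cost of one group under a binary codebook {±alpha}.

    The optimal alpha is the mean of |w| over the group, so the squared
    quantization error equals the within-group variance of |w| times the
    group size; lam / |G| penalizes excessively small groups.
    """
    alpha = abs_w.mean()                # optimal per-group scale
    err = np.sum((abs_w - alpha) ** 2)  # within-group squared error
    return err + lam / abs_w.size       # regularized cost

def partition_cost(abs_w_sorted: np.ndarray, boundaries, lam: float = 1.0) -> float:
    """Total cost of a partition of sorted |w| given by boundary indices."""
    total, start = 0.0, 0
    for end in list(boundaries) + [len(abs_w_sorted)]:
        total += group_cost(abs_w_sorted[start:end], lam)
        start = end
    return total
```

All four MSB algorithms can be read as different search strategies over the `boundaries` argument of a cost function of this shape.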

To solve this combinatorial optimization problem efficiently, the authors propose a family of four algorithms offering different accuracy-runtime trade-offs:

1) Dynamic Grouping (DG): A dynamic programming approach that guarantees a global optimum but is computationally expensive for large matrices.
2) Greedy Grouping (GG): A heuristic that starts with singleton groups and iteratively merges the adjacent pair with the lowest merge cost.
3) Windowed Greedy Merging (WGM): A more efficient variant that first divides sorted weights into fixed-size windows and performs greedy merging only between these windows.
4) WGM with Local Optimization (WGM-LO): A hybrid method that uses equal-range binning for fast initialization followed by a lightweight stochastic local search to refine group boundaries.
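As a rough illustration of the greedy variant (GG), the sketch below starts from singleton groups over the sorted absolute values and merges the cheapest adjacent pair until no merge lowers the regularized cost. The stopping rule and the `lam` weight are assumptions for illustration; the paper's exact merge criterion may differ:

```python
import numpy as np

def greedy_grouping(weights: np.ndarray, lam: float = 1.0):
    """Illustrative Greedy Grouping sketch: merge adjacent groups of
    sorted |w| while the cheapest merge still reduces total cost."""
    abs_w = np.sort(np.abs(weights).ravel())
    groups = [abs_w[i:i + 1] for i in range(abs_w.size)]

    def cost(g: np.ndarray) -> float:
        # within-group squared error plus inverse-size regularizer
        return float(np.sum((g - g.mean()) ** 2)) + lam / g.size

    while len(groups) > 1:
        deltas = [cost(np.concatenate([groups[i], groups[i + 1]]))
                  - cost(groups[i]) - cost(groups[i + 1])
                  for i in range(len(groups) - 1)]
        i = int(np.argmin(deltas))
        if deltas[i] >= 0:  # no merge reduces cost: stop
            break
        groups[i:i + 2] = [np.concatenate([groups[i], groups[i + 1]])]
    return groups
```

This naive version rescans all adjacent pairs each iteration, which is quadratic overall; the windowed variants (WGM, WGM-LO) exist precisely to cut that cost on large weight matrices.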

A key advantage of MSB is its consistency across granularities. The same objective and algorithmic template can be applied for both per-tensor quantization (yielding effective ~6-bit compression) and block-wise quantization (with groups of 64 elements, achieving 4-bit). The method is implemented in a CPU-based solver, and evaluation is performed using standard bfloat16 execution without custom low-bit kernels, emphasizing pipeline simplicity.
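To make the multi-scale reconstruction concrete: each weight is rebuilt as sign(w) · α_i, where α_i is the scale of the magnitude group it falls into. The sketch below uses equal-size groups over sorted magnitudes purely for illustration (the paper optimizes the partition instead), and the 4-bit accounting (3-bit group index plus 1 sign bit with 8 levels) is our inference, not a figure from the paper:

```python
import numpy as np

def msb_quantize_block(block: np.ndarray, n_levels: int = 8):
    """Illustrative sketch, not the paper's optimized grouping: split one
    block's sorted |w| into n_levels equal-size groups, storing one scale
    alpha_i (the group mean of |w|) per group, plus a sign and a group
    index per weight (3-bit index + 1 sign bit for 8 levels)."""
    abs_w = np.abs(block)
    order = np.argsort(abs_w)
    idx = np.empty(block.size, dtype=np.int64)
    # equal-size binning over sorted magnitudes (a crude stand-in for
    # the optimized partition found by DG/GG/WGM)
    idx[order] = np.arange(block.size) * n_levels // block.size
    alphas = np.array([abs_w[idx == g].mean() for g in range(n_levels)])
    return np.sign(block), idx, alphas

def msb_dequantize_block(signs, idx, alphas):
    """Reconstruct w_hat = sign(w) * alpha_{group(w)}."""
    return signs * alphas[idx]
```

Even this crude grouping reconstructs a block more accurately than a single global scale, which is the intuition behind generalizing {±α} to {±α_i}.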

Empirical results on the Llama 3.2 3B model demonstrate the effectiveness of MSB. Under 4-bit block-wise weight-only quantization, MSB achieves a perplexity of 8.43 on WikiText-2, which is remarkably close to the full precision baseline (7.81) and significantly superior to the strong calibration-based baseline GPTQ (12.23) under its default setup. This shows that high-quality low-bit quantization is possible using only the intrinsic information within the pretrained weights, without external calibration data or complex transformations. In summary, MSB provides a new optimization-centric perspective for PTQ that greatly simplifies the quantization pipeline while maintaining competitive accuracy.

