LL-GaussianMap: Zero-shot Low-Light Image Enhancement via 2D Gaussian Splatting Guided Gain Maps
Significant progress has been made in low-light image enhancement with respect to visual quality. However, most existing methods operate primarily in the pixel domain or rely on implicit feature representations, so the intrinsic geometric structural priors of images are often neglected. 2D Gaussian Splatting (2DGS) has emerged as a prominent explicit scene representation technique, characterized by superior structural fitting capability and high rendering efficiency; nevertheless, its use in low-level vision tasks remains unexplored. To bridge this gap, LL-GaussianMap is proposed as the first unsupervised framework to incorporate 2DGS into low-light image enhancement. Unlike conventional methods, the enhancement task is formulated as a gain map generation process guided by 2DGS primitives. The proposed method comprises two stages. First, high-fidelity structural reconstruction is performed with 2DGS. Second, data-driven enhancement dictionary coefficients are rendered via the rasterization mechanism of Gaussian splatting within a novel unified enhancement module. This design incorporates the structural perception of 2DGS into gain map generation, preserving edges and suppressing artifacts during enhancement, while unsupervised learning removes the reliance on paired data. Experimental results demonstrate that LL-GaussianMap achieves superior enhancement performance with an extremely low storage footprint, highlighting the effectiveness of explicit Gaussian representations for image enhancement.
💡 Research Summary
LL‑GaussianMap introduces a novel zero‑shot, unsupervised framework for low‑light image enhancement (LLIE) that leverages 2D Gaussian Splatting (2DGS) as an explicit geometric representation. Traditional LLIE methods either operate directly on pixel grids or rely on implicit feature embeddings, which often ignore the continuous geometric priors of natural images, leading to edge blurring, texture loss, and artifacts in severely dark regions.
The proposed pipeline reframes enhancement as a gain‑map generation problem guided by a set of Gaussian primitives. It consists of two tightly coupled stages.
Stage 1 – Structural Reconstruction: The low‑light input is optimized as a collection of adaptive 2D Gaussian primitives. Each primitive encodes position, covariance, opacity, and color, enabling high‑fidelity reconstruction of geometric structures such as edges and fine textures. After optimization, these primitives become fixed spatial anchors for the subsequent stage.
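The primitives in Stage 1 are rendered by alpha-compositing anisotropic 2D Gaussians onto the pixel grid. The paper's rasterizer is not reproduced here; the following is a minimal NumPy sketch of how such primitives (position, covariance, color, opacity) could render to an image, with all function names and parameter values illustrative.

```python
import numpy as np

def render_gaussians(means, covs, colors, opacities, H, W):
    """Render N anisotropic 2D Gaussians into an H x W RGB image by
    front-to-back weighted accumulation -- a simplified stand-in for
    the 2DGS rasterizer described in the summary."""
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2), (x, y) order
    img = np.zeros((H, W, 3))
    acc = np.zeros((H, W, 1))                               # accumulated opacity
    for mu, cov, col, op in zip(means, covs, colors, opacities):
        d = grid - mu                                       # offset from center
        inv = np.linalg.inv(cov)
        # Mahalanobis distance d^T Sigma^{-1} d per pixel
        m = np.einsum('hwi,ij,hwj->hw', d, inv, d)
        alpha = (op * np.exp(-0.5 * m))[..., None]          # per-pixel weight
        w = alpha * (1.0 - acc)                             # compositing weight
        img += w * col
        acc += w
    return img

# Toy example: two Gaussians on a 32x32 canvas (values are made up).
means = np.array([[8.0, 8.0], [24.0, 20.0]])
covs = np.array([[[9.0, 0.0], [0.0, 9.0]],
                 [[16.0, 4.0], [4.0, 8.0]]])
colors = np.array([[1.0, 0.2, 0.2], [0.2, 0.4, 1.0]])
opacities = np.array([0.9, 0.8])
out = render_gaussians(means, covs, colors, opacities, 32, 32)
```

In a full pipeline these parameters would be optimized by gradient descent against the low-light input, after which they are frozen as the spatial anchors for Stage 2.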
Stage 2 – Gain‑Map Synthesis: A compact illumination‑transformation dictionary is pre‑computed from a large corpus of paired low‑ and normal‑light images. The dictionary contains typical illumination atoms that capture diverse lighting conditions. A lightweight network (e.g., a shallow MLP) predicts, for each Gaussian primitive, a set of mixing coefficients that weight the dictionary atoms. These coefficients are rendered through the differentiable rasterization mechanism of 2DGS, producing a continuous, structure‑aware gain map. The final enhanced image is obtained by pixel‑wise multiplication of this gain map with the original low‑light image.
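The coefficient-splatting and mixing step above can be sketched as follows. The source does not specify how the dictionary atoms are parameterized, so the scalar gain atoms, coefficient values, and the normalized-splatting scheme below are illustrative assumptions.

```python
import numpy as np

def splat_coefficients(means, covs, coeffs, H, W):
    """Splat per-Gaussian dictionary coefficients into a per-pixel
    coefficient field via normalized Gaussian weights -- a simplified
    stand-in for the differentiable 2DGS rasterization step."""
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs, ys], -1).astype(np.float64)
    K = coeffs.shape[1]
    num = np.zeros((H, W, K))
    den = np.zeros((H, W, 1))
    for mu, cov, c in zip(means, covs, coeffs):
        d = grid - mu
        w = np.exp(-0.5 * np.einsum('hwi,ij,hwj->hw',
                                    d, np.linalg.inv(cov), d))[..., None]
        num += w * c
        den += w
    return num / np.maximum(den, 1e-8)                      # normalized blend

# Hypothetical illumination dictionary: K = 3 scalar gain atoms.
atoms = np.array([1.0, 2.0, 4.0])           # identity, mild, strong boost
means = np.array([[4.0, 4.0], [12.0, 12.0]])
covs = np.tile(np.eye(2) * 25.0, (2, 1, 1))
coeffs = np.array([[0.0, 0.0, 1.0],         # dark region: strong gain
                   [1.0, 0.0, 0.0]])        # bright region: keep as-is
field = splat_coefficients(means, covs, coeffs, 16, 16)     # (16, 16, 3)
gain = field @ atoms                                        # per-pixel gain map
low = np.full((16, 16, 3), 0.2)                             # toy low-light image
enhanced = np.clip(low * gain[..., None], 0.0, 1.0)         # pixel-wise multiply
```

Because the coefficients live on the Gaussians rather than on pixels, the resulting gain map inherits the primitives' spatial support and varies smoothly within structures while changing across edges.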
Key technical contributions include:
- Explicit Structure‑Guided Enhancement: By directly exploiting the geometric information encoded in 2DGS, the method preserves edge sharpness and suppresses artifacts that commonly arise in pixel‑domain approaches.
- Unified Enhancement Module: The combination of a data‑driven dictionary and Gaussian rasterization yields a continuous gain map that is both expressive and computationally efficient.
- Zero‑Shot Unsupervised Learning: No paired low‑/normal‑light data are required during training; the framework learns solely from unpaired low‑light images and the pre‑computed dictionary, dramatically improving generalization to real‑world scenarios.
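The source does not state which unsupervised objectives LL‑GaussianMap optimizes. Purely for illustration, the sketch below shows two losses commonly used in zero-shot LLIE work: an exposure-control loss that pulls local brightness toward a target level, and a total-variation smoothness prior on the gain map.

```python
import numpy as np

def exposure_loss(img, target=0.6, patch=4):
    """Mean squared deviation of local average brightness from a target
    exposure level -- a common reference-free LLIE objective."""
    gray = img.mean(axis=-1)
    H, W = gray.shape
    pooled = gray.reshape(H // patch, patch, W // patch, patch).mean(axis=(1, 3))
    return float(((pooled - target) ** 2).mean())

def tv_loss(gain):
    """Total-variation penalty discouraging abrupt gain changes that
    would otherwise create halos or artifacts."""
    dy = np.abs(np.diff(gain, axis=0)).mean()
    dx = np.abs(np.diff(gain, axis=1)).mean()
    return float(dy + dx)

dark = np.full((8, 8, 3), 0.1)
bright = np.full((8, 8, 3), 0.6)
e_dark, e_bright = exposure_loss(dark), exposure_loss(bright)  # dark is penalized more
flat_tv = tv_loss(np.full((8, 8), 1.5))                        # flat gain: zero penalty
```

Objectives of this kind need only the low-light input itself, which is what allows training without paired ground truth.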
Extensive experiments on public benchmarks (e.g., LOL, LIME, SID, MEF) demonstrate that LL‑GaussianMap outperforms state‑of‑the‑art supervised and unsupervised methods in PSNR, SSIM, and LPIPS, while delivering superior visual quality, especially in preserving fine details and avoiding over‑enhancement. The model’s storage footprint is exceptionally small—only the Gaussian parameters and dictionary coefficients need to be stored, resulting in a model size of a few tens of megabytes, suitable for mobile and embedded deployment.
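For reference on the reported metrics, PSNR between a reference and an enhanced image is computed as follows; this is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = float(((ref - test) ** 2).mean())
    return 10.0 * np.log10(peak ** 2 / mse)

# Uniform error of 0.1 gives MSE = 0.01, hence 10 * log10(1 / 0.01) = 20 dB.
ref = np.full((4, 4), 0.5)
test = np.full((4, 4), 0.4)
print(round(psnr(ref, test), 2))  # 20.0
```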
Ablation studies explore the impact of the number of Gaussian primitives, dictionary size, and network depth, revealing a favorable trade‑off between performance and efficiency. The authors also discuss potential extensions to other low‑level vision tasks such as denoising, super‑resolution, and color correction, noting that the differentiable nature of 2DGS facilitates integration with physics‑based loss functions for more interpretable models.
In summary, LL‑GaussianMap pioneers the use of explicit 2D Gaussian splatting for low‑light enhancement, reformulating the problem as a structure‑aware gain‑map synthesis. By marrying geometric reconstruction with a learned illumination dictionary, it achieves high‑quality, artifact‑free enhancement without the need for paired training data, opening new avenues for efficient, structure‑preserving image restoration.