End-to-End Training for Unified Tokenization and Latent Denoising

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Latent diffusion models (LDMs) enable high-fidelity synthesis by operating in learned latent spaces. However, training state-of-the-art LDMs requires complex staging: a tokenizer must be trained first, before the diffusion model can be trained in the frozen latent space. We propose UNITE, an autoencoder architecture for unified tokenization and latent diffusion. UNITE consists of a Generative Encoder that serves as both image tokenizer and latent generator via weight sharing. Our key insight is that tokenization and generation can be viewed as the same latent inference problem under different conditioning regimes: tokenization infers latents from fully observed images, whereas generation infers them from noise together with text or class conditioning. Motivated by this, we introduce a single-stage training procedure that jointly optimizes both tasks via two forward passes through the same Generative Encoder. The shared parameters enable gradients to jointly shape the latent space, encouraging a “common latent language”. Across image and molecule modalities, UNITE achieves near-state-of-the-art performance without adversarial losses or pretrained encoders (e.g., DINO), reaching FID 2.12 and 1.73 for Base and Large models on ImageNet 256×256. We further analyze the Generative Encoder through the lenses of representation alignment and compression. These results show that single-stage joint training of tokenization and generation from scratch is feasible.


💡 Research Summary

The paper introduces UNITE (Unified Tokenization and latent dEnoising), a novel architecture that collapses the traditionally staged pipeline of latent diffusion models into a single end‑to‑end training process. In conventional LDMs, a tokenizer (often a VAE or VQ‑VAE encoder) is first trained to compress images into discrete or continuous latent tokens, then frozen while a diffusion model learns to generate in that latent space. This separation prevents the generation objective from influencing the representation learned by the tokenizer, leading to sub‑optimal latent spaces and added engineering complexity.

UNITE addresses this by defining a single Generative Encoder (GE_θ) that serves both as the tokenizer and as the denoiser. GE_θ operates in two modes: (i) tokenization, where it maps an input image x to a clean latent sequence z₀ = GE_θ(x); (ii) generation, where it receives a noisy latent z_t (produced by corrupting z₀ with Gaussian noise at a chosen diffusion step t) together with the timestep embedding, and predicts the denoised latent ẑ₀ = GE_θ(z_t, t). The same parameters θ are shared across both passes, so gradients from the reconstruction loss (a pixel-space L_rec between the decoded image D_ϕ(z₀) and the original x) and from the flow-matching denoising loss (L_flow between ẑ₀ and the target clean latent) jointly shape the encoder.
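The two operating modes can be sketched with a toy stand-in for GE_θ. Everything below is an illustrative assumption (the class name, the linear layer, the tanh nonlinearity, the scalar timestep shift); only the weight-sharing pattern between the tokenize and denoise calls mirrors the description above:

```python
import numpy as np

class ToyGenerativeEncoder:
    """Toy stand-in for GE_theta: one shared weight matrix used in both modes.
    All names, shapes, and the nonlinearity are illustrative assumptions."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # shared parameters theta

    def tokenize(self, x):
        # Mode (i): fully observed input -> clean latent z0 = GE_theta(x)
        return np.tanh(x @ self.W)

    def denoise(self, z_t, t):
        # Mode (ii): noisy latent plus timestep -> predicted clean latent.
        # A real model would use a learned timestep embedding, not a scalar shift.
        return np.tanh((z_t + t) @ self.W)

ge = ToyGenerativeEncoder(dim=4)
x = np.ones((1, 4))
z0 = ge.tokenize(x)                                # tokenization pass
t = 0.5
noise = np.random.default_rng(1).standard_normal(z0.shape)
z_t = np.sqrt(1 - t) * z0 + np.sqrt(t) * noise     # corrupt z0 with Gaussian noise
z0_hat = ge.denoise(z_t, t)                        # denoising pass through the SAME weights
```

Because both calls route through the same `self.W`, gradients from either objective would update the same parameters, which is the weight-sharing property the paper relies on.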

Training proceeds with two forward passes per sample: first encode the image, then corrupt and re‑encode the latent. No adversarial losses, no pretrained visual encoders (e.g., DINO, MAE), and no separate KL regularization are used. The flow‑matching objective replaces the classic score‑matching loss, providing stable training across diffusion steps and allowing the model to learn an efficient mapping from Gaussian noise to the latent manifold.
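The two-pass training step can be sketched as follows. The toy linear encoder/decoder, the equal loss weighting, and the linear interpolation path are assumptions for illustration; the summary states only that a reconstruction loss and a flow-matching denoising loss are combined over two forward passes through shared weights:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical shared parameters (stand-in for GE_theta) and decoder D_phi.
W_enc = rng.standard_normal((dim, dim)) / np.sqrt(dim)
W_dec = rng.standard_normal((dim, dim)) / np.sqrt(dim)

def ge(h):
    # The shared network used in BOTH forward passes.
    return np.tanh(h @ W_enc)

x = rng.standard_normal((2, dim))        # a toy "image" batch

# Pass 1: tokenization -- encode the clean input and reconstruct it.
z0 = ge(x)
x_rec = z0 @ W_dec
L_rec = np.mean((x_rec - x) ** 2)        # pixel-space reconstruction loss

# Pass 2: denoising -- corrupt z0 along a linear path and re-encode.
t = rng.uniform(size=(2, 1))             # one diffusion step per sample
eps = rng.standard_normal(z0.shape)
z_t = (1 - t) * z0 + t * eps             # linear (flow-matching style) interpolation
z0_hat = ge(z_t)                         # same parameters W_enc as pass 1
L_flow = np.mean((z0_hat - z0) ** 2)     # predict the clean latent

loss = L_rec + L_flow                    # joint single-stage objective (weighting assumed)
```

Note that no adversarial term, pretrained-encoder alignment term, or KL term appears in the combined loss, matching the recipe described above.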

Empirically, UNITE achieves near‑state‑of‑the‑art image synthesis on ImageNet‑256: FID 2.12 for the Base model and 1.73 for the Large model, comparable to or better than multi‑stage pipelines that rely on large pretrained tokenizers. The approach also generalizes to molecular graph generation, where it matches or exceeds the quality of VAE‑based tokenizers followed by diffusion.

A series of ablations explore the role of weight sharing. Even when the encoder and denoiser are trained with separate parameters, layer‑wise centered kernel alignment (CKA) shows strong representational similarity, indicating that tokenization and denoising are intrinsically compatible tasks. Nonetheless, full parameter sharing yields the best trade‑off between reconstruction fidelity (rFID) and generation quality (gFID), supporting the authors’ hypothesis that a “common latent language” emerges when both objectives shape the same network.
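Linear CKA, the similarity measure used in this ablation, can be computed with the standard formula below. This is generic CKA code, not the authors' implementation; the activation matrices `A` here are random placeholders rather than real encoder/denoiser features:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices
    of shape (n_samples, n_features). Standard formula, not paper-specific."""
    X = X - X.mean(axis=0)                         # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 16))                 # placeholder layer activations
same = linear_cka(A, A)                            # identical representations
scaled = linear_cka(A, 2 * A + 3)                  # scaled and shifted copy
```

CKA is invariant to isotropic scaling, feature shifts, and orthogonal transforms, which is why it is a common choice for comparing layers of separately trained networks, as in the weight-sharing ablation above.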

The paper situates UNITE among related work: classic VAEs jointly learn reconstruction and a simple Gaussian prior but lack the expressive power of modern diffusion generators; VQ‑VAE/VQ‑GAN provide discrete latents but still require a downstream diffusion model; recent methods like REPA align diffusion features with pretrained SSL encoders to stabilize training, yet they re‑introduce external supervision and extra stages. UNITE’s contribution is a clean, single‑stage recipe that requires only two losses and a shared encoder‑decoder pair, dramatically simplifying the training pipeline while preserving high‑fidelity synthesis.

In summary, UNITE demonstrates that tokenization and latent denoising can be unified in one network trained end‑to‑end from scratch. This reduces computational overhead, eliminates the need for external teachers, and opens the door to more integrated multimodal generative systems. The work suggests a new design principle for future generative models: jointly optimize representation and generation within a shared latent space, rather than treating them as separate, sequential problems.

