Characterizing the Predictive Impact of Modalities with Supervised Latent-Variable Modeling
Despite the recent success of Multimodal Large Language Models (MLLMs), existing approaches predominantly assume the availability of multiple modalities during training and inference. In practice, multimodal data is often incomplete because modalities may be missing, collected asynchronously, or available only for a subset of examples. In this work, we propose PRIMO, a supervised latent-variable imputation model that quantifies the predictive impact of any missing modality within the multimodal learning setting. PRIMO enables the use of all available training examples, whether modalities are complete or partial. Specifically, it models the missing modality through a latent variable that captures its relationship with the observed modality in the context of prediction. During inference, we draw many samples from the learned distribution over the missing modality to both obtain the marginal predictive distribution (for the purpose of prediction) and analyze the impact of the missing modalities on the prediction for each instance. We evaluate PRIMO on a synthetic XOR dataset, Audio-Vision MNIST, and MIMIC-III for mortality and ICD-9 prediction. Across all datasets, PRIMO obtains performance comparable to unimodal baselines when a modality is fully missing and to multimodal baselines when all modalities are available. PRIMO quantifies the predictive impact of a modality at the instance level using a variance-based metric computed from predictions across latent completions. We visually demonstrate how varying completions of the missing modality result in a set of plausible labels.
💡 Research Summary
The paper tackles a pervasive problem in multimodal learning: the frequent absence of one or more modalities during training and inference. While recent multimodal large language models (MLLMs) assume that all modalities are always available, real‑world scenarios—especially in healthcare—often involve incomplete data due to cost, risk, or asynchronous collection. The authors ask a precise question: for any given multimodal instance, how does a missing modality affect the model’s prediction? Rather than trying to reconstruct the missing input, they propose to model the information that the missing modality could contribute, using a continuous latent variable z.
Model Overview (PRIMO).
Each example consists of an observed modality xₒ, a second modality xₘ that may be missing, and a label y. PRIMO introduces a latent variable z∈ℝᵈ that captures the predictive content of xₘ. The conditional independence assumption y ⟂ xₘ | (xₒ, z) leads to two predictive distributions:
- When both modalities are present: p(y|xₒ,xₘ)=∫pθ(y|xₒ,z) pω(z|xₒ,xₘ) dz
- When xₘ is missing: p(y|xₒ)=∫pθ(y|xₒ,z) pω(z|xₒ) dz
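Both integrals are typically approximated by Monte Carlo averaging over samples from the prior. The following is a minimal numpy sketch of the missing-modality case; the logistic `predictive_head` and all dimensions are illustrative placeholders, not the paper's learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_head(x_o, z):
    """Stand-in for p_theta(y=1 | x_o, z): a fixed logistic classifier
    over the observed modality and the latent. Weights are placeholders."""
    logit = x_o @ np.ones(x_o.shape[-1]) + z @ np.ones(z.shape[-1])
    return 1.0 / (1.0 + np.exp(-logit))

def marginal_prediction(x_o, prior_mean, prior_std, K=1000):
    """Monte Carlo estimate of p(y=1 | x_o) =
    E_{z ~ p_omega(z|x_o)}[ p_theta(y=1 | x_o, z) ]."""
    z = rng.normal(prior_mean, prior_std, size=(K, prior_mean.shape[-1]))
    probs = predictive_head(x_o, z)   # one probability per latent completion
    return probs.mean()
```

When xₘ is observed, the same averaging applies with samples drawn from pω(z|xₒ,xₘ) instead.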
Because the integrals are intractable, the authors employ variational inference. For complete examples they define a variational posterior qϕ(z|xₒ,xₘ,y) and a conditional prior pω(z|xₒ,xₘ); for missing‑modality examples they use qϕ(z|xₒ,y) and pω(z|xₒ). Two ELBOs are derived (Equations 2 and 3) and jointly maximized across the whole dataset, allowing the same latent space to be used whether xₘ is observed or not. Importantly, the ELBOs contain no reconstruction term for xₘ; the only supervised signal is the log‑likelihood of y given (xₒ,z). This forces z to align with the discriminative decision boundary rather than merely modeling the input distribution, a key distinction from prior VAE‑based multimodal methods.
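Each ELBO has the familiar form E_q[log pθ(y|xₒ,z)] − KL(qϕ ‖ pω), with no reconstruction term for xₘ. A minimal numpy sketch of one such ELBO estimate under diagonal Gaussians; `log_lik_fn` is a hypothetical stand-in for the classifier's log-likelihood of y given (xₒ, z):

```python
import numpy as np

def kl_diag_gauss(mu_q, std_q, mu_p, std_p):
    """Closed-form KL( N(mu_q, diag std_q^2) || N(mu_p, diag std_p^2) )."""
    var_q, var_p = std_q**2, std_p**2
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p)**2) / var_p - 1.0)

def elbo_estimate(log_lik_fn, mu_q, std_q, mu_p, std_p, rng, n_samples=64):
    """Monte Carlo ELBO: E_{q_phi}[log p_theta(y|x_o,z)] - KL(q_phi || p_omega).
    Note there is no reconstruction term for the missing modality."""
    eps = rng.normal(size=(n_samples, mu_q.shape[-1]))
    z = mu_q + std_q * eps               # reparameterized samples from q_phi
    return log_lik_fn(z).mean() - kl_diag_gauss(mu_q, std_q, mu_p, std_p)
```

The complete-example ELBO (Equation 2) and the missing-modality ELBO (Equation 3) differ only in which posterior/prior pair is plugged in.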
Parameterization and Training Tricks.
Both the conditional priors pω and the variational posteriors qϕ are diagonal Gaussians whose means and variances are output by amortized neural networks. To avoid the “shift symmetry” where both priors could move together without affecting the KL term, pω(z|xₒ) is anchored to a standard normal N(0,I). The prior for the complete case, pω(z|xₒ,xₘ), is regularized toward pω(z|xₒ) via an explicit KL penalty (Equation 4). Posterior collapse is mitigated by applying batch normalization with a fixed scale γ to the posterior mean, encouraging a non‑trivial KL. The reparameterization trick enables back‑propagation through sampled z.
Inference and Impact Quantification.
At test time, the model draws K samples from the appropriate prior (pω(z|xₒ) if xₘ is missing, otherwise pω(z|xₒ,xₘ)) and averages the predictive probabilities pθ(y|xₒ,z). To measure how much a missing modality could change the prediction, the authors define a variance‑based metric
V = E_{z∼pω}[(pθ(y|xₒ,z) − p(y|xₒ))²],
i.e., the variance of the predictive probability across latent completions, where p(y|xₒ) is the marginal prediction averaged over the same samples. A high V flags instances whose prediction could change substantially if the missing modality were observed; a low V indicates the observed modality alone largely determines the prediction.
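In code, the metric is simply the sample variance of the per-completion probabilities. A minimal sketch, where `pred_fn` is any stand-in for pθ(y|xₒ,z) with xₒ held fixed:

```python
import numpy as np

def impact_variance(pred_fn, z_samples):
    """Instance-level impact of the missing modality: the variance of
    p_theta(y | x_o, z) across sampled latent completions z."""
    probs = np.array([pred_fn(z) for z in z_samples])
    return probs.var()
```

If every latent completion yields the same probability, V = 0 and the missing modality is predicted to be irrelevant for that instance; spread-out probabilities signal that the modality matters.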