Beyond Parameter Finetuning: Test-Time Representation Refinement for Node Classification

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Graph Neural Networks frequently exhibit significant performance degradation in out-of-distribution test scenarios. While test-time training (TTT) offers a promising solution, the existing Parameter Finetuning (PaFT) paradigm suffers from catastrophic forgetting, hindering its real-world applicability. We propose TTReFT, a novel Test-Time Representation FineTuning framework that shifts the adaptation target from model parameters to latent representations. Specifically, TTReFT achieves this through three key innovations: (1) uncertainty-guided node selection for targeted interventions, (2) low-rank representation interventions that preserve pre-trained knowledge, and (3) an intervention-aware masked autoencoder that dynamically adjusts its masking strategy to accommodate the node-selection scheme. Theoretically, we establish guarantees for TTReFT in OOD settings. Empirically, extensive experiments across five benchmark datasets demonstrate that TTReFT achieves consistent and superior performance. Our work establishes representation finetuning as a new paradigm for graph TTT, offering both theoretical grounding and immediate practical utility for real-world deployment.


💡 Research Summary

The paper introduces TTReFT, a test‑time adaptation framework for graph neural networks (GNNs) that shifts the focus from updating model parameters to refining node representations. Traditional test‑time training (TTT) methods rely on Parameter Finetuning (PaFT), which updates the entire set of weights during inference on unlabeled target graphs. While PaFT can improve out‑of‑distribution (OOD) performance, it often suffers from catastrophic forgetting of the knowledge acquired on the source domain, leading to unstable or degraded predictions.

Inspired by recent successes of Representation Finetuning (ReFT) in natural language processing, the authors propose to keep all pretrained GNN parameters frozen and instead learn lightweight interventions on a sparse subset of node embeddings. TTReFT consists of three tightly coupled components:

  1. Uncertainty‑Guided Node Selection – For each node in the unlabeled target graph, predictive entropy is computed from the pretrained model’s softmax output. Nodes with high entropy are deemed uncertain and are probabilistically sampled as intervention targets. A smooth sigmoid‑based gating function controls the selection threshold, ensuring that only the most ambiguous nodes are adapted, which reduces computational overhead and avoids unnecessary disturbance of confident predictions.

  2. Low‑Rank Representation Intervention (LoReFT) – For the selected nodes, a low‑rank linear transformation is applied to their hidden representations at designated layers. Concretely, the refined embedding is

     \[
     \tilde{h}_v = h_v + R^{\top}\left(W h_v + b - R h_v\right),
     \]

     where \(R \in \mathbb{R}^{r \times d}\) is a low‑rank projection matrix with orthonormal rows and \(W \in \mathbb{R}^{r \times d}\), \(b \in \mathbb{R}^{r}\) are the lightweight trainable parameters. The frozen representation is thus edited only within an \(r\)-dimensional subspace, leaving the pretrained weights untouched.

  3. Intervention‑Aware Masked Autoencoder – A self‑supervised masked‑reconstruction objective trains the interventions on the unlabeled target graph, with a masking strategy that is dynamically adjusted to accommodate the node‑selection scheme.
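As an illustration of the uncertainty‑guided selection step, here is a minimal sketch in numpy. The function name, the threshold `tau`, and the gate temperature `temp` are hypothetical stand‑ins; the paper's exact gating function may differ, but the mechanism shown (entropy, sigmoid gate, Bernoulli sampling) follows the description above.

```python
import numpy as np

def select_uncertain_nodes(probs, tau=1.0, temp=0.1, rng=None):
    """Probabilistically pick high-entropy nodes as intervention targets.

    probs: (N, C) softmax outputs of the frozen pretrained GNN.
    tau:   entropy threshold (hypothetical hyperparameter).
    temp:  sigmoid-gate temperature; smaller values harden the threshold.
    Returns a boolean mask over the N nodes.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Predictive entropy per node: H(p) = -sum_c p_c log p_c
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Smooth sigmoid gate maps entropy to a selection probability
    gate = 1.0 / (1.0 + np.exp(-(entropy - tau) / temp))
    # Bernoulli sampling: confident (low-entropy) nodes are rarely touched
    return rng.random(probs.shape[0]) < gate
```

A confidently classified node (sharply peaked softmax) gets a near-zero gate value and is almost never selected, while a near-uniform prediction is selected most of the time, which matches the stated goal of leaving confident predictions undisturbed.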

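The low‑rank intervention itself can be sketched numerically as follows; shapes and the orthonormal‑rows assumption follow the standard LoReFT formulation, and the variable names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def loreft_intervention(h, R, W, b):
    """LoReFT-style low-rank edit of a frozen hidden representation.

    h: (d,)   hidden embedding of a selected node.
    R: (r, d) low-rank projection, ideally with orthonormal rows.
    W: (r, d) learned linear map; b: (r,) learned offset.
    Computes h' = h + R^T (W h + b - R h): only the r-dimensional
    subspace spanned by R's rows is modified.
    """
    return h + R.T @ (W @ h + b - R @ h)
```

When the rows of `R` are orthonormal (so `R @ R.T` is the identity), the edited embedding satisfies `R @ h' == W @ h + b` exactly, while the component of `h` orthogonal to that subspace passes through unchanged.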
