GFlowPO: Generative Flow Network as a Language Model Prompt Optimizer

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Finding effective prompts for language models (LMs) is critical yet notoriously difficult: the prompt space is combinatorially large, and rewards are sparse because evaluating the target LM is expensive. Moreover, existing RL-based prompt optimizers often rely on on-policy updates and a meta-prompt sampled from a fixed distribution, leading to poor sample efficiency. We propose GFlowPO, a probabilistic prompt optimization framework that casts prompt search as a posterior inference problem over latent prompts, regularized by a meta-prompted reference-LM prior. In the first step, we fine-tune a lightweight prompt-LM with an off-policy Generative Flow Network (GFlowNet) objective, using a replay-based training policy that reuses past prompt evaluations to enable sample-efficient exploration. In the second step, we introduce Dynamic Memory Update (DMU), a training-free mechanism that updates the meta-prompt by injecting both (i) diverse prompts from a replay buffer and (ii) top-performing prompts from a small priority queue, thereby progressively concentrating the search on high-reward regions. Across few-shot text classification, instruction induction benchmarks, and question answering tasks, GFlowPO consistently outperforms recent discrete prompt optimization baselines.
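The Dynamic Memory Update mechanism described above can be sketched in a few lines: draw some prompts at random from the replay buffer for diversity, take the best prompts from the priority queue for exploitation, and splice both sets into the meta-prompt. The function name, the counts, and the `{EXEMPLARS}` placeholder below are illustrative assumptions, not the paper's exact interface.

```python
import heapq
import random


def dynamic_memory_update(meta_prompt_template, replay_buffer,
                          priority_queue, n_diverse=2, n_top=2, rng=None):
    """Training-free meta-prompt refresh (illustrative sketch of DMU).

    `replay_buffer` and `priority_queue` hold (reward, prompt) pairs.
    Mixes (i) randomly sampled prompts from the replay buffer with
    (ii) the highest-reward prompts from the priority queue, then
    injects them into the meta-prompt template.
    """
    rng = rng or random.Random()
    # (i) diversity: uniform sample from past evaluated prompts
    diverse = [p for _, p in
               rng.sample(replay_buffer, min(n_diverse, len(replay_buffer)))]
    # (ii) exploitation: top-reward prompts from the small priority queue
    top = [p for _, p in heapq.nlargest(n_top, priority_queue)]
    exemplars = "\n".join(f"- {p}" for p in diverse + top)
    return meta_prompt_template.replace("{EXEMPLARS}", exemplars)
```

Because the update only rewrites the meta-prompt text, no gradient step on the prompt-LM is required, which is what makes the mechanism training-free.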


💡 Research Summary

GFlowPO introduces a novel framework for automatic prompt optimization that treats the search for effective prompts as a Bayesian posterior inference problem. The posterior p(z | D, M) ∝ p(D | z)·p_ref(z | M) combines a data‑likelihood term (the performance of a prompt on a small training set D) with a language‑model prior p_ref conditioned on a meta‑prompt M. Because the posterior is highly multimodal and defined over a combinatorial space of discrete token sequences, the authors employ Generative Flow Networks (GFlowNets) to learn an approximate sampler p_θ(z | M) that draws prompts proportionally to an unnormalized reward R(z; M).
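In log space, the unnormalized reward factorizes as log R(z; M) = log A_D(z) + log p_ref(z | M). A minimal sketch, assuming the reference-LM log-probability is scored externally and using an illustrative additive smoothing `eps` (the paper's exact smoothing is not specified here):

```python
import math


def log_reward(prompt_logprob_under_ref, num_correct, num_examples, eps=1e-3):
    """Unnormalized log-reward log R(z; M) = log A_D(z) + log p_ref(z | M).

    `prompt_logprob_under_ref` is log p_ref(z | M) from the reference LM;
    `num_correct / num_examples` is the prompt's accuracy on the small
    training set D, smoothed by `eps` so that zero-accuracy prompts still
    receive a finite log-reward.
    """
    smoothed_acc = (num_correct + eps) / (num_examples + eps)
    return math.log(smoothed_acc) + prompt_logprob_under_ref
```

The prior term penalizes prompts the reference LM finds implausible given the meta-prompt, which keeps the sampler from drifting into degenerate token sequences that happen to score well on D.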

In STEP‑A, the reward is defined as R(z; M) = A_D(z)·p_ref(z | M), where A_D(z) is a smoothed accuracy count on D and p_ref is obtained from a reference LM. The GFlowNet sampler p_θ(z | M) is trained off‑policy with the VarGrad (global path‑consistency) loss, which for a batch z_{1:K} drawn from a behavior policy π penalizes the batch variance of ζ_k = log R(z_k; M) − log p_θ(z_k | M): each ζ_k is an estimate of the log‑partition function, so the variance vanishes exactly when p_θ(z | M) ∝ R(z; M). In STEP‑B, the training-free Dynamic Memory Update then refreshes the meta‑prompt M by injecting diverse prompts from the replay buffer and top‑performing prompts from a small priority queue, progressively steering sampling toward high‑reward regions.
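A scalar sketch of this VarGrad-style objective, in its standard formulation (the paper's exact variant may differ): each batch element yields a per-sample log-partition estimate, and the loss is the variance of those estimates across the batch.

```python
def vargrad_loss(log_rewards, log_pf):
    """VarGrad-style GFlowNet loss (standard form; illustrative sketch).

    `log_rewards[k]` is log R(z_k; M) and `log_pf[k]` is log p_theta(z_k | M)
    for a batch of prompts sampled from a behavior policy. Each difference
    zeta_k = log R - log p_theta estimates log Z; the loss is the batch
    variance of zeta, which is zero iff p_theta(z) is proportional to R(z).
    """
    zeta = [r - p for r, p in zip(log_rewards, log_pf)]
    mean = sum(zeta) / len(zeta)
    return sum((z - mean) ** 2 for z in zeta) / len(zeta)
```

Because the loss only needs log-probabilities of sampled prompts, the batch can come from any behavior policy, including replayed past evaluations, which is what enables the off-policy, sample-efficient training the abstract describes.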

