Agent-Q: Fine-Tuning Large Language Models for Quantum Circuit Generation and Optimization

Notice: This research summary and analysis were automatically generated using AI. For authoritative details, please refer to the original arXiv source.

Large language models (LLMs) have achieved remarkable results on complex problems, including mathematics, coding, and the analysis of large bodies of scientific literature. Yet few works have explored the potential of LLMs in quantum computing. The most challenging problem is leveraging LLMs to automatically generate quantum circuits at scale. Fundamentally, existing pre-trained LLMs lack knowledge of quantum circuits. In this paper, we address this challenge by fine-tuning LLMs and injecting domain-specific knowledge of quantum computing. We describe Agent-Q, an LLM fine-tuning system to generate and optimize quantum circuits. In particular, Agent-Q implements mechanisms to generate training datasets and constructs an end-to-end pipeline to fine-tune pre-trained LLMs to generate parameterized quantum circuits for various optimization problems. Agent-Q provides 14,000 quantum circuits covering a large spectrum of the quantum optimization landscape: 12 optimization problem instances and their optimized QAOA, VQE, and adaptive VQE circuits. Based on this dataset, Agent-Q fine-tunes LLMs to construct syntactically correct parameterized quantum circuits in OpenQASM 3.0. We have evaluated the quality of the LLM-generated circuits and parameters by comparing them to the optimized expectation values and distributions. Experimental results show that Agent-Q outperforms several state-of-the-art LLMs and produces better initial parameters than random initialization. Agent-Q can be integrated into an agentic workflow, and the generated parameterized circuits with initial parameters can be used as a starting point for further optimization, e.g., as templates in quantum machine learning and as benchmarks for compilers and hardware.


💡 Research Summary

The paper introduces Agent‑Q, a framework that fine‑tunes large language models (LLMs) to generate and optimize quantum circuits. Recognizing that pre‑trained LLMs lack quantum‑specific knowledge, the authors first construct a comprehensive dataset of 14,000 optimized circuits covering twelve graph‑based combinatorial optimization problems (connected components, community detection, k‑clique, graph isomorphism, graph coloring, traveling salesman, weighted minimal maximal matching, vertex cover, edge cover, max‑flow, min‑cut, and hypergraph MaxCut). For each problem they generate circuits using QAOA, VQE, and Adaptive‑VQE, and store the optimal parameters. All circuits are expressed in OpenQASM 3.0, providing a hardware‑agnostic, parameter‑rich representation.
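To make the circuit representation concrete, here is a minimal sketch of what one such dataset entry might look like: a pure-Python helper that emits a single-layer QAOA MaxCut ansatz as OpenQASM 3.0 text. The function name `qaoa_maxcut_qasm` and the exact gate layout are illustrative assumptions, not the paper's generator; the point is the parameter-rich, hardware-agnostic form (gamma/beta declared as `input` angles) that the summary describes.

```python
def qaoa_maxcut_qasm(edges, n_qubits):
    """Emit an illustrative single-layer QAOA MaxCut ansatz in OpenQASM 3.0.

    gamma and beta are declared as `input` angles so the circuit stays
    parameterized, mirroring the parameter-rich representation in the dataset.
    This is a sketch, not the authors' actual generation code.
    """
    lines = [
        "OPENQASM 3.0;",
        'include "stdgates.inc";',
        "input angle[32] gamma;",
        "input angle[32] beta;",
        f"qubit[{n_qubits}] q;",
        f"bit[{n_qubits}] c;",
    ]
    # Uniform superposition over all qubits
    lines += [f"h q[{i}];" for i in range(n_qubits)]
    # Cost layer: one ZZ interaction per graph edge
    for u, v in edges:
        lines.append(f"cx q[{u}], q[{v}];")
        lines.append(f"rz(2 * gamma) q[{v}];")
        lines.append(f"cx q[{u}], q[{v}];")
    # Mixer layer: X rotations on every qubit
    lines += [f"rx(2 * beta) q[{i}];" for i in range(n_qubits)]
    lines.append("c = measure q;")
    return "\n".join(lines)
```

Keeping the circuits as plain OpenQASM text like this is what lets the same samples serve both as LLM training targets and as inputs to any OpenQASM-compatible toolchain.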

The dataset is designed for quality (includes optimal parameters), difficulty (hard optimization instances) and diversity (multiple problems and algorithms). It is publicly released on HuggingFace and GitHub. Using this data, the authors fine‑tune a pre‑trained transformer‑based LLM (e.g., GPT‑Neo or LLaMA). The fine‑tuning pipeline aligns textual prompts (problem description, objective) with OpenQASM output, incorporates a loss term that penalizes syntactic violations, and adds a post‑processing step that runs an OpenQASM parser to guarantee syntactic correctness.
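The post-processing step described above can be approximated with a few cheap structural checks before a full parse. The sketch below is a hypothetical pre-filter, not the OpenQASM parser the authors use; it only illustrates the idea of rejecting malformed generations early.

```python
import re


def passes_basic_qasm_checks(text):
    """Hypothetical post-processing pre-filter for generated OpenQASM 3.0.

    Rejects generations that fail cheap sanity checks (version header,
    statement terminators, a qubit declaration) before any full parse.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("OPENQASM 3.0"):
        return False
    # Every statement must end in ';' (block braces are also allowed)
    for ln in lines:
        if not ln.endswith((";", "{", "}")):
            return False
    # At least one qubit register must be declared
    return any(re.match(r"qubit(\[\d+\])?\s+\w+;", ln) for ln in lines)
```

In a real pipeline, circuits passing this filter would still go through a proper OpenQASM 3.0 parser, with failures either discarded or fed back as negative training signal.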

Evaluation focuses on three metrics: (i) syntactic correctness – the proportion of generated circuits that pass the OpenQASM 3.0 parser; (ii) expectation‑value proximity – how close the energy obtained by running the generated circuit with its parameters is to the true optimum; (iii) distribution alignment – KL‑divergence between the measurement probability distribution of the generated circuit and that of the fully optimized circuit. Agent‑Q achieves >96 % syntactic success, reduces average energy error to ~0.12 rad, and attains KL‑divergence below 0.08, outperforming several state‑of‑the‑art LLM baselines and random parameter initialization by a large margin.
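The third metric, distribution alignment, is standard KL divergence over bitstring measurement distributions. A minimal implementation, assuming distributions are given as bitstring-to-probability dicts (the epsilon smoothing is an assumption to avoid division by zero on disjoint supports):

```python
import math


def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two measurement distributions.

    p and q map bitstrings (e.g. "0110") to probabilities. eps smooths
    outcomes missing from q so the divergence stays finite.
    """
    keys = set(p) | set(q)
    return sum(
        p[k] * math.log((p[k] + eps) / (q.get(k, 0.0) + eps))
        for k in keys
        if p.get(k, 0.0) > 0.0
    )
```

A value near 0 means the generated circuit's output distribution is nearly indistinguishable from the fully optimized one, which is the regime the reported sub-0.08 figures fall into.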

Beyond raw generation, Agent‑Q can be embedded in an “agentic workflow”: a user supplies a high‑level problem description, the model returns a ready‑to‑run parametrized circuit, which can then be fed into downstream quantum‑machine‑learning pipelines, compiler benchmarking suites, or hardware validation tests. The authors argue that such templates accelerate the initialization of variational algorithms, mitigate barren‑plateau issues, and provide standardized test cases for compiler developers.
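The agentic loop just described can be sketched as a small piece of glue code. Everything here is an assumption about how such a workflow might be wired: the `generate`, `validate`, and `optimize` callables stand in for the LLM, a syntax checker, and a downstream variational optimizer, none of which are specified at this level in the summary.

```python
def agentic_workflow(problem_description, generate, validate, optimize,
                     max_attempts=3):
    """Hypothetical agentic loop: generate a circuit from a text description,
    retry on syntax failures, then hand the parameterized circuit (with its
    initial parameters) to a downstream optimizer as a warm start."""
    for _ in range(max_attempts):
        circuit = generate(problem_description)  # LLM call
        if validate(circuit):                    # e.g. OpenQASM parser check
            return optimize(circuit)             # e.g. classical VQE/QAOA loop
    raise RuntimeError(
        f"no syntactically valid circuit after {max_attempts} attempts"
    )
```

The retry loop reflects the evaluation's emphasis on syntactic correctness: even at >96 % parser success, a production workflow needs a fallback for the remaining failures.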

The discussion outlines future directions: scaling to circuits with hundreds of qubits, extending fine‑tuning to multi‑step hybrid algorithms, incorporating multimodal prompts (text, mathematical expressions, graph structures), and adapting the approach to other quantum programming frameworks (Qiskit, PennyLane, Cirq). The work positions Agent‑Q as the first large‑scale, open‑source effort to bring LLM‑driven automation to quantum circuit design, promising substantial gains in productivity for quantum algorithm developers and hardware engineers alike.

