Advancing Automated Algorithm Design via Evolutionary Stagewise Design with LLMs
With the rapid advancement of science and technology, problems in industrial scenarios are becoming increasingly difficult, posing significant challenges to traditional algorithm design. Automated algorithm design with LLMs has emerged as a promising solution, but the currently adopted black-box modeling deprives LLMs of any awareness of the intrinsic mechanisms of the target problem, leading to hallucinated designs. In this paper, we introduce Evolutionary Stagewise Algorithm Design (EvoStage), a novel evolutionary paradigm that bridges the gap between the rigorous demands of industrial-scale algorithm design and existing LLM-based design methods. Drawing inspiration from chain-of-thought (CoT) reasoning, EvoStage decomposes the algorithm design process into sequential, manageable stages and integrates real-time intermediate feedback to iteratively refine design directions. To further reduce the algorithm design space and avoid falling into local optima, we introduce a multi-agent system and a "global-local perspective" mechanism. We apply EvoStage to the design of two common types of optimizer: parameter configuration schedules of the Adam optimizer for chip placement, and acquisition functions of Bayesian optimization for black-box optimization. Experimental results across open-source benchmarks demonstrate that EvoStage outperforms human-expert designs and existing LLM-based methods within only a few evolution steps, even achieving the historically best half-perimeter wire-length results on every tested chip case. Furthermore, when deployed in a commercial-grade 3D chip placement tool, EvoStage significantly surpasses the tool's original performance metrics, achieving record-breaking efficiency. We hope EvoStage can significantly advance automated algorithm design in the real world and help elevate human productivity.
💡 Research Summary
The paper introduces Evolutionary Stagewise Algorithm Design (EvoStage), a novel framework that combines large language models (LLMs) with evolutionary optimization to automate algorithm design for industrial‑scale problems. Existing LLM‑based design methods treat the algorithm as a black box, providing only a final performance score as feedback. This approach deprives the model of any insight into the underlying problem mechanics, leading to hallucinated designs and excessive evaluation costs—both unacceptable in real‑world settings where evaluations are expensive and data are scarce.
EvoStage addresses these limitations through four key components. First, an evolutionary population maintains a set of candidate algorithms; selection supplies the best current designs as references, focusing the LLM on the most relevant information. Second, the core “stagewise design” paradigm automatically decomposes each algorithm into K sequential stages. After executing a stage, the system collects intermediate execution information (I₁…I_K) and feeds it back to the LLM in real time. This mirrors chain‑of‑thought reasoning, allowing the model to correct its trajectory at each step rather than only after a full run.
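The stagewise loop can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `run_stage` and `llm_refine` are hypothetical stand-ins for the real stage executor and the LLM call, and the toy "refinement" rule exists only to make the control flow concrete.

```python
# Hypothetical sketch of the stagewise design loop: execute stage k, collect
# intermediate feedback I_k, and let the (stand-in) LLM correct the design
# before stage k+1 runs.

def run_stage(design, stage_idx):
    """Execute one stage and return intermediate feedback I_k (here: a toy score)."""
    return {"stage": stage_idx, "score": sum(design) / (stage_idx + 1)}

def llm_refine(design, feedback):
    """Stand-in for the LLM correcting the design from accumulated I_1..I_k."""
    # Toy rule: nudge the current stage's parameter toward the latest score.
    k = feedback[-1]["stage"]
    design = list(design)
    design[k] = 0.5 * (design[k] + feedback[-1]["score"])
    return design

def stagewise_design(initial_design, num_stages):
    design, feedback = list(initial_design), []
    for k in range(num_stages):
        feedback.append(run_stage(design, k))   # collect I_k in real time
        design = llm_refine(design, feedback)   # adjust before the next stage
    return design, feedback

design, info = stagewise_design([1.0, 1.0, 1.0], num_stages=3)
```

The key contrast with black-box feedback is visible in the loop body: the design is revised after every stage, not once after a full run.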
Third, a multi‑agent architecture assigns a dedicated "coder" LLM to each algorithm component (e.g., learning‑rate schedule, acquisition function) while a "coordinator" LLM reflects on the accumulated stage information and issues guidance for the next stage. By partitioning the design space into component‑level subspaces, the system dramatically reduces combinatorial explosion and enforces consistency across components.
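The coder/coordinator split might look roughly like the following. Both `coder` and `coordinator` are hypothetical placeholders for LLM calls; only the division of labor (one coder per component, one coordinator over all stage feedback) reflects the design described above.

```python
# Hypothetical sketch of the multi-agent split: each coder revises one
# component, while the coordinator turns accumulated stage info into
# shared guidance that keeps the components consistent.

def coder(component, guidance):
    """Stand-in for a component-specific LLM: returns a revised component spec."""
    return {**component, "revision": component.get("revision", 0) + 1, "note": guidance}

def coordinator(stage_info):
    """Stand-in for the coordinator LLM reflecting on I_1..I_k."""
    return f"after {len(stage_info)} stage(s), keep components consistent"

components = {"lr_schedule": {}, "acquisition": {}}
stage_info = []
for k in range(2):                      # two design stages
    stage_info.append({"stage": k})     # intermediate feedback I_k
    guidance = coordinator(stage_info)  # one reflection shared by all coders
    components = {name: coder(c, guidance) for name, c in components.items()}
```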
Fourth, EvoStage introduces a “global‑local perspective” mechanism inspired by fast‑and‑slow thinking. The local perspective operator performs the stagewise design described above, fine‑tuning each component step by step. The global perspective operators act in a single shot: (a) Global‑Explore prompts the coders to generate entirely new multi‑stage heuristics using a selected reference, injecting radical diversity; (b) Global‑Enhance asks the coders to improve an existing heuristic in one go, accelerating convergence. This dual‑track strategy balances thorough local refinement with occasional global jumps, preventing premature convergence to local optima.
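The dual-track strategy amounts to an evolutionary loop that stochastically chooses between the local operator and one of the two global operators. The sketch below is illustrative only: the operator bodies are toy numeric stand-ins for LLM-driven edits, and `p_global` is an assumed parameter, not one taken from the paper.

```python
# Hypothetical sketch of the global-local perspective mechanism: most steps
# apply local stagewise refinement; occasionally a single-shot global operator
# (explore or enhance) injects diversity or accelerates convergence.
import random

def local_perspective(candidate):
    """Stagewise fine-tuning (toy: small perturbation of each stage parameter)."""
    return [x + random.uniform(-0.1, 0.1) for x in candidate]

def global_explore(reference):
    """Single-shot generation of a fresh multi-stage heuristic from a reference."""
    return [random.uniform(0.0, 2.0) for _ in reference]

def global_enhance(candidate):
    """Single-shot improvement of an existing heuristic (toy: uniform rescale)."""
    return [1.1 * x for x in candidate]

def evolve(population, fitness, steps=5, p_global=0.3, seed=0):
    random.seed(seed)
    for _ in range(steps):
        best = min(population, key=fitness)      # selection: best as reference
        if random.random() < p_global:
            child = random.choice([global_explore, global_enhance])(best)
        else:
            child = local_perspective(best)
        population.append(child)
        # keep the population size fixed by dropping the worst candidate
        population = sorted(population, key=fitness)[:len(population) - 1]
    return population

pop = evolve([[1.0, 1.0], [0.5, 1.5]],
             fitness=lambda c: sum(abs(x - 0.8) for x in c))
```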
The framework is evaluated on two representative optimizer‑design tasks. In the first, EvoStage automatically creates learning‑rate and step‑schedule policies for the Adam optimizer used in global placement (GP) of VLSI chips. Across two open‑source benchmarks, the method surpasses the historically best GP results (e.g., DREAMPlace, Xplace) after only 25 evaluations, outperforming both human‑engineered schedules and prior LLM‑based systems (AlphaEvolve, EoH). When integrated into a commercial 3‑D placement tool, EvoStage reduces half‑perimeter wire‑length (HPWL) on the logic die by 9.24 % and cuts the number of optimization iterations by 52.21 %, demonstrating immediate industrial impact.
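For concreteness, a parameterized learning-rate schedule of the kind such a search operates over might look as follows. This is a generic warmup-plus-cosine-decay sketch under assumed parameter names (`lr_max`, `lr_min`, `warmup`); it is not the schedule EvoStage actually discovered for global placement.

```python
# Illustrative staged learning-rate schedule: linear warmup, then cosine decay.
# The schedule's shape parameters are exactly the kind of multi-stage design
# choices a stagewise search could tune.
import math

def lr_schedule(step, total_steps, lr_max=1e-2, lr_min=1e-4, warmup=0.1):
    """Return the learning rate for `step` out of `total_steps` iterations."""
    warm_steps = int(warmup * total_steps)
    if step < warm_steps:
        return lr_max * (step + 1) / warm_steps          # linear warmup
    t = (step - warm_steps) / max(1, total_steps - warm_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

lrs = [lr_schedule(s, 100) for s in range(100)]
```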
In the second task, EvoStage designs acquisition functions for Bayesian optimization (BO). Tested on synthetic functions with diverse landscapes and on neural‑architecture‑search benchmarks, the generated functions consistently beat classic expert‑crafted acquisitions (EI, PI, UCB) and recent LLM‑generated alternatives, achieving an average improvement of 4.7 % in optimization performance.
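The expert-crafted baselines mentioned above are standard closed-form functions of the Gaussian-process posterior. As a reference point, here are the classic EI and UCB formulas (for maximization); the `xi` and `beta` values are conventional defaults, not values from the paper.

```python
# Classic acquisition functions for Bayesian optimization, evaluated from the
# GP posterior mean `mu` and standard deviation `sigma` at a candidate point.
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected improvement over the incumbent `best` (maximization form)."""
    if sigma <= 0:
        return 0.0
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm_cdf(z) + sigma * norm_pdf(z)

def ucb(mu, sigma, beta=2.0):
    """Upper confidence bound: optimism proportional to posterior uncertainty."""
    return mu + beta * sigma
```

Functions in this family, composed and reweighted in novel ways, are what the LLM-designed acquisitions compete against.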
Compared with earlier evolutionary LLM approaches, EvoStage’s advantages are threefold: (1) real‑time intermediate feedback mitigates hallucination and accelerates learning; (2) the multi‑agent, component‑wise decomposition shrinks the search space and enforces design coherence; (3) the global‑local perspective provides both fine‑grained exploitation and occasional exploratory leaps, yielding rapid convergence under tight evaluation budgets.
The authors conclude that EvoStage bridges the gap between LLM capabilities and the stringent demands of industrial algorithm design. By aligning LLM reasoning with stagewise problem decomposition and by leveraging evolutionary dynamics, the framework can autonomously produce high‑quality algorithms that surpass human experts in both speed and quality. Future work will explore extending the paradigm to more complex multi‑stage pipelines (e.g., hyper‑parameter tuning workflows), incorporating meta‑learning to further reduce evaluation cost, and applying the method to other high‑impact domains such as drug discovery and materials design. Ultimately, EvoStage promises to democratize expert‑level algorithm design, dramatically boosting productivity across a wide range of real‑world applications.