GEGO: A Hybrid Golden Eagle and Genetic Optimization Algorithm for Efficient Hyperparameter Tuning in Resource-Constrained Environments
Hyperparameter tuning is a critical yet computationally expensive step in training neural networks, particularly when the search space is high-dimensional and nonconvex. Metaheuristic optimization algorithms are often used for this purpose due to their derivative-free nature and robustness against local optima. In this work, we propose Golden Eagle Genetic Optimization (GEGO), a hybrid metaheuristic that integrates the population movement strategy of Golden Eagle Optimization with the genetic operators of selection, crossover, and mutation. The main novelty of GEGO lies in embedding genetic operators directly into the iterative search process of GEO, rather than applying them as a separate evolutionary stage. This design improves population diversity during search and reduces premature convergence while preserving the exploration behavior of GEO. GEGO is evaluated on standard unimodal, multimodal, and composite benchmark functions from the CEC2017 suite, where it consistently outperforms its constituent algorithms and several classical metaheuristics in terms of solution quality and robustness. The algorithm is further applied to hyperparameter tuning of artificial neural networks on the MNIST dataset, where GEGO achieves improved classification accuracy and more stable convergence compared to GEO and GA. These results indicate that GEGO provides a balanced exploration-exploitation tradeoff and is well suited for hyperparameter optimization under constrained computational settings.
💡 Research Summary
The paper introduces GEGO (Golden Eagle Genetic Optimization), a hybrid meta‑heuristic that tightly integrates the movement dynamics of Golden Eagle Optimization (GEO) with the evolutionary operators of a Genetic Algorithm (GA). GEO, a swarm‑intelligence method inspired by the hunting behavior of golden eagles, updates each agent’s position using an attack vector directed toward a randomly selected “prey” and a cruise vector that encourages lateral exploration. While GEO excels at global exploration, its population diversity tends to diminish in later iterations, leading to premature convergence. GA, on the other hand, maintains diversity through selection, crossover, and mutation but lacks a guided movement toward promising regions.
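The attack/cruise movement described above can be sketched in a few lines. This is a simplified illustration, not the paper's exact update rule: the true GEO cruise vector is drawn from the hyperplane perpendicular to the attack vector, which we approximate here by projecting a random direction onto that hyperplane, and the attack/cruise propensity schedules (`attack_w`, `cruise_w`) are treated as given constants.

```python
import numpy as np

def geo_step(x, prey, attack_w, cruise_w, rng):
    """One simplified GEO position update for a single agent.

    `attack_w` / `cruise_w` stand in for GEO's attack and cruise
    propensities; their iteration-dependent schedules are omitted here.
    """
    attack = prey - x                       # vector toward the selected prey
    norm_a = np.linalg.norm(attack)
    if norm_a < 1e-12:                      # agent already sits on the prey
        return x.copy()
    # Simplified cruise vector: random direction made perpendicular to attack.
    rand = rng.standard_normal(x.size)
    cruise = rand - (rand @ attack) / norm_a**2 * attack
    norm_c = np.linalg.norm(cruise)
    cruise_dir = cruise / norm_c if norm_c > 1e-12 else np.zeros_like(x)
    # Step = weighted pull toward prey + weighted lateral exploration.
    return x + rng.random() * attack_w * attack / norm_a \
             + rng.random() * cruise_w * cruise_dir
```

Setting `cruise_w = 0` recovers a pure attack move (straight toward the prey), while a large `cruise_w` emphasizes lateral exploration.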
GEGO addresses this imbalance by embedding GA operators directly into the GEO iterative loop. After a predefined number of GEO updates (the “genetic interval”), the continuous position vectors of all agents are temporarily encoded into chromosome representations (binary or real‑valued). The entire population then undergoes crossover and mutation, after which the chromosomes are decoded back to continuous positions. If a newly generated individual shows a better fitness than its parent, it replaces the parent in the population memory. This genetic phase does not replace GEO’s dynamics; instead, it supplements them, periodically injecting diversity without disrupting the directed search. The authors emphasize that the whole population, not just elite individuals, participates in the genetic step, which is intended to keep global diversity high throughout the run.
Key algorithmic parameters include the genetic interval (how often the GA step is triggered) and the crossover/mutation probabilities. In the experiments, intervals of 10–20 % of the total iterations, crossover rates of 0.7–0.9, and mutation rates of 0.01–0.05 yielded the best trade‑off between exploration and exploitation.
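The genetic-interval scheduling reduces to a simple modulus test over the iteration counter; the fixed-fraction interpretation of "10–20 % of the total iterations" below is an assumption about how the interval is derived from the budget.

```python
def genetic_trigger(t, total_iters, interval_frac=0.15):
    """True on iterations where GEGO's genetic phase fires, assuming the
    'genetic interval' is a fixed fraction of the total iteration budget
    (0.10-0.20 per the reported experiments)."""
    interval = max(1, int(interval_frac * total_iters))
    return t > 0 and t % interval == 0
```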
Performance evaluation consists of two parts. First, GEGO is benchmarked on the CEC2017 suite (30‑ and 50‑dimensional unimodal, multimodal, and composite functions, 30 problems total). Each problem is solved 30 times independently. Results show that GEGO outperforms pure GEO and pure GA in terms of average best‑found value, standard deviation, and success rate, especially on multimodal and composite functions where premature convergence is most problematic.
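The reporting protocol (30 independent runs per problem, then mean, standard deviation, and success rate of the best-found value) can be expressed generically. The `optimizer(problem, rng)` interface and the tolerance-based success criterion are hypothetical conventions for this sketch, not taken from the paper.

```python
import numpy as np

def run_stats(optimizer, problem, n_runs=30, tol=1e-8, seed=0):
    """Aggregate independent runs the way CEC-style results are reported:
    mean and (sample) std of the best value found per run, plus the
    fraction of runs that reach the optimum within `tol` (success rate).

    `optimizer(problem, rng)` -> best objective value is an assumed
    interface; each run gets its own independently seeded generator.
    """
    master = np.random.default_rng(seed)
    bests = np.array([
        optimizer(problem, np.random.default_rng(master.integers(2**32)))
        for _ in range(n_runs)
    ])
    return {
        "mean": float(bests.mean()),
        "std": float(bests.std(ddof=1)),
        "success_rate": float(np.mean(bests <= tol)),
    }
```

Comparing algorithms on mean alone hides instability; reporting std and success rate together is what exposes premature convergence on the multimodal and composite functions.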
Second, the algorithm is applied to hyper‑parameter tuning of neural networks on the MNIST digit classification task. The authors tune learning rate, batch size, number of hidden layers, activation functions, and other typical settings for a multilayer perceptron and a simple convolutional network. Under a strict budget of 200 fitness evaluations, GEGO achieves an average test accuracy of 98.4 %, compared with 97.6 % for GEO and 97.2 % for GA. Moreover, the learning curves exhibit lower variance, indicating more stable convergence.
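For the tuning task, each agent's continuous position must be decoded into a concrete hyperparameter setting before a network can be trained and scored. The mapping below is a minimal sketch: the specific ranges, the log-uniform learning-rate scale, and the categorical choices are illustrative assumptions, though the tuned quantities (learning rate, batch size, depth, activation) match those the summary lists.

```python
import numpy as np

# Illustrative categorical choices (not taken from the paper).
BATCH_SIZES = [32, 64, 128, 256]
ACTIVATIONS = ["relu", "tanh", "sigmoid"]

def decode(position):
    """Map an agent's position in [0, 1]^4 to one hyperparameter setting.

    Continuous genes are clipped into [0, 1) so categorical indices
    stay in range; the learning rate uses a log-uniform scale, which is
    standard practice for rates spanning orders of magnitude.
    """
    p = np.clip(position, 0.0, 1.0 - 1e-12)
    return {
        "learning_rate": 10 ** (-4 + 3 * p[0]),          # log-uniform in [1e-4, 1e-1]
        "batch_size": BATCH_SIZES[int(p[1] * len(BATCH_SIZES))],
        "hidden_layers": 1 + int(p[2] * 4),              # 1-4 hidden layers
        "activation": ACTIVATIONS[int(p[3] * len(ACTIVATIONS))],
    }
```

The fitness of an agent is then the validation accuracy (or loss) of a network trained with `decode(position)`, which is exactly why the 200-evaluation budget matters: every fitness call is a full training run.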
The main contributions are: (1) a novel, tightly coupled hybridization of a swarm‑based optimizer and an evolutionary algorithm, rather than a sequential or loosely coupled combination; (2) a population‑wide genetic update that continuously refreshes diversity; and (3) demonstration that the hybrid method remains effective under limited computational resources, a scenario common in edge or embedded AI deployments.
Limitations acknowledged by the authors include the overhead of encoding/decoding, the need to pre‑select genetic interval and operator probabilities, and the relatively modest scale of the experimental problems (CEC benchmarks and MNIST). Future work is suggested in adaptive interval scheduling, multi‑objective extensions, and validation on larger deep learning models (e.g., ResNet, Transformers) or real‑time cloud/edge environments.
Overall, GEGO offers a balanced exploration‑exploitation mechanism that leverages the directed movement of GEO while preserving the diversity benefits of GA, making it a promising tool for hyper‑parameter optimization when computational budgets are tight.