LLMs and Optimization: A New Approach
Benchmarking in continuous black-box optimisation is hindered by the limited structural diversity of existing test suites such as BBOB. A recent study explores the use of large language models (LLMs) embedded in an evolutionary loop to design optimisation problems with clearly defined high-level landscape characteristics.
The LLaMEA Framework
Using the LLaMEA framework, an LLM is guided to generate problem code from natural-language descriptions of target properties, including multimodality, separability, basin-size homogeneity, search-space homogeneity and the contrast between global and local optima. Inside the loop, candidates are scored by ELA-based property predictors.
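The scoring step can be illustrated with a minimal sketch. The function below compares the property values predicted from a candidate's ELA features against the requested targets; the property names and the scoring rule are illustrative assumptions, not the study's exact formulation.

```python
import numpy as np

def property_match_score(predicted, target, weights=None):
    """Score a candidate landscape by how well its predicted high-level
    properties match the requested targets (illustrative sketch).

    predicted, target: dicts mapping property name -> value in [0, 1].
    Returns a score in [0, 1]; 1 means a perfect match.
    """
    keys = sorted(target)
    p = np.array([predicted[k] for k in keys], dtype=float)
    t = np.array([target[k] for k in keys], dtype=float)
    w = (np.ones(len(keys)) if weights is None
         else np.array([weights[k] for k in keys], dtype=float))
    # mean weighted absolute deviation, inverted into a match score
    return float(1.0 - np.average(np.abs(p - t), weights=w))

# Example: a candidate that hits multimodality but misses separability
target = {"multimodality": 1.0, "separability": 0.0}
predicted = {"multimodality": 0.9, "separability": 0.4}
score = property_match_score(predicted, target)
```

Such a scalar score lets the evolutionary loop rank LLM-generated candidates without ever evaluating them against a downstream optimiser.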
Diversification and Validation
An ELA-space fitness-sharing mechanism is introduced to increase population diversity and steer the generator away from redundant landscapes. A complementary basin-of-attraction analysis, statistical testing and visual inspection verify that many of the generated functions indeed exhibit the intended structural traits. A t-SNE embedding shows that they expand the BBOB instance space rather than forming an unrelated cluster. The resulting library provides a broad, interpretable and reproducible set of benchmark problems for landscape analysis and downstream tasks such as automated algorithm selection.
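The fitness-sharing idea can be sketched with classic niching arithmetic: candidates whose ELA feature vectors lie close together divide their fitness among themselves, so duplicated landscapes score lower. The kernel and the parameter values below are standard fitness-sharing choices, assumed for illustration rather than taken from the study.

```python
import numpy as np

def shared_fitness(fitness, ela_features, sigma=0.5, alpha=1.0):
    """Fitness sharing in ELA feature space (sketch).

    fitness: raw property-match scores, shape (n,)
    ela_features: ELA feature vectors of the candidates, shape (n, d)
    sigma: niche radius in ELA space; alpha: kernel shape parameter.
    """
    X = np.asarray(ela_features, dtype=float)
    f = np.asarray(fitness, dtype=float)
    # pairwise Euclidean distances between candidates in ELA space
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # triangular sharing kernel: sh(d) = 1 - (d/sigma)^alpha for d < sigma
    sh = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)  # each candidate counts itself (sh(0) = 1)
    return f / niche_counts

# Two identical landscapes split their fitness; a distinct one keeps its own.
raw = [1.0, 1.0, 1.0]
feats = [[0.0, 0.0], [0.0, 0.0], [10.0, 10.0]]
shared = shared_fitness(raw, feats)  # → [0.5, 0.5, 1.0]
```

Dividing by the niche count is what pushes the generator toward unexplored regions of ELA space: a new candidate in a crowded niche is penalised even when its raw property-match score is high.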