Combinatorial Optimization and LLMs: The DASH Framework

The use of Large Language Models (LLMs) is revolutionizing combinatorial optimization by enabling the automated generation of heuristics. This approach, known as LLM-Driven Heuristic Design (LHD), iteratively generates and refines solver heuristics to reach high performance.
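To make the LHD idea concrete, below is a minimal sketch of such an iterative loop. All names here (`propose_heuristic`, `evaluate`, the `greediness` parameter) are hypothetical placeholders, not the paper's API: in a real system the proposal step would prompt an LLM to mutate or recombine heuristic code, and evaluation would run the resulting solver on benchmark instances.

```python
import random

# Hypothetical stand-in for an LLM call: in a real LHD system this would
# prompt a model to rewrite heuristic code. Here it perturbs a numeric
# parameter so the loop runs end-to-end.
def propose_heuristic(parent_params):
    return {k: v + random.uniform(-0.1, 0.1) for k, v in parent_params.items()}

def evaluate(params, instances):
    # Toy objective (lower is better), standing in for running the solver
    # on benchmark instances and measuring final solution quality.
    return sum(abs(params["greediness"] - inst) for inst in instances)

def lhd_loop(instances, generations=20):
    best = {"greediness": random.random()}
    best_score = evaluate(best, instances)
    for _ in range(generations):
        candidate = propose_heuristic(best)     # LLM-generated variant
        score = evaluate(candidate, instances)  # score on the benchmark
        if score < best_score:                  # keep only improvements
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    print(lhd_loop(instances=[0.2, 0.5, 0.8]))
```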

However, existing LHD frameworks have notable limitations: solvers are evaluated solely on final solution quality, ignoring the convergence process and runtime efficiency, and adapting to new instance groups can be costly.

To overcome these limitations, Dynamics-Aware Solver Heuristics (DASH) has been proposed: a framework that co-optimizes solver search mechanisms and runtime schedules, guided by a convergence-aware metric, and thereby identifies solvers that are both efficient and high-performing. In addition, to reduce re-adaptation costs, DASH incorporates Profiled Library Retrieval (PLR), which archives specialized solvers during the evolutionary process and enables convenient warm starts for heterogeneous instance distributions.
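The article does not spell out the convergence-aware metric, so the sketch below only illustrates the general idea with a primal-integral-style score: instead of looking at the final objective alone, it accumulates the incumbent objective over the whole time budget, so a solver that reaches good solutions early scores better than one that only catches up at the deadline. The trace format and function names are assumptions for illustration, not DASH's actual definition.

```python
from typing import List, Tuple

# A trace is a list of (time, incumbent objective) pairs recorded during a run.
def final_quality_score(trace: List[Tuple[float, float]]) -> float:
    """Conventional LHD scoring: only the last incumbent objective matters."""
    return trace[-1][1]

def convergence_aware_score(trace: List[Tuple[float, float]], budget: float) -> float:
    """Average incumbent objective over the time budget (smaller is better):
    rewards solvers that converge early, not just ones that finish well."""
    area, (t_prev, obj_prev) = 0.0, trace[0]
    for t, obj in trace[1:]:
        area += obj_prev * (t - t_prev)   # step-wise incumbent between updates
        t_prev, obj_prev = t, obj
    area += obj_prev * (budget - t_prev)  # hold the last incumbent until the budget
    return area / budget

# Two solvers with the same final objective (10.0) but different dynamics:
fast = [(0.0, 100.0), (1.0, 12.0), (2.0, 10.0)]    # converges early
slow = [(0.0, 100.0), (8.0, 50.0), (9.5, 10.0)]    # improves only near the end
for name, trace in [("fast", fast), ("slow", slow)]:
    print(name, final_quality_score(trace),
          round(convergence_aware_score(trace, budget=10.0), 2))
```

Under this toy metric both traces tie on final quality, but the early-converging solver scores far better (19.2 vs 88.0), which is the kind of distinction a convergence-aware evaluation is meant to capture.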

Experimental results on four combinatorial optimization problems show that DASH improves runtime efficiency by more than a factor of three while surpassing the solution quality of state-of-the-art solvers across different problem scales. By enabling profile-based warm starts, DASH also maintains superior accuracy across different distributions and reduces LLM adaptation costs by over 90%.
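As a rough illustration of how a profile-based warm start could work, the sketch below archives each specialized solver with a small feature vector describing the instances it was evolved on, and retrieves the nearest match for a new instance group as the starting point for adaptation. The profile features, library entries, and nearest-neighbor matching are assumptions for illustration; DASH's actual PLR profiling and retrieval may differ.

```python
import math

# Hypothetical library: each entry pairs an instance profile (e.g., size,
# density) with the solver evolved for that distribution.
library = {
    "tsp_small_dense":  {"profile": (50.0, 0.9),  "solver": "heuristic_A"},
    "tsp_large_sparse": {"profile": (500.0, 0.2), "solver": "heuristic_B"},
}

def retrieve_warm_start(new_profile, library):
    """Return the archived solver whose profile is closest to the new
    instance group, to be used as the warm start for further adaptation."""
    name, entry = min(
        library.items(),
        key=lambda kv: math.dist(kv[1]["profile"], new_profile),
    )
    return name, entry["solver"]

print(retrieve_warm_start((480.0, 0.25), library))  # -> nearest archived solver
```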
