Automated optimization modeling via Large Language Models (LLMs) is emerging as a promising approach to assisting complex human decision-making. A new study introduces MIND (autoMated optImization modeliNg via a localizable error-Driven perspective), an error-driven learning framework.

Current Issues

The research highlights two fundamental limitations of existing approaches:

  • The scarcity of training problems that target specific error types.
  • The sparse reward signal that difficult problems provide during training.

MIND tackles these limitations by customizing the entire model training pipeline, from data synthesis to post-training. The approach builds on the observation that errors in optimization modeling follow a unique localizable pattern: they tend to remain confined to specific semantic segments rather than propagating through the entire solution, as sketched below.
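As a rough illustration of this localizable pattern, the following sketch splits a generated model into named semantic segments and validates each one independently. The segment names and the toy checker are hypothetical conveniences for illustration, not MIND's actual interface.

```python
# Minimal sketch of segment-level error localization (illustrative only;
# the segment layout and checker below are assumptions, not MIND's API).
from typing import Callable

# An optimization model split into semantic segments. Because modeling
# errors tend to stay confined to one segment, each segment can be
# validated, and repaired, on its own.
model_segments: dict[str, str] = {
    "variables":   "x >= 0; y >= 0",
    "objective":   "maximize 3*x + 5*y",
    "constraints": "x + 2*y <= 14; 3*x - y >= 0",
}

def localize_errors(
    segments: dict[str, str],
    check: Callable[[str, str], bool],
) -> list[str]:
    """Return the names of segments that fail validation.

    If errors do not propagate across segments, regenerating only the
    failing segments is enough; the rest of the solution is kept intact.
    """
    return [name for name, body in segments.items() if not check(name, body)]

# A toy checker: flag an objective that lacks an optimization sense keyword.
def toy_check(name: str, body: str) -> bool:
    if name == "objective":
        return body.startswith(("maximize", "minimize"))
    return True

if __name__ == "__main__":
    print(localize_errors(model_segments, toy_check))  # [] -> no errors found
```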

The MIND Approach

MIND exploits this pattern to construct a focused, high-density training corpus and proposes Dynamic Supervised Fine-Tuning Policy Optimization (DFPO) to tackle difficult problems through localized refinement. Experiments on six benchmarks demonstrate that MIND consistently outperforms state-of-the-art automated optimization modeling approaches.
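To make "localized refinement under sparse rewards" concrete, here is a hedged sketch of how a DFPO-style update might blend a policy optimization objective with a supervised loss restricted to the localized erroneous segment. This does not reproduce the paper's actual formulation; the reward threshold, the segment mask, and the REINFORCE-style term are all assumptions for illustration.

```python
# Hedged sketch of a DFPO-style update (assumed formulation, not the
# paper's): when the solver reward on a hard problem is sparse (near
# zero), fall back to a supervised loss masked to the localized erroneous
# segment; otherwise apply a plain policy gradient term.
import torch

def dfpo_step(
    logits: torch.Tensor,        # (seq_len, vocab) model outputs
    target_ids: torch.Tensor,    # (seq_len,) reference tokens
    segment_mask: torch.Tensor,  # (seq_len,) 1.0 on the erroneous segment
    reward: float,               # scalar episode reward from the solver
    reward_threshold: float = 0.0,
) -> torch.Tensor:
    log_probs = torch.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    if reward <= reward_threshold:
        # Sparse-reward regime: supervised fine-tuning confined to the
        # localized segment still supplies a dense learning signal.
        return -(token_lp * segment_mask).sum() / segment_mask.sum().clamp(min=1)
    # Otherwise, a REINFORCE-style policy gradient objective.
    return -reward * token_lp.mean()
```

Under this reading, solved problems continue to train the policy objective, while hard zero-reward problems still yield a dense gradient through the segment-masked supervised term, matching the stated motivation of rescuing sparse-reward cases via localized refinement.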