## Multi-agent LLM Systems: Complexity Doesn't Always Pay Off

Multi-agent systems that use large language models (LLMs) to reach consensus have become a hot topic. A recent study posted on arXiv, however, casts doubt on their real added value compared to simpler methods.

The study, DELIBERATIONBENCH, compared three deliberation protocols against a baseline strategy: selecting the best response from several outputs generated by a single model, i.e., best-of-n selection (sketched in code below). On a sample of 270 questions, each run with three independent seeds (810 evaluations in total), a surprising result emerged: the baseline achieved a success rate of 82.5%, far ahead of the best deliberation protocol at 13.8%. This roughly 6x performance gap is statistically significant (p < 0.01), and the baseline is also 1.5-2.5x cheaper to run.

These results challenge the assumption that greater complexity automatically translates into better results for multi-LLM systems. Going forward, it will be crucial to weigh the costs and benefits of such systems carefully against simpler, more efficient alternatives.
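For concreteness, here is a minimal sketch of the baseline strategy: sample several responses from one model and keep the highest-scoring one. The `generate` and `score` callables are hypothetical stand-ins for an LLM call and a judge or reward model; the study's actual selection mechanism is not detailed here.

```python
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str], float],
              n: int = 5) -> str:
    """Sample n candidate responses and return the one with the highest score."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy usage with stand-in functions; a real setup would call an LLM API
# for `generate` and a reward model or LLM judge for `score`.
if __name__ == "__main__":
    import random
    dummy_generate = lambda p: f"answer-{random.randint(0, 9)}"
    dummy_score = lambda r: float(r.split("-")[1])
    print(best_of_n("What is 2 + 2?", dummy_generate, dummy_score))
```

The appeal of this baseline is that it needs only one model and one scoring pass, which is where the reported 1.5-2.5x cost advantage over multi-round deliberation plausibly comes from.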
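The significance claim is also easy to sanity-check with a standard two-proportion z-test, assuming each method was scored on all 810 evaluations (an assumption; the article only reports the totals, and the paper's exact test is not stated):

```python
from math import sqrt

# Reported success rates; n = 810 evaluations per method is an assumption
# (270 questions x 3 seeds) based on the setup described in the article.
n = 810
p1, p2 = 0.825, 0.138

# Two-proportion z-test with a pooled standard error.
pooled = (p1 + p2) / 2                        # equal n, so pooling simplifies
se = sqrt(pooled * (1 - pooled) * 2 / n)
z = (p1 - p2) / se
print(f"z = {z:.1f} (|z| > 2.58 implies p < 0.01)")  # prints z = 27.7
```

A z-score this large would keep the gap significant even under much more conservative assumptions, for example treating the three seeds per question as correlated and shrinking the effective sample size accordingly.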