## Multi-Dimensional Prompt Chaining for Compact Language Models

Small language models (SLMs) offer significant deployment advantages but often struggle to match the dialogue quality of larger models, especially in open-domain settings. A new study introduces a multi-dimensional prompt-chaining framework that targets three quality dimensions, Naturalness, Coherence, and Engagingness, to enhance dialogue quality. The framework was applied to two SLMs, TinyLlama and Llama-2-7B, and benchmarked against responses generated by substantially larger models, including Llama-2-70B and GPT-3.5 Turbo. Results show that the full framework improves response diversity by up to 29%, contextual coherence by up to 28%, and engagingness and naturalness by up to 29%. Notably, Llama-2-7B achieves performance comparable to these much larger models. These findings suggest that carefully designed prompt-based strategies provide an effective and resource-efficient pathway to improving open-domain dialogue quality in SLMs, opening new perspectives for using smaller models in applications that require complex, natural interactions.

## Background

Large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks, but their high computational cost and resource requirements limit their applicability in many scenarios. Methods that improve the performance of smaller models are therefore an area of great interest.
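
To make the prompt-chaining idea described above concrete, the following is a minimal Python sketch of dimension-by-dimension refinement. The per-dimension instructions, the chaining order, and the `generate` callable are illustrative assumptions, not the study's actual prompts or pipeline.

```python
from typing import Callable

# Hypothetical per-dimension refinement instructions; the study's actual
# prompt wording is not reproduced here.
DIMENSION_PROMPTS = {
    "coherence": "Revise the reply so it stays consistent with the dialogue history.",
    "naturalness": "Revise the reply so it reads like fluent, human-sounding conversation.",
    "engagingness": "Revise the reply to be more engaging, e.g. by inviting a follow-up.",
}

def chain_refine(generate: Callable[[str], str], history: str, user_turn: str) -> str:
    """Draft a reply, then refine it once per quality dimension.

    `generate` is any text-completion function (for example, a wrapper
    around a local TinyLlama or Llama-2-7B endpoint); it is a stand-in,
    not an API from the paper.
    """
    # Step 1: produce an initial draft conditioned on the dialogue context.
    draft = generate(
        f"Dialogue so far:\n{history}\nUser: {user_turn}\nAssistant:"
    )

    # Steps 2-4: one refinement pass per dimension, each pass chained on
    # the previous output rather than on the original draft.
    for dimension, instruction in DIMENSION_PROMPTS.items():
        draft = generate(
            f"Dialogue so far:\n{history}\nUser: {user_turn}\n"
            f"Candidate reply: {draft}\n"
            f"Instruction ({dimension}): {instruction}\nRevised reply:"
        )
    return draft
```

Chaining the passes sequentially, so that each dimension refines the previous pass's output, is one plausible reading of "prompt chaining" here; whether the study applies the dimensions in this order, or jointly, is not specified in the summary above.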