Liquid AI has announced the release of LFM2.5-1.2B-Thinking, a language model (LLM) designed for advanced reasoning directly on devices.

## Key Features

This model stands out for:

* On-device operation, requiring only 900 MB of memory.
* Training specifically aimed at concise reasoning.
* Generation of internal thinking traces to improve the quality of responses.
* Effectiveness in problem solving, tool use, mathematics, and instruction following.
* Performance exceeding Qwen3-1.7B on several benchmarks, despite having roughly 40% fewer parameters.

LFM2.5-1.2B-Thinking offers significant advantages in speed and memory efficiency, surpassing both pure transformer models and hybrid architectures. It is available on Hugging Face and LEAP.

Large language models (LLMs) are becoming increasingly accessible, thanks in part to optimizations for execution on edge devices. This opens up new opportunities for artificial intelligence applications that require low latency and data privacy.