InclusionAI has announced the release of Ling-2.5-1T, a large language model (LLM) designed to improve accessibility and performance in the field of artificial intelligence.

Key Features of Ling-2.5-1T

  • Model size: Ling-2.5-1T boasts 1 trillion total parameters, with 63 billion active parameters.
  • Training corpus: The model was trained on 29 trillion tokens, an expanded corpus relative to the previous generation.
  • Architecture: It uses a hybrid linear attention architecture, optimized for high processing speed and handling contexts up to 1 million tokens.
  • Efficiency: A composite reward mechanism combining "Correctness" and "Process Redundancy" signals aims to balance reasoning efficiency against task performance.
  • Alignment: Refined alignment strategies, such as bidirectional RL feedback and agent-based constraint verification, improve performance in tasks such as creative writing and instruction following.
  • Agent Compatibility: Trained with Agentic RL in interactive environments, Ling-2.5-1T is compatible with agent platforms such as Claude Code, OpenCode, and OpenClaw.
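To make the composite reward idea above concrete, here is a minimal sketch of how a "Correctness" signal might be combined with a "Process Redundancy" penalty. All names, weights, and the exact formula are illustrative assumptions, not InclusionAI's published implementation.

```python
# Hypothetical composite reward: reward correct answers, discounted by how
# many redundant reasoning steps were used. Names and weights are assumptions.

def composite_reward(is_correct: bool, num_steps: int,
                     min_steps: int, redundancy_weight: float = 0.1) -> float:
    """Combine a correctness score with a process-redundancy penalty."""
    correctness = 1.0 if is_correct else 0.0
    # Process redundancy: fraction of steps beyond a minimal solution length.
    redundancy = max(0, num_steps - min_steps) / max(1, num_steps)
    return correctness - redundancy_weight * redundancy

# A concise correct answer scores higher than a verbose correct one:
print(composite_reward(True, num_steps=5, min_steps=5))   # 1.0
print(composite_reward(True, num_steps=20, min_steps=5))  # 0.925
```

Under this kind of scheme, the model is nudged toward shorter reasoning traces without sacrificing answer quality, which matches the efficiency-versus-performance trade-off the announcement describes.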

For those evaluating on-premise deployments, there are trade-offs to consider. AI-RADAR provides analytical frameworks at /llm-onpremise for evaluating these aspects.