A new large language model (LLM), Qwen3-Coder-Next REAP, has been released at a size of 48 billion parameters. A key feature of this release is its conversion to GGUF (GPT-Generated Unified Format), the file format used by llama.cpp and compatible local inference runtimes.

Model Details

The model is available on Hugging Face. The GGUF conversion makes it usable across a wide range of hardware, from consumer GPUs to CPU-only machines, which broadens its potential user base. The model has also undergone 40% REAP pruning; REAP (Router-weighted Expert Activation Pruning) is a compression technique that removes a fraction of the experts in a mixture-of-experts model, reducing parameter count and memory footprint while aiming to retain most of the original model's quality.
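
A quick way to try a GGUF build locally is through llama-cpp-python. The sketch below is illustrative only: the repository ID, file name, and generation parameters are assumptions, since the exact Hugging Face paths are not given here.

```python
# Minimal sketch: download a GGUF file from Hugging Face and run it locally
# with llama-cpp-python. The repo ID and file name below are placeholders,
# not the actual published paths.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repository and quantized file name -- replace with the real ones.
model_path = hf_hub_download(
    repo_id="example-org/Qwen3-Coder-Next-REAP-48B-GGUF",
    filename="qwen3-coder-next-reap-48b-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU if VRAM allows;
# set it to 0 for CPU-only inference.
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quantizations such as Q4_K_M trade some output quality for a much lower memory footprint, which is usually the deciding factor on consumer hardware.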

For those evaluating on-premise deployments, there are trade-offs between hardware requirements and performance. AI-RADAR offers analytical frameworks at /llm-onpremise for evaluating these aspects.
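
As a rough first-pass sizing exercise for the hardware side of that trade-off, the memory needed for a GGUF model can be approximated from its parameter count and the bits per weight of the chosen quantization. The figures below are approximations for a generic 48B-parameter model, not measurements of this specific release.

```python
# Back-of-the-envelope memory estimate for a 48B-parameter GGUF model at
# common llama.cpp quantization levels. Bits-per-weight values are approximate,
# and real deployments also need room for the KV cache and runtime overhead.
PARAMS = 48e9

QUANT_BITS_PER_WEIGHT = {
    "F16":    16.0,   # unquantized half precision
    "Q8_0":    8.5,   # near-lossless 8-bit
    "Q4_K_M":  4.8,   # popular quality/size compromise
}

for name, bpw in QUANT_BITS_PER_WEIGHT.items():
    gib = PARAMS * bpw / 8 / 2**30
    print(f"{name:>7}: ~{gib:.0f} GiB of weights")
```

This kind of estimate indicates whether a given quantization fits into available GPU memory at all; actual throughput and latency still need to be measured on the target hardware.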