Gemma 3 Fine-tuning for Reasoning and Freedom

DavidAU has announced the release of 20 fine-tuned models based on the Gemma 3 architecture, ranging from 1 billion to 27 billion parameters. The models were fine-tuned on datasets derived from the outputs of GLM 4.7 Flash, GPT, Claude, and Gemini, with a particular focus on improving reasoning capabilities.

A distinctive aspect of these models is the 'Heretic' process, a de-censoring step that removes refusal and censorship restrictions. According to DavidAU, applying this step before further optimization leads to a significant improvement in performance.

The models are available in a Hugging Face collection at https://huggingface.co/collections/DavidAU/gemma-3-reasoning-thinking-models-incl-uncensored.