Here's an interesting new player in the crowded AI landscape:

LLaMA-Factory lets you train and fine-tune open-source LLMs and VLMs without writing any code.
Here's why it's a game changer for fine-tuning:
• Fine-tune 100+ LLMs/VLMs with built-in templates (LLaMA, Gemma, Qwen, Mistral, DeepSeek, and more).
• Zero-code CLI & Web UI for training, inference, merging, and evaluation.
• Supports full-tuning, LoRA, QLoRA, freeze-tuning, PPO/DPO, OFT, reward modeling, and multi-modal fine-tuning.
• Speeds up training/inference with FlashAttention-2, RoPE scaling, Liger Kernel, and vLLM backend.
• Integrates experiment tracking via LlamaBoard, TensorBoard, Weights & Biases, MLflow, and SwanLab.
It's 100% open source.
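
To give a feel for the zero-code workflow: training is driven by a YAML config passed to the CLI (e.g. `llamafactory-cli train my_config.yaml`). The sketch below is a minimal LoRA fine-tuning config modeled on LLaMA-Factory's example configs; treat the exact key names and values as assumptions to verify against the repo's examples.

```yaml
# Minimal LoRA SFT config (a sketch, based on LLaMA-Factory's example configs)
# Run with: llamafactory-cli train my_config.yaml

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft                 # supervised fine-tuning
do_train: true
finetuning_type: lora      # could also be: full, freeze
lora_target: all           # apply LoRA adapters to all linear layers

### dataset
dataset: alpaca_en_demo    # built-in demo dataset name
template: llama3           # chat template matching the base model
cutoff_len: 1024

### output
output_dir: saves/llama3-8b/lora/sft

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

The same config-driven approach covers the other stages the post mentions (DPO, PPO, reward modeling), by changing the `stage` and dataset fields rather than writing code.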

I'll test it ASAP and let you know what I think.

Thanks to Tech Titans.