## Faster Finetuning for Embedding Models

Unsloth has announced support for finetuning embedding models, promising significant gains in speed and memory usage. The integration with Sentence Transformers makes it possible to customize models for tasks such as RAG (Retrieval-Augmented Generation) and semantic similarity.

## Technical Details

According to the announcement, Unsloth delivers 1.8x to 3.3x faster finetuning with 20% less VRAM consumption. EmbeddingGemma, Qwen3 Embedding, and other models are supported, and six notebooks have been released showing how to customize them for different use cases. The new version is compatible with Transformers v5.

Finetuning is a crucial step in machine learning because it adapts a pre-trained model to a specific task. It is particularly useful when only a limited dataset is available or when a model's performance needs to improve in a specific domain. Unsloth's announcement could simplify and speed up this process for embedding models.
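To make the workflow concrete, here is a minimal sketch using the standard Sentence Transformers v3 training API, which Unsloth's integration is reported to build on. The model name, dataset contents, and hyperparameters below are illustrative placeholders rather than details from the announcement; Unsloth's notebooks document the exact accelerated entry points.

```python
# Minimal sketch: finetuning an embedding model with the Sentence Transformers
# training API. Model, data, and hyperparameters are illustrative only.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Any supported embedding model could be used here (e.g. EmbeddingGemma or
# Qwen3 Embedding); all-MiniLM-L6-v2 is a small stand-in for demonstration.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# (anchor, positive) pairs: for RAG-style retrieval, anchors are queries and
# positives are passages that answer them. A real dataset would be far larger.
train_dataset = Dataset.from_dict({
    "anchor": [
        "How do I reset my password?",
        "What is the refund policy?",
    ],
    "positive": [
        "To reset your password, open Settings and choose 'Reset password'.",
        "Refunds are issued within 14 days of purchase on request.",
    ],
})

# In-batch negatives loss, a common choice for retrieval finetuning.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-embedder",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()

# After training, the model can score semantic similarity directly.
embeddings = model.encode(["reset my password", "get my money back"])
print(model.similarity(embeddings[0:1], embeddings[1:2]))
```

MultipleNegativesRankingLoss treats the other positives in each batch as negatives, so larger batch sizes tend to help retrieval quality; the VRAM savings Unsloth reports would, in principle, leave room for larger batches on the same hardware.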