Local AI Workspace with Rust, Tauri, and SQLite

A developer has introduced Tandem, an open-source AI workspace designed to run entirely locally. The application pairs a Rust backend (Tauri v2) with a React + Vite frontend, and uses sqlite-vec to store embedding vectors directly in the SQLite database file, removing the need to run a separate vector store such as Qdrant or Chroma in Docker.
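
Keeping the vectors in the same .db file means the whole workspace, documents and index alike, can be backed up by copying one file. The sketch below shows the general pattern using the rusqlite and sqlite-vec crates, following the registration pattern from the sqlite-vec README; the table name, column name, and vector dimension are illustrative, since Tandem's actual schema is not described in the source.

```rust
// Sketch: storing and querying embeddings in a plain SQLite file with sqlite-vec.
// Assumes the `rusqlite` and `sqlite-vec` crates; schema details are illustrative.
use rusqlite::{ffi::sqlite3_auto_extension, Connection, Result};
use sqlite_vec::sqlite3_vec_init;

fn main() -> Result<()> {
    // Register sqlite-vec so every new connection can use the vec0 virtual table.
    unsafe {
        sqlite3_auto_extension(Some(std::mem::transmute(sqlite3_vec_init as *const ())));
    }

    let db = Connection::open("workspace.db")?;

    // One virtual table holds the vectors; no external vector database needed.
    db.execute_batch(
        "CREATE VIRTUAL TABLE IF NOT EXISTS note_embeddings
         USING vec0(embedding float[4]);",
    )?;

    // Insert an embedding as a JSON array (sqlite-vec also accepts raw BLOBs).
    db.execute(
        "INSERT INTO note_embeddings(rowid, embedding) VALUES (?1, ?2)",
        (1, "[0.1, 0.2, 0.3, 0.4]"),
    )?;

    // k-nearest-neighbour search: MATCH a query vector, order by distance.
    let mut stmt = db.prepare(
        "SELECT rowid, distance FROM note_embeddings
         WHERE embedding MATCH ?1 ORDER BY distance LIMIT 3",
    )?;
    let rows = stmt.query_map(["[0.1, 0.2, 0.3, 0.4]"], |row| {
        Ok((row.get::<_, i64>(0)?, row.get::<_, f64>(1)?))
    })?;
    for r in rows {
        let (id, dist) = r?;
        println!("rowid={id} distance={dist}");
    }
    Ok(())
}
```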

Architecture and Features

Tandem supports local models served through Ollama as well as OpenAI-compatible servers such as LM Studio or vLLM. It automatically detects the models that are available (Llama 3, Mistral, Gemma) and lets users switch between them without extra configuration, as the sketches below illustrate. Key features include native support for local models, zero telemetry, and an implementation of the Model Context Protocol (MCP) for integrating local tools. A "Packs" system installs prompts and skills via configuration files.
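
Ollama serves a documented HTTP API on localhost:11434, and its /api/tags endpoint lists every installed model, which is the natural way for a client to auto-detect what is available. The sketch below, assuming the reqwest crate (with its blocking and json features) and serde, illustrates that kind of detection; it is not Tandem's actual code, and the struct names are illustrative.

```rust
// Sketch: auto-detecting locally installed models via Ollama's documented
// /api/tags endpoint. Struct names here are illustrative, not Tandem's.
use serde::Deserialize;

#[derive(Deserialize)]
struct TagList {
    models: Vec<ModelEntry>,
}

#[derive(Deserialize)]
struct ModelEntry {
    name: String, // e.g. "llama3:8b", "mistral:7b", "gemma:2b"
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Ollama listens on localhost:11434 by default.
    let tags: TagList = reqwest::blocking::get("http://localhost:11434/api/tags")?.json()?;
    for m in tags.models {
        println!("available model: {}", m.name);
    }
    Ok(())
}
```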
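
For LM Studio or vLLM, the same switching logic can target the standard OpenAI-style /v1/chat/completions route those servers expose. A minimal request, with the port and model name as placeholders (LM Studio defaults to port 1234), might look like this:

```rust
// Sketch: one chat request against an OpenAI-compatible local server.
// Port and model name are placeholder values, not Tandem configuration.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp: serde_json::Value = client
        .post("http://localhost:1234/v1/chat/completions") // LM Studio's default port
        .json(&json!({
            "model": "mistral-7b-instruct",
            "messages": [{ "role": "user", "content": "Summarize my notes." }]
        }))
        .send()?
        .json()?;
    // In the OpenAI response schema, the reply text sits at choices[0].message.content.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```

Because both backends speak HTTP on localhost, swapping between Ollama and an OpenAI-compatible server reduces to changing the base URL and request shape.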

For teams evaluating on-premise deployments, the trade-offs deserve careful consideration. AI-RADAR offers analytical frameworks on /llm-onpremise to support these evaluations.