Verity: Local AI Search for AI PCs

Verity is an application that offers Perplexity-style AI search and answer capabilities, but with fully local execution on AI PCs, leveraging CPU, GPU, and NPU acceleration for fast, private inference.

The application can be used through either a command-line interface (CLI) or a web interface.

Key Features

  • Fully Local Execution: Verity is designed to run entirely on AI PCs, with OpenVINO backends for Intel hardware (CPU / iGPU / NPU) and Ollama backends for CPU, CUDA, and Metal.
  • Privacy by Design: Search and inference can be fully self-hosted, ensuring data sovereignty.
  • SearXNG-Powered Search: Uses the SearXNG meta search engine, focused on privacy and self-hosting.
  • Fact-Based Answers: Designed to provide accurate and explorable answers.
  • OpenVINO and Ollama Model Support: Models can be served through either runtime.
  • Modular Architecture: Search, inference, and interface components can be configured independently, allowing flexibility and customization.
  • CLI, WebUI, and API Server Support: Various interaction options.
  • Jan-nano 4B Model: Ships with Jan-nano 4B as the default model, configurable to other models.
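
The features above imply a two-stage pipeline: a self-hosted SearXNG instance retrieves web results, and a locally served model turns them into a grounded answer. The following Python sketch illustrates one way such a pipeline can be wired, using SearXNG's JSON result format and Ollama's chat endpoint. The port numbers, the `jan-nano:4b` model tag, and the prompt format are assumptions for illustration, not Verity's actual implementation.

```python
import json
import urllib.parse
import urllib.request

SEARXNG_URL = "http://localhost:8080/search"    # assumed self-hosted SearXNG instance
OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "jan-nano:4b"                           # hypothetical tag for Jan-nano 4B

def search(query: str, limit: int = 5) -> list[dict]:
    """Query SearXNG's JSON API (format=json must be enabled in settings.yml)."""
    url = f"{SEARXNG_URL}?{urllib.parse.urlencode({'q': query, 'format': 'json'})}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("results", [])[:limit]

def build_prompt(query: str, results: list[dict]) -> str:
    """Assemble a prompt that grounds the answer in numbered search snippets."""
    context = "\n".join(
        f"[{i + 1}] {r.get('title', '')}: {r.get('content', '')} ({r.get('url', '')})"
        for i, r in enumerate(results)
    )
    return (
        "Answer the question using only the numbered sources below, "
        "citing them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

def answer(query: str) -> str:
    """Full pipeline: search locally, then run inference locally via Ollama."""
    prompt = build_prompt(query, search(query))
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Because every request stays on localhost, no query or document ever leaves the machine, which is the privacy property the feature list emphasizes.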

On-premise deployments involve trade-offs worth weighing; AI-RADAR offers analytical frameworks at /llm-onpremise for evaluating them.