Context Engine is a self-hosted code search system built to optimize how context is used in large language models (LLMs).

Key Features

  • Hybrid Search: Combines dense embeddings with lexical search, AST parsing for symbols and imports, and optional micro-chunking.
  • Compatibility: Works with MCP clients like Cursor, Cline, Windsurf, Claude, and VS Code.
  • Infrastructure: Uses Qdrant for vectors, pluggable embedding models, and reranking. Installation is simplified via Docker Compose.
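The hybrid search described above has to merge results from the dense and lexical retrievers into a single ranking. The source does not say how Context Engine does this, but a common technique for this kind of fusion is reciprocal rank fusion (RRF); the sketch below is a hypothetical illustration, not the project's actual code (the function name, chunk ids, and the k=60 constant are all assumptions):

```python
def rrf_merge(rankings, k=60):
    """Reciprocal rank fusion: merge ranked lists of chunk ids.

    Each chunk's score is the sum of 1 / (k + rank + 1) over every
    list it appears in, so items ranked highly by several retrievers
    float to the top of the merged ranking.
    """
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical output of a dense-embedding retriever and a lexical retriever:
dense = ["chunk_a", "chunk_b", "chunk_c"]
lexical = ["chunk_b", "chunk_d", "chunk_a"]
merged = rrf_merge([dense, lexical])
# chunk_b ranks first: it places well in both lists.
```

RRF is popular for this job because it needs no score normalization: dense cosine similarities and lexical scores live on incomparable scales, but ranks are always comparable.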

Motivation

The need arises from the difficulty of managing context in large language models: you either include entire code repositories, overloading the model, or hand-pick files, risking missing crucial information. Context Engine aims to solve this problem with a tool that runs locally, is compatible with different models, and does not require sending code to external services.

The code is available on GitHub for those who want to contribute or simply examine the project more closely.