Setup Protocol: INITIATED

Complete each task in sequence; each step builds on the previous one.

Task 1: GPU Selection

Verify your NVIDIA GPU meets minimum requirements for local LLM hosting.

Requirements:

  • Minimum: RTX 4070 (12GB VRAM)
  • Recommended: RTX 4090 (24GB VRAM)
  • Enterprise: A6000 (48GB) / A100 (40-80GB VRAM)

Verify Installation:

nvidia-smi

Driver version 535+ recommended for CUDA 12 compatibility.
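
To check the GPU, VRAM, and driver programmatically, here is a minimal Python sketch; nvidia-smi's --query-gpu and --format flags are standard, and the thresholds to compare against are the requirements listed above:

import subprocess

# Query GPU name, total VRAM, and driver version from nvidia-smi.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, vram, driver = (field.strip() for field in line.split(","))
    print(f"{name}: {vram} VRAM, driver {driver}")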

Task 2: Docker & Compose

Install Docker and Docker Compose for container management.

Windows Installation:

# Download Docker Desktop from
# https://www.docker.com/products/docker-desktop/

# Verify installation (Compose v2 ships as the 'docker compose' subcommand)
docker --version
docker compose version

Linux Installation:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Log out and back in (or run 'newgrp docker') for the group change to take effect

Windows: Enable the WSL2 backend for best performance.
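
As a quick sanity check that both the CLI and the daemon are working, a small Python sketch ('docker info' exits non-zero when the daemon is unreachable):

import shutil
import subprocess

# Confirm the Docker CLI is on PATH, then ping the daemon.
if shutil.which("docker") is None:
    raise SystemExit("docker not found on PATH")

# 'docker info' fails if the daemon isn't running.
result = subprocess.run(["docker", "info"], capture_output=True, text=True)
print("Docker daemon reachable" if result.returncode == 0 else result.stderr.strip())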

Task 3: Ollama Service

Set up Ollama for local LLM model management and API access.

Installation:

# Download from https://ollama.ai

# Verify installation
ollama --version

# Pull recommended models
ollama pull llama3.2
ollama pull mistral
ollama pull nomic-embed-text

Start Service:

ollama serve

Ollama runs on port 11434 by default.
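
Once the service is running, you can smoke-test the API from Python using only the standard library. The /api/generate endpoint and payload shape follow Ollama's documented REST API:

import json
import urllib.request

# Non-streaming generation request against the local Ollama API.
payload = json.dumps({
    "model": "llama3.2",
    "prompt": "Reply with one short sentence.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])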

Task 4: Vector Store

Install ChromaDB for semantic search and RAG capabilities.

Installation:

pip install "chromadb>=0.4.0"   # quotes keep the shell from treating > as a redirect

Verify:

python -c "import chromadb; print(chromadb.__version__)"

ChromaDB creates a local directory for persistent storage.
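
To confirm persistence and querying work end to end, a minimal sketch using ChromaDB's 0.4+ client API; the collection name and documents here are arbitrary, and the first run may download Chroma's default embedding model:

import chromadb

# Persistent client writes to the directory given by path.
client = chromadb.PersistentClient(path="./chroma_store")
collection = client.get_or_create_collection(name="demo")

# Documents are embedded with Chroma's default embedding function.
collection.add(
    ids=["doc1", "doc2"],
    documents=["Ollama serves local LLMs.", "ChromaDB stores embeddings."],
)

results = collection.query(query_texts=["What stores vectors?"], n_results=1)
print(results["documents"])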

Task 5: Python Environment

Set up Python 3.9+ and install all required dependencies.

Install Dependencies:

pip install -r requirements.txt

Verify Python Version:

python --version

Python 3.9 or higher required.
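
For a programmatic check of both the interpreter version and a few key imports, a small sketch; the package names below are assumptions, since requirements.txt isn't shown here:

import sys

# Enforce the minimum interpreter version stated above.
assert sys.version_info >= (3, 9), "Python 3.9+ required"

# Assumed dependencies; adjust to match requirements.txt.
for pkg in ("fastapi", "uvicorn", "chromadb"):
    try:
        __import__(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: MISSING")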

Task 6: FastAPI Backend

Configure and launch the AI-Radar application backend.

Environment Configuration:

OLLAMA_BASE_URL=http://localhost:11434
LLM_MODEL=llama3.2
EMBEDDING_MODEL=nomic-embed-text
CHROMA_PERSIST_DIR=./chroma_store
DATABASE_URL=sqlite:///./ai_radar.db
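
A minimal sketch of how a backend might read these settings at startup, using os.getenv with matching defaults; the actual AI-Radar settings module may differ:

import os

# Illustrative only: mirrors the variables above with the same defaults.
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
LLM_MODEL = os.getenv("LLM_MODEL", "llama3.2")
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "nomic-embed-text")
CHROMA_PERSIST_DIR = os.getenv("CHROMA_PERSIST_DIR", "./chroma_store")
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./ai_radar.db")

print(f"Using {LLM_MODEL} via {OLLAMA_BASE_URL}")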

Start Application:

# Development
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Production (Docker)
docker-compose up -d

SUCCESS: Access at http://localhost:8000

> ADVANCED_GUIDES

Deep-dive documentation for complex scenarios, optimization, and production deployments.

> NO_GUIDES_FOUND

No advanced guides available yet. Complete the setup wizard above to get started.
