Decoding the Local AI Revolution
AI-Radar is an independent Italian observatory, founded in 2025, dedicated to on-premise Large Language Models, local AI hardware, open-source frameworks, and the tools that let individuals and businesses run AI without sending data to the cloud.
Our Mission
We believe access to artificial intelligence should not depend on cloud subscriptions, data-sharing agreements, or expensive API contracts. Our mission is to make on-premise AI accessible — for the developer running Mistral on a mini-PC at home, the startup that can't afford to send customer data offshore, and the enterprise building sovereign AI infrastructure.
Every article, benchmark, and guide on AI-Radar is produced with this principle in mind: practical, privacy-respecting AI that runs on hardware you own.
Privacy First
Your data stays on your hardware. We cover models that never phone home.
Real Benchmarks
Measurements on consumer GPUs. No theoretical TFLOPS, just tokens per second on actual hardware (see the sketch after these cards).
Open Source Focus
We prioritise open weights, open tools, and community-validated approaches over proprietary black boxes.
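To make the "Real Benchmarks" approach concrete, here is a minimal sketch that measures generation throughput against a local Ollama server via its Python client. The model name is an illustrative assumption, and this is not AI-Radar's actual benchmark harness.

```python
# Minimal tokens-per-second measurement against a local Ollama server.
# Assumes the `ollama` Python package and a locally pulled model
# ("mistral" here is illustrative). Not AI-Radar's exact harness.
import ollama

response = ollama.generate(
    model="mistral",
    prompt="Explain KV caching in one paragraph.",
)

# Ollama reports the generated token count and generation time (nanoseconds).
tokens = response["eval_count"]
seconds = response["eval_duration"] / 1e9
print(f"{tokens / seconds:.1f} tokens/s")
```

Using Ollama's own `eval_count` and `eval_duration` fields avoids wall-clock noise from model loading and prompt evaluation, which is why we report generation throughput rather than end-to-end latency.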
Editorial Methodology
AI-Radar publishes two types of content. AI-generated articles are produced by an automated pipeline that monitors 14 RSS sources (Ars Technica, The Next Web, MIT Technology Review, and others), extracts key information, and generates bilingual Italian/English summaries via a locally run Ollama LLM (no data is sent to external APIs). These articles are clearly labelled "AI generated".
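The core of that loop can be sketched in a few lines. This assumes the `feedparser` and `ollama` Python packages; the feed URL and model name are illustrative, and AI-Radar has not published its exact pipeline code.

```python
# Minimal sketch of the RSS-to-local-LLM summarisation step. The feed URL
# and model name are illustrative assumptions, not the production pipeline.
import feedparser
import ollama

FEEDS = [
    "https://feeds.arstechnica.com/arstechnica/index",  # one of the 14 sources
]

for url in FEEDS:
    for entry in feedparser.parse(url).entries[:5]:
        # Ask a locally running model for a bilingual summary; nothing
        # leaves the machine hosting the Ollama server.
        reply = ollama.chat(
            model="mistral",  # any model pulled into the local Ollama
            messages=[{
                "role": "user",
                "content": "Summarise this article in two short paragraphs, "
                           "first in Italian, then in English:\n\n"
                           f"{entry.title}\n{entry.get('summary', '')}",
            }],
        )
        print(reply["message"]["content"])
```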
Editorial articles (labelled "Editoriale") are written by our human editorial team and reflect original analysis, opinion, and in-depth technical coverage that automated tools cannot produce. Separately, our credibility scoring system rates each source's reliability on a 0–100 scale; the resulting score is displayed on every article card.
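As an illustration of what a 0–100 credibility score can look like, here is a deliberately simple sketch. The signals and weights are hypothetical assumptions chosen for exposition; they are not AI-Radar's actual formula.

```python
# Hypothetical illustration of a 0-100 source-credibility score. The
# signals and weights below are invented for exposition; AI-Radar has
# not published its actual scoring formula.
def credibility_score(source_reputation: float,
                      has_named_author: bool,
                      cites_primary_sources: bool) -> int:
    """Combine simple signals into a 0-100 score (reputation in [0, 1])."""
    score = 60 * source_reputation
    score += 20 if has_named_author else 0
    score += 20 if cites_primary_sources else 0
    return round(min(score, 100))

print(credibility_score(0.9, True, False))  # -> 74
```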
We use a semantic similarity pipeline (ChromaDB + sentence-transformers) to deduplicate coverage and surface the most relevant articles for each topic. Our Ask Observatory feature lets readers query the full article archive using retrieval-augmented generation (RAG), all running on our own infrastructure.
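In outline, the deduplication step embeds each incoming article and checks its nearest neighbour before indexing it. The sketch below uses the real sentence-transformers and ChromaDB APIs, but the model name, collection name, and 0.15 distance threshold are illustrative assumptions rather than our published configuration.

```python
# Minimal sketch of embedding-based deduplication with sentence-transformers
# and ChromaDB. Model name, collection name, and the 0.15 distance threshold
# are illustrative assumptions, not AI-Radar's published configuration.
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection(
    "articles", metadata={"hnsw:space": "cosine"}  # use cosine distance
)

def add_if_new(article_id: str, text: str, threshold: float = 0.15) -> bool:
    """Index the article unless a near-duplicate is already stored."""
    embedding = model.encode(text).tolist()
    if collection.count() > 0:
        nearest = collection.query(query_embeddings=[embedding], n_results=1)
        if nearest["distances"][0][0] < threshold:
            return False  # near-duplicate of an already-indexed article
    collection.add(ids=[article_id], embeddings=[embedding], documents=[text])
    return True
```

The same collection can then back the retrieval step behind Ask Observatory: fetch the top-k stored articles for a reader's question and pass them as context to the locally run model.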
What We Cover
LLMs & Models
Llama, Mistral, Gemma, Phi, Qwen, DeepSeek, and the open-weights ecosystem
Hardware & Inference
GPUs, NPUs, mini-PCs, and edge AI boxes for local inference
Frameworks
LangChain, LlamaIndex, Ollama, vLLM, llama.cpp, and deployment tooling
Enterprise AI
On-premise deployments, governance, compliance, and sovereign AI
Market & Trends
Funding, acquisitions, benchmarks, and the competitive landscape
The Team
AI-Radar was founded and is operated by an Italian team of AI practitioners and software engineers with hands-on experience deploying LLMs in production, including in regulated environments such as manufacturing, pharma, and enterprise IT. The editorial team has direct experience with Ollama, vLLM, LangChain, and the hardware required to run capable models at the edge.
Our community has grown to over 160 members in Italy and abroad, sharing experience reports, hardware setups, and benchmarks. We read every message sent to editorial@ai-radar.it.