AI-Radar – Independent observatory covering AI models, LLMs, local AI, hardware, and trends

AI-Radar for on-prem LLMs & Home AI

The daily radar on models, frameworks, and hardware to run AI locally. LLMs, LangChain, Chroma, mini-PCs, and everything you need for a distributed "in-house" brain.

⚙️ Stack: Local LLMs · LangChain · Transformers · ChromaDB · MiniPCs · AI boxes
🛰️ Ask Observatory (Q&A + RAG) connected to the article archive.
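For readers wiring up something similar at home, here is a minimal sketch of a RAG-style Q&A layer over an article archive using ChromaDB (one component of the stack above). The collection name, sample documents, and the final generation step are illustrative assumptions, not the actual Ask Observatory implementation.

```python
# Minimal retrieval layer over an article archive with ChromaDB.
# Illustrative only: names and documents are made up for the example.
import chromadb

client = chromadb.Client()                      # in-memory; use PersistentClient(path=...) for disk
collection = client.create_collection("ai_radar_articles")

# Index a few article summaries (in practice these come from the feed pipeline).
collection.add(
    ids=["a1", "a2"],
    documents=[
        "Falcon-H1-Tiny: sub-100M parameter models for resource-constrained devices.",
        "vLLM-MLX on Apple Silicon shows up to 87% higher throughput than llama.cpp.",
    ],
    metadatas=[{"category": "LLM"}, {"category": "Hardware"}],
)

# Retrieve the most relevant articles for a question.
results = collection.query(query_texts=["Which models run well on small devices?"], n_results=2)

# The retrieved documents would then be passed as context to a local LLM
# (e.g. via Transformers or LangChain) to generate the answer.
for doc in results["documents"][0]:
    print("context:", doc)
```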

⚡ Trending Now

View All →

Latest Analysis & Radar News

AI-generated articles from feeds, with space for a human editorial layer above the raw content.

ThoughtSpot: AI agents for data analysis and decisions
Market · AI generated · ℹ️ AI News

ThoughtSpot: AI Agents for Data Analysis and Decisions

ThoughtSpot introduces a new generation of AI agents for data analysis, aiming to transform business intelligence from passive to active. These agents continuously monitor data, diagnose changes, and automate subsequent actions. The company emphasizes the importance of a solid semantic layer to interpret business context and ensure reliable decisions, promoting a 'decision intelligence' architecture with complete traceability.

2026-02-02 📰 Source
AI devices for automatic meeting transcription
Hardware · AI generated · ✅ TechCrunch AI

AI Notetaking Devices for Automatic Meeting Transcription

New physical devices use artificial intelligence to transcribe audio in real time, generating summaries and identifying action items during meetings. Some models also offer live translation, improving productivity and accessibility.

2026-02-02 📰 Source
Constructor Capital launches a $110 million fund for science startups
Market · AI generated · ℹ️ Tech.eu

Constructor Capital closes $110M Fund I for science-first founders

Constructor Capital, the Swiss venture arm of the Constructor Group, has closed its first fund at $110 million. It aims to invest in seed and Series A startups across deeptech, software, and edtech, with a focus on companies with strong scientific and technological foundations. The fund operates globally, with investments ranging from $1 million to $15 million.

2026-02-02 📰 Source
Incard raises £10 million to expand its financial platform
Market · AI generated · ℹ️ Tech.eu

Incard closes £10M Series A to expand its financial platform

Incard, a financial platform for high-growth digital companies, has raised £10 million in Series A funding. The company plans to expand into new markets, enhance its product offering, and invest in automation, AI-driven financial workflows, and team growth.

2026-02-02 📰 Source
Biorce raises $52 million for its AI clinical trial platform
Market · AI generated · ℹ️ Tech.eu

Biorce raises $52M to support global rollout of its AI clinical trial platform

Biorce, a health AI company focused on clinical trial design and execution, has closed a $52 million Series A round. The company will use the funds to expand its Aika platform, based on a dataset of approximately one million clinical trials, with the aim of optimizing the design and execution of clinical trials and reducing the development time of new therapies.

2026-02-02 📰 Source
ByteDance accelerates on Doubao in the crowded AI arena
Market · AI generated · ✅ DigiTimes

ByteDance races to capture closing AI window as Doubao scales up

ByteDance intensifies its efforts in the field of artificial intelligence with Doubao, in a context of increasing competition. The company aims to consolidate its position in the market, taking advantage of the opportunities offered by the current technological landscape.

2026-02-02 📰 Source
ASE ramps up advanced packaging in Taiwan, SPIL expands its central hub
Market · AI generated · ✅ DigiTimes

ASE ramps advanced packaging in Taiwan as SPIL scales central hub

Advanced Semiconductor Engineering (ASE) is ramping up advanced packaging capabilities in Taiwan. Concurrently, Siliconware Precision Industries Co., Ltd. (SPIL) is scaling its central hub. These initiatives strengthen the local ecosystem for semiconductor manufacturing.

2026-02-02 📰 Source
MrRoPE: a unified approach to extending the LLM context window
LLM · AI generated · 🏆 ArXiv cs.CL

MrRoPE: A Unified Approach to Extend LLM Context Window

A new study introduces MrRoPE, a generalized formulation for extending the context window of large language models (LLMs) based on a radix system conversion perspective. This approach unifies various existing strategies and introduces two training-free extensions, MrRoPE-Uni and MrRoPE-Pro, which improve 'train short, test long' generalization capabilities.
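For context on what a training-free extension looks like, here is a minimal numpy sketch of one of the existing strategies such unified formulations generalize: plain position interpolation, where positions are rescaled before computing rotary angles. The dimensions and lengths are arbitrary examples, and this is not the MrRoPE formulation itself.

```python
# Toy illustration of training-free context extension via RoPE position
# interpolation (a baseline that unified schemes like MrRoPE generalize).
import numpy as np

def rope_angles(positions, dim=64, base=10000.0):
    """Rotary angles m * theta_i for each position m and frequency index i."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # theta_i
    return np.outer(positions, inv_freq)                # shape (len(positions), dim // 2)

train_len, target_len = 4096, 16384                     # 'train short, test long'
positions = np.arange(target_len)

# Position interpolation: squeeze target positions back into the trained range
# so every rotary angle stays close to what the model saw during training.
scaled_positions = positions * (train_len / target_len)

print(rope_angles(positions).max())          # far outside the trained angle range
print(rope_angles(scaled_positions).max())   # stays within the trained range
```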

2026-02-02 📰 Source
LLMs: how linguistic alteration affects safety
LLM · AI generated · 🏆 ArXiv cs.CL

LLM Safety: Examining the Impact of Drunk Language Inducement

A new study explores how altering language, simulating a state of intoxication, can compromise the safety of large language models (LLMs). Through various induction techniques, researchers observed increased vulnerability to jailbreaking and privacy leaks, highlighting significant risks to the reliability of LLMs.

2026-02-02 📰 Source
Emotion recognition: domain knowledge beats Transformers
LLM · AI generated · 🏆 ArXiv cs.LG

Emotion Recognition: Domain Knowledge Outperforms Transformers

A study on the EAV dataset reveals that, for multimodal emotion recognition on small datasets, complex attention mechanisms (Transformers) underperform compared to modifications based on domain knowledge. Adding delta MFCCs to the audio CNN improves accuracy, as does using frequency-domain features for EEG.
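A minimal sketch of the delta-MFCC idea with librosa is below; the audio file, coefficient count, and how the stacked features feed the CNN are assumptions rather than the paper's exact pipeline.

```python
# Illustrative delta-MFCC feature extraction with librosa.
# "clip.wav" and n_mfcc=13 are placeholder choices, not values from the paper.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=16000)            # hypothetical audio clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # static MFCCs, shape (13, frames)
delta = librosa.feature.delta(mfcc)                   # first-order deltas (temporal dynamics)

# Stacking deltas on the static MFCCs is the kind of domain-knowledge tweak
# the study found more effective than adding attention layers.
features = np.concatenate([mfcc, delta], axis=0)      # shape (26, frames)
print(features.shape)
```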

2026-02-02 📰 Source
Six Sigma Agent: enterprise reliability for LLMs through consensus
LLM · AI generated · 🏆 ArXiv cs.AI

The Six Sigma Agent: Achieving Enterprise-Grade Reliability in LLM Systems Through Consensus

A new study introduces the Six Sigma Agent, an architecture to improve the reliability of large language models (LLMs) in enterprise settings. The approach is based on task decomposition, parallel execution across diverse LLMs, and a consensus voting mechanism to select the most accurate answer, drastically reducing the error rate.
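A minimal sketch of the consensus-voting step is below; `ask` is a hypothetical stand-in for calling one of several diverse LLMs, and the normalization and majority rule are simplified assumptions, not the paper's full architecture.

```python
# Toy consensus voting across several LLMs, in the spirit of the approach above.
from collections import Counter

def ask(model: str, question: str) -> str:
    """Placeholder for a real LLM call (API or local runtime); not implemented here."""
    raise NotImplementedError

def consensus_answer(question: str, models: list[str]) -> str:
    # Query every model, lightly normalize, then keep the majority answer.
    answers = [ask(m, question).strip().lower() for m in models]
    best, votes = Counter(answers).most_common(1)[0]
    if votes <= len(models) // 2:
        raise ValueError("no majority consensus; escalate or decompose the task further")
    return best
```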

2026-02-02 📰 Source
JAF: a Judge Agent Forest for AI refinement
Frameworks · AI generated · 🏆 ArXiv cs.AI

JAF: Judge Agent Forest for AI Refinement

JAF (Judge Agent Forest) is a framework that uses judge agents to evaluate and iteratively improve the reasoning processes of AI agents. JAF jointly analyzes groups of queries and responses, identifying patterns and inconsistencies to provide collective feedback that lets the primary agent improve its outputs. A locality-sensitive hashing (LSH) algorithm selects relevant examples, optimizing the exploration of reasoning paths.
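A minimal sketch of the LSH idea is below: random-hyperplane hashing over embedding vectors, so similar queries land in the same bucket. The embedding size, number of hash bits, and bucket lookup are illustrative assumptions, not JAF's actual selection procedure.

```python
# Toy random-hyperplane LSH for grouping similar query embeddings.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_bits = 384, 8                                # hypothetical embedding size / hash length
hyperplanes = rng.standard_normal((n_bits, dim))

def lsh_key(embedding: np.ndarray) -> str:
    """Hash an embedding to a bit-string: its sign pattern against random hyperplanes."""
    bits = (hyperplanes @ embedding) > 0
    return "".join("1" if b else "0" for b in bits)

# Bucket past example embeddings by key; nearby embeddings tend to share a key,
# so relevant precedents can be looked up without scanning everything.
buckets: dict[str, list[int]] = defaultdict(list)
example_embeddings = rng.standard_normal((100, dim))   # placeholder past examples
for i, emb in enumerate(example_embeddings):
    buckets[lsh_key(emb)].append(i)

new_query = rng.standard_normal(dim)
print("candidate example indices:", buckets.get(lsh_key(new_query), []))
```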

2026-02-02 📰 Source
LLMs and unexpected requests: when AI responds outside the box
LLM · AI generated · ℹ️ LocalLLaMA

LLM and unexpected requests: when AI responds outside the box

A Reddit post showcases an unexpected response from a large language model (LLM) to an initial request without a system prompt. The example highlights the difficulty of predicting LLM outputs in unstructured contexts and without preliminary instructions.

2026-02-02 📰 Source
Step-3.5-Flash: higher performance with fewer parameters
LLM · AI generated · ℹ️ LocalLLaMA

Step-3.5-Flash: outperforms with fewer parameters

The Step-3.5-Flash model, with a reduced active parameter architecture (11B out of 196B total), demonstrates superior performance compared to DeepSeek v3.2 in coding and agent benchmarks. DeepSeek v3.2 uses an architecture with many more active parameters (37B out of 671B total). The model is available on Hugging Face.
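For scale, the figures quoted above work out as follows (the parameter counts come from the post; the percentages are simple arithmetic):

```python
# Active-vs-total parameter arithmetic for the figures quoted above.
step_active, step_total = 11e9, 196e9   # Step-3.5-Flash
ds_active, ds_total = 37e9, 671e9       # DeepSeek v3.2

print(f"Step-3.5-Flash: {step_active / step_total:.1%} of weights active per token")
print(f"DeepSeek v3.2:  {ds_active / ds_total:.1%} of weights active per token")
print(f"DeepSeek v3.2 activates {ds_active / step_active:.1f}x more parameters per token")
```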

2026-02-02 📰 Source
Linux 6.19: stable release delayed due to the holidays
Other · AI generated · ✅ Phoronix

Linux 6.19: Stable Release Delayed Due to Holidays

The stable release of the Linux 6.19 kernel has been delayed by a week due to the year-end holiday period. Version 6.19-rc8 is now available, with the stable version expected next week. This delay is not due to critical bugs, but rather the need to manage downtime during the holidays.

2026-02-01 📰 Source
AIDA: pentesting platform with AI control and 400+ tools
Frameworks · AI generated · ℹ️ LocalLLaMA

AIDA: Pentesting platform with AI control and 400+ tools

A developer has created AIDA, an open-source pentesting platform that allows an AI agent to control over 400 security tools. The AI can execute tools, chain attacks, and document findings, all through a Docker container and a web dashboard.

2026-02-01 📰 Source
AI-sector layoffs: an excuse to downsize?
Market · AI generated · ✅ TechCrunch AI

AI layoffs or 'AI-washing'?

Many companies are announcing layoffs, citing the adoption of artificial intelligence systems as the reason. The article questions whether this is a valid justification or simply a pretext for reducing staff and operating costs, exploiting the current hype.

2026-02-01 📰 Source
Mistral AI announces Vibe 2.0: what we know
LLM · AI generated · ℹ️ LocalLLaMA

Mistral AI announces Vibe 2.0: what we know

Mistral AI has announced Mistral Vibe 2.0. The news was shared via Reddit, where users posted a link to the official announcement. No further details about the features or improvements in this release are available yet; community interest is high while more information is awaited.

2026-02-01 📰 Source
Musk joins forces: is a new tech conglomerate being born?
Market · AI generated · ✅ TechCrunch AI

Musk merges his companies: is a new tech conglomerate being born?

Elon Musk's integration of SpaceX, xAI, and Tesla evokes comparisons with the large conglomerates of the past, such as General Electric, or even with the robber barons of the Gilded Age. It remains to be seen whether this move will lead to innovative synergies or new management challenges.

2026-02-01 📰 Source
SpaceX plans a satellite constellation for orbital AI data centers
Other · AI generated · ℹ️ Tom's Hardware

SpaceX plans satellite constellation for orbital AI data centers

SpaceX has filed a plan with the FCC to build an orbital data center system based on a constellation of one million satellites. The infrastructure, designed for artificial intelligence workloads, is expected to deliver up to 100 gigawatts of compute capacity.

2026-02-01 📰 Source
GNOME Resources 1.10 monitors AMD Ryzen AI NPUs
Hardware · AI generated · ✅ Phoronix

GNOME Resources 1.10 Adds Monitoring Support For AMD Ryzen AI NPUs

GNOME Resources 1.10, the newest version of this system monitoring app, introduces monitoring support for AMD Ryzen AI NPUs. The application is now used by default on distributions like the upcoming Ubuntu 26.04 LTS. The update also brings other new capabilities.

2026-02-01 📰 Source
G2 acquires Capterra, consolidating the B2B software market
Market · AI generated · ℹ️ The Next Web

G2 acquires Capterra and consolidates the B2B software market

G2, a software review platform, is set to acquire Capterra, Software Advice, and GetApp from Gartner. The deal, expected to close in Q1 2026, will consolidate several B2B software discovery platforms under a single owner, reaching a large user base and a significant volume of verified reviews.

2026-02-01 📰 Source
OLMO 3.5: a hybrid model for efficient LLM inference on the way
LLM · AI generated · ℹ️ LocalLLaMA

OLMO 3.5: Hybrid Model for Efficient LLM Inference Coming Soon

AI2's OLMO 3.5 model combines standard transformer attention with linear attention using Gated DeltaNet. This hybrid approach aims to improve efficiency and reduce memory usage while maintaining model quality. The OLMO series is fully open source, from datasets to training recipes.

2026-02-01 📰 Source
Falcon-H1-Tiny: specialized 90M-parameter models
LLM · AI generated · ℹ️ LocalLLaMA

Falcon-H1-Tiny: Specialized Micro-Models at 90M Parameters

TII releases Falcon-H1-Tiny, a series of sub-100M parameter models challenging the scaling dogma. These specialized models exhibit a lower tendency to hallucinate compared to larger, general-purpose models. Specialized variants offer competitive performance in specific tasks like tool calling, reasoning, and code generation, opening new possibilities for inference on resource-constrained devices.

2026-02-01 📰 Source
Flexible chip: Shanghai creates a hair-thin processor
Hardware · AI generated · ℹ️ Tom's Hardware

Flexible Chip: Shanghai Scientists Create Hair-Thin Processor

Researchers in Shanghai have developed a flexible chip integrated into a fiber thinner than a human hair. The fiber integrates 100,000 transistors per centimeter and can withstand a crushing force of 15.6 tons. This innovation opens new perspectives for flexible and wearable electronics.

2026-02-01 📰 Source
CPU reliability: Intel catches up with AMD in the Puget Systems report
Hardware · AI generated · ℹ️ Tom's Hardware

CPU Reliability: Intel ties AMD in Puget Systems' Report

Puget Systems has released its annual reliability report on hardware components. The 2025 report highlights the components with the fewest failures during testing and deployment. Intel and AMD are neck and neck for CPU reliability, while Nvidia's Founders Edition GPUs show the lowest failure rates.

2026-02-01 📰 Source
Uncensored LLMs available on Hugging Face
LLM · AI generated · ℹ️ LocalLLaMA

Uncensored LLMs Available on Hugging Face

An overview of uncensored large language models (LLMs) available on the Hugging Face platform. The list includes variants of GLM, GPT OSS, Gemma, and Qwen, with different methods of removing restrictions. The article provides direct links to the models for easy access and experimentation.

2026-02-01 📰 Source
vLLM-MLX on Apple Silicon: up to 87% higher throughput
Hardware · AI generated · ℹ️ LocalLLaMA

vLLM-MLX on Apple Silicon: Up to 87% Higher Throughput

Recent research compares the performance of vLLM-MLX on Apple Silicon with llama.cpp, highlighting significantly higher throughput. The results suggest potential advantages in using Apple hardware for local inference of large language models (LLMs).

2026-02-01 📰 Source
Kanade Tokenizer: real-time voice cloning on CPU
Frameworks · AI generated · ℹ️ LocalLLaMA

Kanade Tokenizer: real-time voice cloning on CPU

A developer has presented Kanade Tokenizer, a voice cloning tool optimized for speed, with a better real-time factor than RVC. It also runs on CPU. A fork with a GUI based on Gradio and Tkinter is available.

2026-02-01 📰 Source
4chan and LLMs: can 'dirty' data improve model outputs?
LLM · AI generated · ℹ️ LocalLLaMA

Can 4chan data REALLY improve a model? Turns out it can!

An experiment showed how training a language model on a dataset derived from 4chan led to unexpected results. The model, Assistant_Pepe_8B, outperformed NVIDIA's Nemotron base model, despite being trained on data considered to be of lower quality. The results suggest that dataset quality may not be the only determining factor in an LLM's performance.

2026-02-01 📰 Source
CSPs increase AI CapEx thanks to a more stable supply chain
Market · AI generated · ✅ DigiTimes

CSPs ramp up AI capex as supply chain gains confidence

Cloud service providers (CSPs) are increasing investments in AI infrastructure, thanks to a more stable supply chain. This increase in CapEx is an indicator of the growing demand for computational resources for artificial intelligence and machine learning.

2026-02-01 📰 Source
Exposed Moltbook database: full control of AI agents
Other · AI generated · ℹ️ LocalLLaMA

Exposed Moltbook Database: Full Control of AI Agents

A vulnerability in the Moltbook database allowed anyone to take control of the AI agents on the platform. The incident raises serious concerns about security and access management in systems that orchestrate the execution of large language models.

2026-02-01 📰 Source
NanoChat: beating GPT-2 for under $100
LLM · AI generated · ℹ️ LocalLLaMA

NanoChat: Beating GPT-2 for Under $100

Andrej Karpathy demonstrated how to surpass GPT-2's performance with a model called NanoChat, trained in just three hours on 8 H100 GPUs. The project includes details on the architecture, optimizers used, data setup, and a script for reproducing the results.

2026-02-01 📰 Source
← Previous Page 17 / 54 Next →
View Full Archive 🗄️

AI-Radar is an independent observatory covering AI models, local LLMs, on-premise deployments, hardware, and emerging trends. We provide daily analysis and editorial coverage for developers, engineers, and organizations exploring local AI solutions.
