📁 LLM

The LLM archive monitors model releases, quantization updates, reasoning capabilities, and real-world deployment implications for local and hybrid AI. We focus on what materially changes selection and operations: context windows, latency, memory footprint, licensing, and evaluation evidence across open and commercial families. This section is designed for teams that need dependable model intelligence, not hype cycles. Pair these updates with the LLM pillar and references to hardware constraints and framework integration.

AI's early-2025 spending spree featured massive raises and trillion-dollar infrastructure promises. By year's end, the hype had given way to a vibe check, with growing scrutiny of sustainability, safety, and business models.

2025-12-29 Source

Meta has announced significant updates to its large language models, describing them as a step toward more intelligent and adaptive systems.

2025-12-29 Source

A new approach to psychological analysis with large language models such as Llama is being explored. It combines multi-agent collaboration, cosine-similarity measures, and computational psychology to enhance AI-driven assessment.

2025-12-29 Source
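The cosine-similarity component mentioned above is simple to illustrate. The sketch below computes cosine similarity between two embedding vectors; the sample vectors standing in for two agents' trait assessments are invented for illustration, not taken from the cited work.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of two agents' assessments of the same subject;
# values near 1.0 indicate the agents broadly agree.
agent_1 = [0.8, 0.1, 0.5]
agent_2 = [0.7, 0.2, 0.6]
print(cosine_similarity(agent_1, agent_2))
```

In a multi-agent setup, a score like this can gauge how closely independent agents' analyses converge before aggregating them.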

Large reasoning models (LRMs) are commonly trained with reinforcement learning from verifiable rewards (RLVR) to strengthen their reasoning abilities. A new study examines how sample polarity shapes RLVR training dynamics and behaviors: positive samples sharpen existing correct reasoning patterns, while negative samples encourage exploration of new reasoning paths. Building on this, the work proposes A3PO, a token-level advantage-shaping method that delivers more precise advantage signals to key tokens across both polarities.

2025-12-29 Source
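The polarity-dependent, token-level idea behind advantage shaping can be sketched in a few lines. This is NOT the A3PO algorithm (the summary above gives no formulas); it is a toy illustration, under the assumptions that "key" tokens can be approximated as low-probability decision points and that positive and negative samples should weight them differently.

```python
def shape_token_advantages(token_probs, reward, key_scale=2.0):
    """Toy polarity-aware token-level advantage shaping (illustrative only).

    token_probs: per-token probabilities the policy assigned to its own
                 sampled tokens (hypothetical inputs).
    reward:      verifiable outcome reward; > 0 for a positive sample.
    """
    advantages = []
    for p in token_probs:
        # Treat low-probability tokens as "key" decision points.
        keyness = 1.0 - p
        if reward > 0:
            # Positive polarity: reinforce everywhere, with extra
            # emphasis on key tokens, sharpening correct patterns.
            adv = reward * (1.0 + key_scale * keyness)
        else:
            # Negative polarity: concentrate the penalty on key tokens,
            # pushing exploration away from the failed decision points.
            adv = reward * key_scale * keyness
        advantages.append(adv)
    return advantages

# A confident token (p=0.9) and a key, uncertain token (p=0.1):
print(shape_token_advantages([0.9, 0.1], reward=1.0))
print(shape_token_advantages([0.9, 0.1], reward=-1.0))
```

The design point the study highlights is the asymmetry: a uniform sequence-level advantage would treat every token alike, whereas polarity-aware token-level shaping routes most of the learning signal to the tokens that actually decided the outcome.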