The UK is tightening its laws on AI-generated explicit content, making it a crime to create, or to request the creation of, sexually explicit material via AI. The communications regulator, Ofcom, has opened a formal investigation into Grok to verify compliance with user-protection rules. The crackdown follows the existing ban on sharing explicit deepfakes.
Elon Musk aims to monetize Grok, X's image generator, despite its ability to create non-consensual explicit images: non-paying users who attempt to generate nude images of women are now met with a paywall.
Nvidia's CEO, Jensen Huang, has criticized negative narratives around AI, calling them "extremely hurtful." Huang argues that science-fiction speculation about AI is disconnected from reality and fuels unjustified pessimism.
The Grok app from Elon Musk's xAI remains available on the Google Play Store despite policies that explicitly ban such apps. Content restrictions on Grok have recently been loosened, leading to the creation of non-consensual sexual imagery, including content involving minors. Google is not enforcing its own rules, while Apple, which also offers the app, has less stringent policies to begin with.
Apple and Google have entered a non-exclusive, multi-year partnership. Apple will use Gemini models and Google Cloud technology for its future foundation models, integrating Google's artificial intelligence into key features such as Siri.
The UK media regulator Ofcom has launched an investigation into X (formerly Twitter) following the discovery that the Grok chatbot generated thousands of sexualized images of women and children. The investigation aims to verify whether X has violated the UK's Online Safety Act, which requires platforms to block illegal content and protect children from pornography. Ofcom is concerned about the use of Grok to create and share illegal non-consensual intimate images and child sexual abuse material.
Chatbots are increasingly used as virtual companions, especially among teenagers. However, concerns are emerging about AI-induced delusions and false beliefs. Several families have filed lawsuits against OpenAI and Character.AI, claiming that the models' behavior contributed to the suicides of teenagers. New regulations are looming to curb the problematic use of these tools.
Large language models (LLMs) have become ubiquitous, but their internal complexity remains a mystery. New "mechanistic interpretability" techniques allow researchers to examine the inner workings of these models, identifying key concepts and tracing the path from prompts to responses. Companies like Anthropic, OpenAI, and Google DeepMind are pioneering these studies, aiming to better understand the limitations of LLMs and prevent unexpected behaviors.
A new hybrid framework leverages Large Language Models (LLMs) to enhance financial transaction analysis. The system uses LLM-generated embeddings to initialize lightweight transaction models, balancing accuracy and operational efficiency. The approach includes multi-source data fusion, noise filtering, and context-aware enrichment, leading to significant performance improvements.
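The core idea of using LLM-generated embeddings to initialize a lightweight model can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the embedding function is a deterministic hash-based stand-in for a real LLM embeddings call, and the nearest-centroid classifier, class names, and descriptions are all assumptions made here for demonstration.

```python
import hashlib
import math

def pseudo_embedding(text: str, dim: int = 64) -> list[float]:
    # Stand-in for an LLM embedding call: a deterministic,
    # hash-based unit vector so the sketch runs offline.
    raw = [int.from_bytes(hashlib.sha256(f"{i}:{text}".encode()).digest()[:4],
                          "big") / 2**32 - 0.5
           for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw)) or 1.0
    return [x / norm for x in raw]

class CentroidClassifier:
    """Lightweight nearest-centroid model whose centroids are
    initialized from embeddings of class descriptions, then refined
    with labeled transactions."""

    def __init__(self, class_descriptions: dict[str, str]):
        self.centroids = {label: pseudo_embedding(desc)
                          for label, desc in class_descriptions.items()}
        self.counts = {label: 1 for label in class_descriptions}

    def update(self, text: str, label: str) -> None:
        # Fold a labeled transaction into its class centroid
        # (running mean) -- the cheap "training" step.
        emb = pseudo_embedding(text)
        c, n = self.centroids[label], self.counts[label]
        self.centroids[label] = [(ci * n + ei) / (n + 1)
                                 for ci, ei in zip(c, emb)]
        self.counts[label] = n + 1

    def predict(self, text: str) -> str:
        # Classify by dot-product similarity to each centroid.
        emb = pseudo_embedding(text)
        return max(self.centroids,
                   key=lambda label: sum(x * y for x, y in
                                         zip(emb, self.centroids[label])))

clf = CentroidClassifier({
    "groceries": "supermarket and food purchases",
    "travel": "flights, hotels and ride hailing",
})
clf.update("UBER TRIP 8842", "travel")
```

In a pipeline of this shape, descriptions and transactions would pass through the embedding model once; inference afterward is just a few dot products per transaction, which is where the operational-efficiency gain comes from.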
Researchers introduce TIME, a framework that enhances large language models (LLMs) by making them more sensitive to temporal context. TIME allows models to trigger explicit reasoning based on temporal and discourse cues, optimizing efficiency and accuracy. The framework was evaluated with TIMEBench, a specific benchmark for dialogues with temporal elements, demonstrating significant improvements over baseline models.
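The cue-triggered routing idea can be illustrated with a toy gate. The cue lexicon and the two-way routing policy below are illustrative assumptions; the summary does not specify which cues TIME actually detects or how it invokes the reasoning step.

```python
import re

# Hypothetical cue lexicon: TIME's real temporal and discourse cues
# are not given in this summary, so this only sketches the idea.
TEMPORAL_CUES = re.compile(
    r"\b(yesterday|today|tomorrow|last (week|month|year)|"
    r"next (week|month|year)|before|after|since|until|ago|in \d{4})\b",
    re.IGNORECASE,
)

def route(turn: str) -> str:
    # Trigger the costly explicit-reasoning path only when the turn
    # carries a temporal cue; otherwise take the fast path.
    return "temporal_reasoning" if TEMPORAL_CUES.search(turn) else "fast_path"

print(route("What did she say yesterday?"))     # temporal_reasoning
print(route("What is the capital of France?"))  # fast_path
```

Gating in this way is what lets the framework trade off efficiency against accuracy: turns without temporal content skip the extra reasoning entirely.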
NAIAD, an AI system leveraging Large Language Models (LLMs) and external analytical tools for inland water monitoring, has been introduced. Designed for both experts and non-experts, NAIAD offers a simplified interface to transform natural language queries into actionable insights, integrating weather data, satellite imagery, and established platforms. Initial tests highlight its adaptability and robustness.
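The query-to-insight flow NAIAD describes can be sketched, very roughly, as a dispatcher over registered data sources. The source names, return values, and keyword-based selection below are all stand-ins; in the real system an LLM presumably performs the tool selection.

```python
from typing import Callable

# Hypothetical data-source registry: NAIAD's actual tools and routing
# logic are not described in this summary.
SOURCES: dict[str, Callable[[str], str]] = {
    "weather": lambda lake: f"7-day weather series for {lake}",
    "satellite": lambda lake: f"latest satellite scene for {lake}",
}

def answer(query: str, lake: str) -> list[str]:
    # Naive keyword dispatch standing in for the LLM's tool selection.
    wanted = [name for name in SOURCES if name in query.lower()]
    return [SOURCES[name](lake) for name in wanted]

print(answer("Show me weather and satellite data", "Lake Garda"))
```

The point of the interface is that the caller never names tools explicitly; the system maps the natural-language request onto whichever registered sources apply.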
The Claude language model is expanding into the healthcare and life sciences sectors. The goal is to provide advanced solutions for research, diagnostics, and patient care, leveraging artificial intelligence capabilities to improve efficiency and accuracy in these crucial fields.
Google has removed the AI Overview feature for specific health-related queries. This decision follows an investigation by the Guardian that revealed Google's AI was providing misleading information in response to health questions.
Google has announced a new protocol that allows merchants to offer discounts to users directly within AI Mode results. The initiative aims to simplify commercial interactions by leveraging artificial intelligence.
ChatGPT Health is launching: a new solution designed to securely connect health data and applications, with privacy protections and a physician-informed design. The goal is to provide a dedicated, reliable experience in the healthcare sector.
Tolan has built a voice-first AI companion using GPT-5.1, delivering low-latency responses, real-time context reconstruction, and memory-driven personalities. The aim is to create more natural and engaging conversations for the user.
OpenAI has announced a new offering for the healthcare sector, focused on enterprise-grade artificial intelligence. The solution is designed to support HIPAA compliance, reduce administrative burdens, and improve clinical workflows, opening new perspectives for innovation in the field of medicine.
Netomi scales enterprise AI agents using GPT-4.1 and GPT-5.2. The platform combines concurrency, governance, and multi-step reasoning for reliable production workflows. The goal is to provide robust and scalable solutions for enterprises looking to integrate AI into their operational processes.
OpenAI is reportedly asking contractors to upload samples of their past work. An intellectual property lawyer warns that this practice could expose the company to significant legal risks. The request raises questions about copyright management and intellectual property ownership.
Indonesian officials have temporarily blocked access to xAI’s chatbot Grok. The decision was made following the spread of sexualized deepfakes generated without consent. The block is temporary, pending further verification and adjustments.