Hong Kong’s privacy watchdog has warned of the potential misuse of the artificial intelligence (AI) chatbot Grok, developed by Elon Musk’s xAI: using its image-generation function to create indecent or malicious content could amount to a criminal offence. The warning follows concerns about Grok’s image-editing function, which allowed users to digitally “undress” real people.
OpenAI has launched a version of ChatGPT designed to answer health-related questions. The initiative stems from the observation that many users already turn to artificial intelligence as a source of medical information, a confidant, or a way to get a second opinion; the company has decided to capitalize on this trend with a dedicated product.
Artificial Intelligence (AI) is ubiquitous, from content suggestions on streaming platforms to digital advertising. However, generative AI represents a significant evolution, opening new frontiers in automation and content creation. An article by The Next Web explores this paradigm shift, highlighting how generative AI is redefining the technological landscape and its future applications.
Google is inviting Gemini users to allow the chatbot to access their Gmail, Photos, Search history, and YouTube data in exchange for potentially more personalized responses. The company states that private data will remain private and will not be used for model training.
A 2025 Google survey reveals that artificial intelligence tools are increasingly used for learning. Students and teachers are emerging as the biggest adopters of these new technologies, opening new frontiers in education and personal training. The survey highlights how AI is becoming a valuable resource for acquiring new skills and knowledge.
New research reveals how large language models (LLMs) are susceptible to "jailbreak" techniques that use culturally structured narratives. The attack, called "Adversarial Tales", exploits cyberpunk elements to induce models to perform harmful analyses by passing them off as narrative interpretations. The study highlights a widespread vulnerability and the need to better understand how models interpret and respond to such stimuli.
A new study questions the effectiveness of multi-agent systems based on large language models (LLMs). The findings show that selecting the best response from a single model significantly outperforms complex deliberation protocols, with a 6x performance gap and lower computational costs. The research challenges the assumption that increased complexity automatically leads to better results in these systems.
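The single-model approach the study found superior is essentially best-of-N selection: sample several candidate answers from one model and keep the highest-scoring one, instead of running a multi-agent debate. A minimal sketch, with placeholder functions standing in for the model call and the scoring step (both hypothetical, not from the study):

```python
import random

def generate_responses(prompt, n=5, seed=0):
    """Stand-in for sampling n candidate answers from a single LLM.
    A real system would call a model API here; this just fabricates
    candidates with a deterministic pseudo-quality score."""
    rng = random.Random(seed)
    return [f"answer-{i} (quality={rng.random():.2f})" for i in range(n)]

def score(response):
    """Stand-in for a verifier or reward model that rates a candidate."""
    return float(response.split("quality=")[1].rstrip(")"))

def best_of_n(prompt, n=5):
    """Pick the highest-scoring candidate from a single model's samples."""
    candidates = generate_responses(prompt, n)
    return max(candidates, key=score)
```

The appeal of this protocol is its cost profile: N independent samples plus N cheap scoring passes, with no multi-round deliberation between agents.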
Spectral Generative Flow Models (SGFM) are proposed as an alternative to transformer-based large language models. By leveraging constrained stochastic dynamics in a multiscale wavelet basis, SGFM offers a generative mechanism grounded in continuity, geometry, and physical structure, promising long-range coherence and multimodal generality.
A new framework, Explanation-Guided Training (EGT), promises to improve the interpretability and consistency of early-exit neural networks. EGT aligns attention maps of intermediate layers with the final exit, optimizing accuracy and consistency. Results show a 1.97x inference speedup while maintaining 98.97% accuracy and improving attention consistency by 18.5%. This approach makes early-exit networks more suitable for explainable AI applications in resource-constrained environments.
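The core idea of aligning intermediate attention with the final exit can be sketched as an auxiliary loss term added to each exit's task loss. This is an illustrative reconstruction, not the paper's exact formulation; the mean-squared-error alignment term and the weighting factor `lam` are assumptions:

```python
import numpy as np

def attention_consistency_loss(intermediate_attn, final_attn):
    """MSE between an intermediate exit's attention map and the final
    exit's map -- one plausible alignment term (the paper's exact loss
    may differ)."""
    return float(np.mean((intermediate_attn - final_attn) ** 2))

def total_loss(task_losses, attn_maps, final_attn, lam=0.1):
    """Combine per-exit task losses with the attention-alignment term,
    weighted by a hypothetical coefficient lam."""
    align = sum(attention_consistency_loss(a, final_attn) for a in attn_maps)
    return sum(task_losses) + lam * align
```

Training with such a term pushes early exits to attend to the same regions as the full network, which is what makes their early predictions both faster and more consistent with the final output.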
Two cofounders of Thinking Machines Lab are leaving the company to rejoin OpenAI. The news is a blow for Thinking Machines Lab, and different narratives are already emerging about what happened.
The California Attorney General has launched a formal investigation into Elon Musk's xAI after its chatbot Grok generated nonconsensual sexual images, including those of minors. Musk denies any awareness of the issue.
Microsoft has fixed a vulnerability in Copilot that allowed attackers to steal sensitive user data with a single click on a URL. The flaw was discovered by Varonis researchers, who demonstrated how it was possible to exfiltrate personal data and chat history details, bypassing enterprise security controls. The attack continued even after the chat was closed, without further user interaction.
California Attorney General Rob Bonta has launched an investigation into Grok, the AI chatbot from Elon Musk's xAI, following its generation of sexual images, including images of minors. The investigation aims to determine whether Grok violates US laws, particularly those concerning non-consensual deepfakes used for online harassment.
OpenAI has partnered with Cerebras to integrate 750MW of high-speed AI compute. The goal is to reduce inference latency and make ChatGPT faster for real-time workloads.
The adoption of AI agents and chatbots brings new security challenges for businesses. Companies must protect sensitive data and ensure regulatory compliance, preventing data leaks and unauthorized access. Managing AI-related risks has become a top priority for enterprises.
AI models, starting with GPT-5.2, are demonstrating increasing capability at solving complex mathematical problems. The impact of these tools is being felt across disciplines, opening new perspectives for research and innovation in mathematics.
AI models are getting so good at finding vulnerabilities that some experts say the tech industry might need to rethink how software is built.
Google Trends' Explore page, which lets users analyze search interest, has received a major upgrade: it now uses Gemini to identify and compare relevant trends.
Google is integrating "personal intelligence" into Gemini, allowing the chatbot to connect to Gmail, Photos, Search, and YouTube. The goal is to provide more useful and personalized answers. The feature is optional and available to AI Pro and AI Ultra subscribers, who can choose which data sources to connect. The integration leverages Google's vast amount of personal data to improve the accuracy of responses.
West Midlands Police has admitted using hallucinated information from Microsoft Copilot to ban Maccabi Tel Aviv football fans from the UK. The force initially denied using AI, but confirmed it after weeks of controversy surrounding a safety advisory group meeting for the Aston Villa-Maccabi Tel Aviv match, amid heightened tensions following a terrorist attack in Manchester.