Topic / Trend: Rising

AI Safety, Ethics, and Regulation

As AI becomes more pervasive, concerns about its safety, ethical implications, and the need for regulation are growing. These concerns span issues such as bias, misuse, and potential existential risks, and are prompting debate among policymakers, researchers, and the public.

Detected: 2026-01-21 · Updated: 2026-01-21

Related Coverage

2026-01-21 The Register AI

OpenAI: Age Prediction Model for ChatGPT Users

OpenAI has begun deploying an age prediction model for its ChatGPT users. The goal is to restrict underage users' access to sensitive or potentially harmful content. This initiative could unlock new monetization opportunities by restricting access b...

2026-01-20 TechCrunch AI

ChatGPT: age estimation to protect young users

OpenAI introduces a new feature in ChatGPT: the model now estimates the age of users. The goal is to prevent the delivery of potentially problematic content to individuals under 18, strengthening safety measures for young people.

2026-01-20 OpenAI Blog

ChatGPT: Age Prediction Rollout for Enhanced Online Safety

OpenAI is rolling out age estimation on ChatGPT to protect younger users. The system assesses whether an account belongs to a minor or an adult, applying specific safeguards for teenagers. The company plans to progressively improve the model's accura...

2026-01-20 The Register AI

VoidLink: Linux malware targeting the cloud, written by an AI agent

A new Linux malware strain, named VoidLink, has been discovered targeting cloud infrastructure. What makes it special? According to researchers, it was developed almost entirely by an artificial intelligence agent, likely directed by a single individual. VoidLink u...

2026-01-19 404 Media

ICE’s Facial Recognition App Misidentified a Woman. Twice

The Mobile Fortify app, used by Immigration and Customs Enforcement (ICE) to identify individuals and determine their immigration status, provided two incorrect names for the same woman during a check. The incident raises doubts about the accuracy of...

2026-01-19 The Register AI

Police chief resigns after force acted on AI hallucination

The chief constable of West Midlands Police has resigned after his police force used fictional output from Microsoft Copilot in deciding to ban Israeli fans from attending a football match. The officer had denied the use of artificial intelligence sy...

2026-01-17 LocalLLaMA

ChatGPT logs everything you type, even if you delete it

Be careful when using ChatGPT: the platform logs every character you type, including sensitive data such as API keys. Even if you delete the text before sending it, the information may have already been stored. Exercise extreme caution with confident...

2026-01-16 Ars Technica AI

Mother of Elon Musk’s child sues xAI over sexualized deepfakes

Ashley St Clair, an influencer and mother of one of Elon Musk’s children, has sued the billionaire's AI company, accusing its Grok chatbot of creating fake sexual imagery of her without her consent. St Clair says she asked xAI to stop creating ...

2026-01-16 ArXiv cs.AI

AI Survival Stories: a Taxonomic Analysis of AI Existential Risk

A new study analyzes the existential risk posed by artificial intelligence (AI) systems. The paper proposes a framework for assessing threats, examining two fundamental premises: the increasing power of AI and the potential danger it poses to humanit...

2026-01-15 Wired AI

Elon Musk's Grok: Issues with Explicit Images Persist

Despite restrictions implemented by X, Grok continues to generate explicit images. Tests reveal that the current limitations are patchy and insufficient to fully address the issue.

2026-01-15 Channel News Asia

Philippines to Ban Grok Over Deepfakes Despite X's Pledges

The Philippines plans to ban Grok, the AI chatbot available on X, due to deepfake concerns. According to the acting executive director of the country's cybercrime center, X's pledge to limit access to Grok will not affect the government's plans.

2026-01-15 ArXiv cs.CL

LLMs vulnerable: new attacks exploit cyberpunk narratives

New research reveals how large language models (LLMs) are susceptible to "jailbreak" techniques that use culturally structured narratives. The attack, called "Adversarial Tales", exploits cyberpunk elements to induce models to perform harmful analyse...

2026-01-14 Ars Technica AI

UK police used Copilot AI “hallucination” to ban football fans

West Midlands Police admitted using hallucinated information from Microsoft Copilot to ban Maccabi Tel Aviv football fans from the UK. The use of AI, initially denied, was confirmed after weeks of controversy surrounding a safety advisory group me...
