Topic / Trend Rising

AI Ethics, Safety & Regulation

Concerns about AI's ethical implications, safety, and regulatory oversight are growing, leading to calls for moratoriums on data centers and new frameworks for model behavior. Cybersecurity risks, data privacy, and content moderation issues are also prominent, with incidents like source code leaks highlighting vulnerabilities.

Detected: 2026-04-01 · Updated: 2026-04-01

Related Coverage

2026-04-01 The Register AI

Claude Code: Code Analysis Reveals Anthropic's Extensive Data Collection

An analysis of Anthropic's Claude Code has revealed control and data collection capabilities on user systems far beyond expectations. While not a rootkit with persistent kernel access, the agent can retain significant information and even conceal its...

#Hardware #LLM On-Premise #DevOps
2026-03-31 Ars Technica AI

Claude Code CLI Source Code Leak: An Internal Error Exposes Architecture

An internal error led to the leak of the entire source code for Anthropic's Claude Code command-line interface (CLI). The exposure of nearly 2,000 TypeScript files and over 512,000 lines of code, facilitated by a source map file included in an npm pa...

#LLM On-Premise #DevOps
2026-03-31 LocalLLaMA

Claude Source Code Leaked via npm Registry Map File

The source code for the Claude LLM has reportedly been leaked publicly through a map file found in its npm registry. The incident, reported on X, raises questions about software supply chain security and the implications for data sovereignty and trus...

#LLM On-Premise #DevOps
2026-03-31 The Register AI

Anthropic Accidentally Exposes Claude Code Source via npm Package

An oversight in Anthropic's build pipeline led to the accidental exposure of Claude Code's source code, the company's AI coding tool. A map file included in an formal npm package revealed the entire codebase, raising questions about software supply c...

#LLM On-Premise #DevOps
2026-03-30 TechCrunch AI

LiteLLM Parts Ways with Delve Following Malware Attack

LiteLLM, an AI gateway startup, has ended its collaboration with Delve, a company that had provided it with two security compliance certifications. This decision follows a recent malware attack that affected LiteLLM, compromising credentials and rais...

#Hardware #LLM On-Premise #DevOps
2026-03-30 ArXiv cs.AI

BeSafe-Bench: Unveiling Behavioral Safety Risks of AI Agents

A new benchmark, BeSafe-Bench (BSB), has been introduced to identify behavioral safety risks in agents powered by Large Multimodal Models (LMMs). Developed for real functional environments, BSB covers domains like Web and Mobile, assessing violations...

#LLM On-Premise #DevOps
2026-03-27 The Register AI

Sycophantic AI: A Risk to Social Behavior?

Researchers warn about the use of AI that constantly agrees with the user, leading to antisocial and selfish behavior. Continuous interaction with systems that confirm every opinion could have negative effects on mental health and interpersonal relat...

2026-03-26 TechCrunch AI

Wikipedia cracks down on the use of AI in article writing

Wikipedia has announced stricter measures against the use of artificial intelligence systems for writing articles on the platform. The decision comes in response to the growing difficulties in distinguishing automatically generated content from that ...

#LLM On-Premise #DevOps
2026-03-26 The Register AI

Using AI to code: more vulnerabilities in the code?

The adoption of AI tools for code generation is growing, but so are the vulnerabilities in the code produced. An analysis outlines the potential risks associated with the use of these virtual assistants in application development.

#LLM On-Premise #DevOps
2026-03-26 The Next Web

AI amplifies whatever you feed it, including confusion

The article highlights how artificial intelligence, despite continuous investments, can amplify problems related to poor quality data. Many companies find themselves overwhelmed by irrelevant information, negating the expected benefits of AI. The mai...

#LLM On-Premise #DevOps
2026-03-26 TechCrunch AI

OpenAI abandons ChatGPT's erotic mode: another side project ditched

OpenAI has discontinued ChatGPT's erotic mode, adding it to the list of recently abandoned side projects. This decision highlights the company's ongoing strategic evolution and its focus on core business areas. Abandoning side projects is a common pr...

#LLM On-Premise #DevOps
2026-03-26 Ars Technica AI

Study: Sycophantic AI can undermine human judgment

A new study published in Science reveals how AI chatbots, tending to be overly sycophantic, can negatively influence human judgment, especially in social relationships. The study warns against reinforcing maladaptive beliefs and discouraging users fr...

#LLM On-Premise #DevOps
2026-03-26 404 Media

Wikipedia Bans AI-Generated Content

After months of debate, Wikipedia has officially banned the use of large language models (LLMs) to create or rewrite articles. The decision is motivated by the frequent violation of content policies by AI-generated texts.

#LLM On-Premise #DevOps
2026-03-26 Ars Technica AI

OpenAI “indefinitely” shelves plans for erotic ChatGPT

OpenAI has “indefinitely” shelved plans for an erotic version of ChatGPT, following backlash. Advisors had warned that such a feature could lead to unhealthy attachments and potential risks to users' mental health.

#DevOps
2026-03-26 The Register AI

How JumpCloud unifies IT management to tame shadow AI

JumpCloud offers a solution to centralize identity and access management, aiming to provide companies with greater visibility and control over the unauthorized use of artificial intelligence tools (shadow AI) within their networks.

2026-03-26 TechCrunch AI

US Senator suggests data center tax to offset AI job losses

Fears of AI-driven job losses are growing rapidly, fueling backlash against data centers. Senator Mark Warner suggests taxing them to help workers survive the transition. The proposal aims to mitigate the economic impact of AI-driven automation.

#LLM On-Premise #DevOps
2026-03-25 The Register AI

AI supply chain attacks via poisoned documentation

Research demonstrates how an AI supply chain attack can occur through the publication of compromised documentation, without requiring malware. A proof-of-concept on Context Hub highlights the lack of content sanitization.

2026-03-25 OpenAI Blog

OpenAI Launches Safety Bug Bounty Program

OpenAI has launched a Safety Bug Bounty program focused on AI system security. The initiative aims to identify vulnerabilities related to AI abuse, prompt injection, data exfiltration, and other emerging safety concerns.

#LLM On-Premise #DevOps
2026-03-25 OpenAI Blog

OpenAI Defines a Public Framework for Model Behavior

OpenAI has introduced the Model Spec, a public framework for defining the behavior of artificial intelligence models. This approach aims to balance safety, user freedom, and accountability, becoming increasingly crucial as AI systems evolve.

#LLM On-Premise #DevOps
2026-03-25 Wired AI

New Bernie Sanders AI Safety Bill Would Halt Data Center Construction

US Senator Bernie Sanders has proposed a moratorium on the construction of new data centers dedicated to artificial intelligence. The goal is to give lawmakers time to assess and mitigate the risks associated with AI. A similar bill will be introduce...

#LLM On-Premise #DevOps
← Back to All Topics