Topic / Trend Rising

AI Security and Ethical Concerns

As AI becomes more prevalent, concerns about security vulnerabilities, misuse, and ethical implications are growing. This includes issues like malware distribution through AI assistants, prompt injection attacks, bias in AI systems, and the potential for AI to be used for malicious purposes.

Detected: 2026-02-07 · Updated: 2026-02-07

Related Coverage

2026-02-07 LocalLLaMA

OpenClaw: Vulnerability Discovered in Malware Delivery Chain

A 1Password researcher discovered that a top-downloaded OpenClaw skill was actually a staged malware delivery chain. The skill, promising Twitter integration, guided users to run obfuscated commands that installed macOS malware capable of stealing cr...

#LLM On-Premise #DevOps
2026-02-06 Ars Technica AI

Lawyer loses case over AI errors: randomly quoted Bradbury

A New York federal judge terminated a case due to a lawyer's repeated misuse of AI. The filings contained fake citations and an overly elaborate writing style, with out-of-place references to ancient libraries and Ray Bradbury's Fahrenheit 451. Reque...

#LLM On-Premise #DevOps
2026-02-06 404 Media

The Neverending Cybersecurity Story: An Analysis

A recent article explores the ever-evolving challenges in cybersecurity, with a particular focus on mobile forensics. The article highlights how authorities are facing increasing difficulties in accessing protected devices, citing the example of a Wa...

#LLM On-Premise #DevOps
2026-02-05 OpenAI Blog

OpenAI introduces Trusted Access for Cyber

OpenAI introduces Trusted Access for Cyber, a trust-based framework that expands access to frontier cyber capabilities while strengthening safeguards against misuse. The initiative aims to balance innovation with responsibility in the cybersecurity f...

2026-02-05 Ars Technica AI

Increase of AI bots on the Internet sparks arms race

A new report indicates that AI-powered bots already account for a meaningful share of web traffic. An increasingly sophisticated arms race is unfolding, as bots deploy clever tactics to bypass website defenses meant to keep them out.

#LLM On-Premise #DevOps
2026-02-05 AI News

Microsoft unveils method to detect sleeper agent backdoors

Microsoft researchers have unveiled a scanning method to identify poisoned AI models with backdoors, even without knowing the specific trigger or the attack's ultimate goal. The method exploits the tendency of these models to memorize training data a...

#DevOps
2026-02-05 DigiTimes

Samsung strengthens semiconductor supply chain cybersecurity

Samsung is strengthening cybersecurity measures in its semiconductor supply chain to prevent leaks of sensitive technological information. The initiative aims to protect intellectual property and trade secrets in the chip industry.

#LLM On-Premise #DevOps
2026-02-05 The Register AI

LLM: Sleeper-Agent Backdoors, a Sci-Fi Security Threat

Large language models (LLMs) face complex security threats, such as sleeper-agent backdoors. These hard-to-detect attacks compromise the integrity and security of the models, opening up sci-fi-like scenarios.

#LLM On-Premise #DevOps
2026-02-04 The Register AI

Clouds rush to deliver OpenClaw-as-a-service offerings

Despite Gartner's warnings about the cybersecurity risks associated with the OpenClaw AI assistant, several cloud platforms have started offering it as a service. The decision raises questions about prioritizing speed of deployment over data security...

#LLM On-Premise #DevOps
2026-02-03 404 Media

Hackers Target ICE Spotting Apps: User Data at Risk?

Applications used to report sightings of ICE (Immigration and Customs Enforcement) agents have been targeted by hackers. Attackers sent threatening messages to users, claiming to have compromised their data and shared it with authorities. While there...

#LLM On-Premise #DevOps
2026-02-03 404 Media

Wedding Photo Booth Company Exposes Customers’ Drunken Photos

A photo booth company, Curator Live, has exposed a large cache of customers’ photos, often showing them in compromising situations. A security researcher flagged the issue, highlighting the privacy risks associated with data collection even at privat...

2026-02-03 Ars Technica AI

The rise of Moltbook: viral AI prompts, the next big security threat?

A new platform of AI agents sharing instructions via prompts could replicate the history of the Morris worm. A programming error could lead to uncontrolled spread, with potentially serious consequences for connected systems. The similarity to the 198...

#LLM On-Premise #DevOps
2026-02-03 The Register AI

OpenClaw: DIY AI bot farm is a security 'dumpster fire'

OpenClaw, an AI-powered personal assistant that users interact with via messaging apps, has prompted a wave of malware and is delivering some shocking bills. Its architecture raises serious concerns about user data and credential security.

#LLM On-Premise #DevOps
2026-02-03 LocalLLaMA

Prompt injection alert on Moltbook: crypto wallet drain

A researcher discovered a prompt injection payload on Moltbook designed to drain funds from cryptocurrency wallets. The payload, disguised as a technical guide, exploits vulnerabilities in AI agents that process social feeds. The attack highlights th...

#LLM On-Premise #DevOps
2026-02-02 The Register AI

OpenClaw patches one-click RCE as security Whac-A-Mole continues

Researchers have disclosed a rapid exploit chain that let attackers run code via a single malicious web page. Multiple projects are patching bot takeover and remote code execution (RCE) exploits within the OpenClaw ecosystem, formerly known as ClawdB...

2026-02-01 Tom's Hardware

Malicious OpenClaw ‘skill’ targets crypto users on ClawHub

Security researchers are warning that the growing ecosystem around ‘OpenClaw,’ the self-hosted AI assistant formerly known as Clawdbot, has already become a target for malware distribution. Fourteen malicious skills were uploaded to ClawHub last mont...

#LLM On-Premise #DevOps
2026-02-01 LocalLLaMA

Exposed Moltbook Database: Full Control of AI Agents

A vulnerability in the Moltbook database allowed anyone to take control of the AI agents on the platform. The incident raises serious concerns about security and access management in systems that orchestrate the execution of large language models.

#LLM On-Premise #DevOps
2026-02-01 404 Media

Moltbook: Vulnerability Exposes Control of AI Agents

A security vulnerability in Moltbook, a social media platform for AI agents, exposed APIs, allowing anyone to take control of the agents and post content on their behalf. The vulnerability, discovered by a researcher, stemmed from a misconfiguration ...

#LLM On-Premise #DevOps
2026-01-31 Wired AI

Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

An informant claims Jeffrey Epstein had a personal hacker. In cybersecurity news, an AI agent named OpenClaw is causing concern among experts. China has executed 11 people involved in scam compounds. A $40 million crypto theft has an unexpected alleg...

← Back to All Topics