Topic / Trend Rising

AI Safety, Security, and Ethical Concerns

As AI becomes more pervasive, concerns are growing about its potential misuse, including generating inappropriate content, creating security vulnerabilities, and raising ethical questions about data privacy and bias. Companies and regulators are grappling with how to address these challenges.

Detected: 2026-02-04 · Updated: 2026-02-19

Related Coverage

2026-02-18 The Register AI

Copilot spills the beans, summarizing emails it's not supposed to read

Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it. The bot couldn't keep its prying eyes away. The incident raises concerns about data security and the ...

#LLM On-Premise #DevOps
2026-02-18 Wired AI

AI and Climate: Big Tech Claims Lack Solid Evidence, Report Finds

A new report analyzes 154 claims about the positive impact of AI on the climate. Only a quarter of these cite academic research as support, while a third provide no evidence at all. The analysis raises doubts about the actual validity of Big Tech's p...

2026-02-18 The Register AI

HackerOne clarifies terms: researcher data not used to train AI

HackerOne has clarified its stance on the use of GenAI after researchers expressed concerns that their submissions were being used to train models. The company emphasized the importance of security researchers and denied that their data is used as tr...

#LLM On-Premise #DevOps
2026-02-18 Tech.eu

Tenet launches €80M fund for "AI roll-ups" in Europe

Berlin-based investment firm Tenet has launched an €80 million fund focused on "AI roll-ups," a strategy involving acquiring companies in service sectors and integrating AI to improve efficiency. Tenet has already invested in Taxforce, a German AI-na...

2026-02-17 404 Media

AI-Powered Private School: Faulty Lesson Plans and 'Scraped' Web Data

Alpha School, a private school heavily reliant on AI for teaching, is under scrutiny. Internal documents reveal AI-generated lesson plans that sometimes do more harm than good. The school is also accused of scraping data from other online courses wit...

#LLM On-Premise #DevOps
2026-02-17 The Next Web

European Parliament disables AI on work devices due to privacy risks

The European Parliament has disabled built-in artificial intelligence features on work devices used by lawmakers and staff. The decision is motivated by unresolved concerns about data security, privacy, and the opaque nature of cloud-based AI process...

#LLM On-Premise #DevOps
2026-02-17 Tom's Hardware

Smart sleep mask reveals security flaws in brainwave data

An engineer discovered that a smart sleep mask, due to software vulnerabilities and hardcoded credentials, can potentially expose users' brainwaves. The discovery raises serious concerns about the privacy and security of IoT devices.

2026-02-17 The Register AI

X's Grok AI under investigation for inappropriate image generation

The Irish Data Protection Commission (DPC) has launched an investigation into X (formerly Twitter) following reports of problematic image generation by the Grok AI chatbot. The investigation adds to a growing number of regulatory checks.

#LLM On-Premise #DevOps
2026-02-16 404 Media

Underground Facial Recognition Tool Unmasks Camgirls

An underground site uses facial recognition to reveal the platforms camgirls use, potentially exposing them to privacy risks. Misuse of photos from social media could reveal their activity to stalkers, harassers, or employers. The site's creator clai...

#LLM On-Premise #DevOps
2026-02-16 ArXiv cs.CL

Bias in LLM Agents: Persona Assignment Affects Robustness

A new study reveals that assigning demographic-based personas to large language models (LLMs) can introduce biases and degrade performance across various scenarios, with performance drops of up to 26%. The research highlights a critical vulnerability...

#LLM On-Premise #DevOps
2026-02-15 TechCrunch AI

Anthropic and the Pentagon Reportedly Arguing Over Claude Usage

According to a new report in Axios, the Pentagon is pushing AI companies, including Anthropic, OpenAI, Google, and xAI, to allow the U.S. military to use their technology for “all lawful purposes.” Anthropic is reportedly pushing back against this de...

#LLM On-Premise #DevOps
2026-02-15 IEEE Spectrum

NASA Let AI Drive the Perseverance Rover

NASA has taken another step towards autonomous surface rovers, using AI to generate waypoints for the Perseverance rover. The model, based on Anthropic's Claude AI, analyzed orbital images and digital elevation models to identify hazards and generate...

2026-02-15 Tech in Asia

India Tech Titans Struggle as AI Risks Accelerate

India's tech sector faces a significant sell-off, signaling a recalibration of expectations. Simultaneously, concerns are mounting about the accelerating risks associated with artificial intelligence, as highlighted by former Anthropic researchers.

#LLM On-Premise #DevOps
2026-02-14 TechCrunch AI

xAI: Is Grok going to be more unhinged? Ex-employee speaks

Elon Musk is reportedly "actively" working to make xAI's Grok chatbot "more unhinged," according to a former employee. The news raises questions about safety and quality control policies within the company.

#LLM On-Premise #DevOps
2026-02-14 The Register AI

Google and OpenAI warn: AI models at risk of cloning

Google and OpenAI have raised concerns about competitors, including China's DeepSeek, probing their AI models to steal underlying reasoning and replicate capabilities. This practice raises questions about intellectual property protection in the AI se...

#LLM On-Premise #DevOps
2026-02-13 OpenAI Blog

ChatGPT: new defenses against prompt injection attacks

OpenAI introduces Lockdown Mode and Elevated Risk labels in ChatGPT to protect organizations from prompt injection attacks and AI-driven data exfiltration. The new features aim to strengthen data security and prevent misuse of the model.

#LLM On-Premise #DevOps
2026-02-13 TechCrunch AI

Meta plans to add facial recognition to its smart glasses, report claims

Meta is reportedly developing a feature, internally known as "Name Tag," that would allow its smart glasses to identify people and provide information about them via Meta's AI assistant. The technology raises questions about privacy and the use of pe...

#LLM On-Premise #DevOps
2026-02-13 The Register AI

Misconfigured AI: Could it Trigger Infrastructure Meltdown?

Gartner warns that the rapid rollout of AI systems into critical infrastructure raises the risk of outages. A misconfigured AI system could trigger national-scale blackouts, potentially surpassing the threats posed by cyberattacks or extreme weather ...

#LLM On-Premise #DevOps
2026-02-13 Tom's Hardware

Google: State-sponsored hackers using Gemini in attacks

Google reports that state-sponsored actors from China, Russia, and Iran are leveraging Gemini in various stages of cyberattacks. The AI is being used for phishing, malicious code development, and vulnerability testing, enhancing the offensive capabil...

#LLM On-Premise #DevOps
2026-02-12 Ars Technica AI

Google: Gemini targeted by prompt attacks to clone the model

Google reveals that actors attempted to extract knowledge from its Gemini model via extensive prompting, aiming to train cheaper copycat models. The company defines these illicit activities as intellectual property theft, raising questions about the ...

#LLM On-Premise #DevOps
2026-02-12 404 Media

AI Abuse: Nude AI Images Created, OnlyFans Opened in Her Name

A woman was victimized by AI-generated images. Strangers created nude images from her profile and opened an OnlyFans account in her name. The incident occurred during a surge in the generation of sexual images via AI, raising questions about the misu...

2026-02-12 AI News

State-sponsored hackers exploit AI for advanced cyberattacks

State-sponsored hackers are exploiting AI models like Gemini to refine phishing attacks and develop malware. Groups from Iran, North Korea, China, and Russia are leveraging AI for reconnaissance, social engineering, and malware development, increasin...

2026-02-06 DigiTimes

South Korea aims to lead global quantum chip manufacturing by 2035

South Korea has announced an ambitious plan to become a global leader in quantum chip manufacturing by 2035. The initiative aims to position the country at the forefront of this emerging technological sector, crucial for the future of high-performanc...

#Hardware #LLM On-Premise #DevOps
2026-02-06 ArXiv cs.LG

A Causal Perspective for Enhancing Jailbreak Attack and Defense

New research proposes Causal Analyst, a framework to identify the direct causes of jailbreaks in large language models (LLMs). The system uses causal analysis to enhance both attacks and defenses, demonstrating how specific prompt features can trigge...

#LLM On-Premise #Fine-Tuning #DevOps
2026-02-06 LocalLLaMA

Qwen3-235B: User Praises Local Performance

A user shared their positive experience with the Qwen3-235B language model, running it on a desktop system. The user highlighted the model's accuracy and utility, to the point of preferring it over a commercial ChatGPT subscription.

#LLM On-Premise #DevOps
2026-02-06 LocalLLaMA

Qwen3-Coder: improved performance on RTX 5090 with llama.cpp

A user reported a significant throughput increase, up to 26 tokens/second, using the Qwen3-Coder-Next-Q4_K_S model with llama.cpp on an RTX 5090. The optimization was achieved by offloading MoE expert tensors to the CPU and quantizing the KV cache.

#Hardware #LLM On-Premise
2026-02-06 DigiTimes

Taiwan's drone exports surge, targeting NT$20 billion

Taiwan's drone exports are surging, with the economics ministry confident in reaching the NT$20 billion target. This increase reflects the growing global demand for drones in both civilian and military applications, and Taiwan's ability to compete in...

2026-02-06 DigiTimes

Largan posts 11% yearly revenue gain despite seasonal slowdown

Optics manufacturer Largan reported an 11% increase in yearly revenue, despite a seasonal slowdown. The company, specializing in smartphone components, continues to benefit from demand in the sector, while still being affected by typical market fluct...

#LLM On-Premise
2026-02-06 DigiTimes

CSPs turn to custom silicio to break Nvidia dependence

Cloud service providers (CSPs) are exploring custom silicio solutions to diversify their hardware options and reduce dependence on traditional vendors like Nvidia. This trend could lead to new architectures optimized for specific workloads.

#Hardware #LLM On-Premise #DevOps
2026-02-05 404 Media

US DOJ Redacted Mona Lisa Photo in Epstein Files

The US Department of Justice redacted the face of the Mona Lisa in a 2009 email, part of the files related to Jeffrey Epstein. Simultaneously, sensitive data of victims were released online, raising criticism about the department's actions.

2026-02-05 Phoronix

GNU Nettle 4.0 Released With SLH-DSA Support

The GNU Nettle cryptographic library has a major new update that introduces support for SLH-DSA, the post-quantum signature scheme selected by NIST for the FIPS 205 standard.

2026-02-05 The Register AI

Anthropic apes OpenAI with cheeky chatbot commercials

Anthropic, the maker of Claude, appears to be taking a jab at OpenAI with an ad campaign alluding to the latter's plans. AI companies are looking for new ways to spend resources, other than model training. One strategy is to buy high-profile ad space...

#LLM On-Premise #DevOps
2026-02-05 PyTorch Blog

PyTorch for Recommendation Systems: Building Highly Efficient Inference

Meta has developed a PyTorch-based inference system for recommendations, crucial for translating advanced research into production services. The article describes the workflow, from the definition of the trained model to inference transformations, op...

#Hardware #LLM On-Premise #DevOps
2026-02-05 OpenAI Blog

OpenAI introduces Trusted Access for Cyber

OpenAI introduces Trusted Access for Cyber, a trust-based framework that expands access to frontier cyber capabilities while strengthening safeguards against misuse. The initiative aims to balance innovation with responsibility in the cybersecurity f...

2026-02-05 LocalLLaMA

DeepBrainz-R1: Small Models for Agentic Workflows Released

DeepBrainz has released DeepBrainz-R1, a family of small language models (4B, 2B, 0.6B) focused on reasoning for agentic workflows. Optimized for multi-step reasoning and stability in tool-calling, these Apache 2.0 models aim to provide predictable b...

#LLM On-Premise #DevOps
2026-02-05 The Register AI

SAP Migrations: Budgets and Timelines Often Exceeded, Research Finds

Nearly 60% of SAP migration projects are delayed and over budget, according to ISG research. Organizations often underestimate complexity, allow scope expansion, and fail to understand internal constraints. ECC support ends in 2027.

#LLM On-Premise #DevOps
2026-02-05 Phoronix

Debian Restricts CI Data Access Due to LLM Scrapers / Bot Traffic

Debian's continuous integration (CI) infrastructure has restricted public access to its data due to excessive scraping by bots used to train large language models (LLMs). The load generated by these scrapers has impacted web server resources.

#LLM On-Premise #DevOps
2026-02-05 The Register AI

UK's 'world-first' deepfake detection framework faces scrutiny

The UK government, in collaboration with Microsoft, announces a framework to evaluate deepfake detection technologies, responding to the exponential growth of AI-generated content. However, industry experts express doubts about the actual effectivene...

#LLM On-Premise #DevOps
2026-02-05 Ars Technica AI

Increase of AI bots on the Internet sparks arms race

A new report indicates that AI-powered bots already account for a meaningful share of web traffic. An increasingly sophisticated arms race is unfolding, as bots deploy clever tactics to bypass website defenses meant to keep them out.

#LLM On-Premise #DevOps
2026-02-05 The Register AI

n8n security woes roll on as new critical flaws bypass December fix

Multiple newly disclosed bugs in the popular workflow automation tool n8n could allow attackers to hijack servers, steal credentials, and quietly disrupt AI-driven business processes. The patch meant to close a severe expression bug fails to stop att...

#LLM On-Premise #DevOps
2026-02-05 AI News

Microsoft unveils method to detect sleeper agent backdoors

Microsoft researchers have unveiled a scanning method to identify poisoned AI models with backdoors, even without knowing the specific trigger or the attack's ultimate goal. The method exploits the tendency of these models to memorize training data a...

#DevOps
2026-02-05 Wired AI

Hollywood Is Losing Audiences to AI Fatigue

Entertainment about or made with artificial intelligence has been missing the mark with viewers over the past year. After a period of high interest, audiences may be showing signs of fatigue towards this type of content.

2026-02-05 MIT Technology Review

The most misunderstood graph in AI

A graph produced by METR, an AI research nonprofit, has become a benchmark for evaluating the progress of large language models (LLMs). However, its interpretation is often a source of confusion. The analysis primarily focuses on coding tasks and mea...

#LLM On-Premise #DevOps
2026-02-05 DigiTimes

Samsung strengthens semiconductor supply chain cybersecurity

Samsung is strengthening cybersecurity measures in its semiconductor supply chain to prevent leaks of sensitive technological information. The initiative aims to protect intellectual property and trade secrets in the chip industry.

#LLM On-Premise #DevOps
2026-02-05 The Register AI

LLM: Sleeper-Agent Backdoors, a Sci-Fi Security Threat

Large language models (LLMs) face complex security threats, such as sleeper-agent backdoors. These hard-to-detect attacks compromise the integrity and security of the models, opening up sci-fi-like scenarios.

#LLM On-Premise #DevOps
2026-02-05 ArXiv cs.CL

NLP for Automated Classification of CS Curriculum Materials

A new study explores the use of Natural Language Processing (NLP), including Large Language Models (LLM), to automatically classify pedagogical materials against computer science curriculum guidelines. The goal is to accelerate and simplify the proce...

#RAG
2026-02-05 ArXiv cs.LG

Differentially Private Training Impact on Memorization of Long-Tailed Data

A new study analyzes the impact of differentially private training (DP-SGD) on long-tailed data, characterized by a large number of rare samples. The research highlights how DP-SGD can lead to suboptimal generalization performance, especially on thes...

#LLM On-Premise #Fine-Tuning #DevOps
2026-02-05 ArXiv cs.AI

LLMs: Enhanced Reasoning for Mathematical Problem Solving

A new method, Iteratively Improved Program Construction (IIPC), enhances the mathematical reasoning capabilities of large language models (LLMs). IIPC iteratively refines programmatic reasoning chains, combining execution feedback with the Chain-of-t...

2026-02-05 DigiTimes

Qualcomm reports record results, flags memory constraints

Qualcomm reported record financial results for Q1FY26. However, the company anticipates potential limitations related to memory availability in the near term, a factor that could impact deliveries and the ability to meet demand.

#LLM On-Premise #DevOps
2026-02-05 LocalLLaMA

Incomplete SOTA Models: The Disappointment of Tencent's Youtu-VL-4B

A user expressed frustration with Tencent's Youtu-VL-4B model, advertised as a state-of-the-art (SOTA) solution for various computer vision tasks. Despite the promises, the released code was found to be incomplete, with key features missing and hidde...

#DevOps
2026-02-04 The Register AI

AI-powered bots may overtake human users on the web

Traffic generated by AI bots, particularly those using RAG (Retrieval-Augmented Generation) architectures, is growing rapidly. Some estimates predict that these bots will surpass human traffic on publisher websites by the end of the year, driven by t...

#RAG
2026-02-04 Ars Technica AI

Anthropic says no to ads in its Claude chatbot

Anthropic has announced that its Claude chatbot will remain ad-free, drawing a line between itself and OpenAI, which has begun testing ads in a low-cost tier of ChatGPT. Anthropic argues that advertising would be incompatible with the goal of making ...

2026-02-04 The Next Web

When the machines started talking to each other: the Moltbook case

An article explores the implications of Moltbook, a social network designed exclusively for AI agents. It raises questions about the autonomous behavior of artificial intelligence systems and the potential consequences of unsupervised interactions be...

#LLM On-Premise #DevOps
2026-02-04 The Register AI

Microsoft actually does something useful, adds Sysmon to Windows

Microsoft has integrated Sysmon functionality directly into Windows. Sysmon, a system monitoring tool, provides administrators with deeper control over operating system activity. This move simplifies security management and monitoring for Windows inf...

#LLM On-Premise #DevOps
2026-02-04 Wired AI

AI Bots Are Now a Significant Source of Web Traffic

New data shows AI bots pushing deeper into the web, prompting publishers to roll out more aggressive defenses to mitigate potential negative impacts and ensure data integrity.

#LLM On-Premise #DevOps
2026-02-04 LocalLLaMA

Qwen3-Coder-Next: NVFP4 Quantization Released (45GB)

A quantized version of Qwen3-Coder-Next in NVFP4 format is now available, weighing 45GB. The model was calibrated using the ultrachat_200k dataset, with a 1.63% accuracy loss in the MMLU Pro+ benchmark.

#Hardware #LLM On-Premise #Fine-Tuning
2026-02-04 The Register AI

Clouds rush to deliver OpenClaw-as-a-service offerings

Despite Gartner's warnings about the cybersecurity risks associated with the OpenClaw AI assistant, several cloud platforms have started offering it as a service. The decision raises questions about prioritizing speed of deployment over data security...

#LLM On-Premise #DevOps
2026-02-03 LangChain Blog

Context Management for Deep AI Agents: Techniques and Evaluations

Effective context management is crucial for AI agents operating on complex, long-running tasks, in order to prevent the loss of relevant information and manage the memory constraints of large language models (LLMs). LangChain's Deep Agents SDK implem...

2026-02-03 Tech Titans

Building an AI-ready enterprise: the foundations most companies miss

Gartner predicts that by 2026, AI will be a foundational enterprise infrastructure. However, many companies are unprepared, investing in AI platforms without addressing architectural debt, data management, and operating models. Success requires a min...

#LLM On-Premise #DevOps
2026-02-03 Ars Technica AI

X office raided in France's Grok probe; Elon Musk summoned

French authorities raided X's Paris office and summoned Elon Musk for questioning regarding the dissemination of illegal content via the Grok chatbot. The investigation concerns Holocaust-denial claims and sexually explicit deepfakes. Former CEO Lind...

#LLM On-Premise #DevOps
2026-02-03 OpenAI Blog

The Sora feed philosophy: creativity, connections, and safety

OpenAI outlines the principles behind Sora's feeds, its text-to-video model. The goal is to stimulate user creativity, promote meaningful interactions, and ensure a safe experience through personalized recommendations, parental controls, and robust s...

2026-02-03 404 Media

Hackers Target ICE Spotting Apps: User Data at Risk?

Applications used to report sightings of ICE (Immigration and Customs Enforcement) agents have been targeted by hackers. Attackers sent threatening messages to users, claiming to have compromised their data and shared it with authorities. While there...

#LLM On-Premise #DevOps
2026-02-03 Ars Technica AI

OpenAI: ChatGPT Prioritized, Senior Staff Departures

OpenAI is focusing resources on ChatGPT development, at the expense of long-term research. This strategic shift, driven by increasing competition from Google and Anthropic, has led to the resignation of key figures such as Jerry Tworek, Andrea Vallon...

#LLM On-Premise #DevOps
2026-02-03 404 Media

Wedding Photo Booth Company Exposes Customers’ Drunken Photos

A photo booth company, Curator Live, has exposed a large cache of customers’ photos, often showing them in compromising situations. A security researcher flagged the issue, highlighting the privacy risks associated with data collection even at privat...

2026-02-03 LocalLLaMA

Moltbook Leak Exposes 1.5 Million API Keys

A security vulnerability in Moltbook led to the exposure of 1.5 million API keys. The flaw allowed direct database access through an exposed Supabase key, enabling the reading of private messages and content modification. The incident raises concerns...

#LLM On-Premise #DevOps
2026-02-03 MIT Technology Review

What we’ve been getting wrong about AI’s truth crisis

The article analyzes how tools for verifying the authenticity of AI-generated content are failing to restore social trust. The use of AI to alter images and videos by government agencies and media raises questions about the effectiveness of current c...

#LLM On-Premise #DevOps
2026-02-03 Tech.eu

Kinnevik slashes valuation of stake in Swedish green startup Stegra

Swedish VC Kinnevik has written down the value of its stake in Stegra, a green steel startup, by half. The write-down is due to higher anticipated costs for the construction of a plant for hydrogen-based steel production. The project has been delayed...

#LLM On-Premise #DevOps
2026-02-03 Ars Technica AI

The rise of Moltbook: viral AI prompts, the next big security threat?

A new platform of AI agents sharing instructions via prompts could replicate the history of the Morris worm. A programming error could lead to uncontrolled spread, with potentially serious consequences for connected systems. The similarity to the 198...

#LLM On-Premise #DevOps
2026-02-03 The Register AI

Firefox makes AI optional: a welcome choice?

Mozilla has introduced the ability to completely disable generative artificial intelligence features within the Firefox browser. This decision responds to the need to offer users greater control over the integration of AI and its presence in the brow...

#LLM On-Premise #DevOps
2026-02-03 The Register AI

OpenClaw: DIY AI bot farm is a security 'dumpster fire'

OpenClaw, an AI-powered personal assistant that users interact with via messaging apps, has prompted a wave of malware and is delivering some shocking bills. Its architecture raises serious concerns about user data and credential security.

#LLM On-Premise #DevOps
2026-02-03 LocalLLaMA

GLM releases open-source OCR model

GLM has released an open-source Optical Character Recognition (OCR) model. The model, named GLM-OCR, is available on Hugging Face. It appears to be composed of a 0.9 billion parameter vision model and a 0.5 billion parameter language model, suggestin...

#LLM On-Premise #DevOps
2026-02-03 LocalLLaMA

Prompt injection alert on Moltbook: crypto wallet drain

A researcher discovered a prompt injection payload on Moltbook designed to drain funds from cryptocurrency wallets. The payload, disguised as a technical guide, exploits vulnerabilities in AI agents that process social feeds. The attack highlights th...

#LLM On-Premise #DevOps
2026-02-03 DigiTimes

Moltbook experiment reignites debate over networked AI agents in 2026

An experiment with networked AI agents, called Moltbook, has reignited the debate on the future implications of distributed artificial intelligence. The initiative raises crucial questions about the interoperability, security, and ethics of AI agents...

#LLM On-Premise #DevOps
2026-02-03 Tech.eu

Refute raises £5M seed round to address disinformation threats

London-based Refute, an AI-powered counter-disinformation company, has raised £5 million in seed funding. The company plans to use the new funding to further develop its technology and address the growing challenges posed by disinformation threats.

2026-02-03 DigiTimes

China's AI chip swarm hits mass scale, challenging Nvidia

China is scaling up its AI chip production, eroding Nvidia's dominant position in the Chinese market. This push for technological self-sufficiency could reshape the AI hardware landscape, with significant implications for companies operating in the s...

#Hardware #LLM On-Premise #DevOps
2026-02-02 Wired AI

xAI Merges with SpaceX: Musk Consolidates Control Over AI and Security

Elon Musk integrates his artificial intelligence startup, xAI, into SpaceX. This strategic move strengthens Musk's control over key sectors such as national security, social media, and artificial intelligence, creating a synergy between his companies...

#LLM On-Premise #DevOps
2026-02-02 TechCrunch AI

Firefox: Granular Control Over Generative AI Coming Soon

Firefox will introduce new settings in version 148 to control the generative AI features integrated into the browser. Users will be able to completely block these features, offering greater control over their browsing experience.

#LLM On-Premise #DevOps
2026-02-02 IEEE Spectrum

Don’t Regulate AI Models. Regulate AI Use

As China, Europe, and the United States define AI regulations, a crucial debate emerges: should the focus be on the models or their use? The article proposes regulating AI use based on risk, with proportionate obligations, rather than limiting model ...

2026-02-02 The Register AI

OpenClaw patches one-click RCE as security Whac-A-Mole continues

Researchers have disclosed a rapid exploit chain that let attackers run code via a single malicious web page. Multiple projects are patching bot takeover and remote code execution (RCE) exploits within the OpenClaw ecosystem, formerly known as ClawdB...

2026-02-02 Tech.eu

Menotracker: menopause app prioritizes user data privacy

Menotracker, an AI-powered menopause tracking app, stands out for its focus on privacy. In partnership with ConsentKeys, the app ensures that users' personal data is never stored, addressing concerns about the sale and misuse of sensitive health info...

#LLM On-Premise #DevOps
2026-02-02 DigiTimes

SMIC reportedly sets up advanced packaging research institute in Shanghai

Chinese semiconductor manufacturer SMIC has reportedly established a research institute in Shanghai focusing on the development of advanced packaging technologies. This strategic move aims to enhance production capabilities and innovation in the semi...

#Hardware #LLM On-Premise #DevOps
2026-02-02 ArXiv cs.CL

LLM Safety: Examining the Impact of Drunk Language Inducement

A new study explores how altering language, simulating a state of intoxication, can compromise the safety of large language models (LLMs). Through various induction techniques, researchers observed increased vulnerability to jailbreaking and privacy ...

#Fine-Tuning
2026-02-02 LocalLLaMA

LLM and unexpected requests: when AI responds outside the box

A Reddit post showcases an unexpected response from a large language model (LLM) to an initial request without a system prompt. The example highlights the difficulty of predicting LLM outputs in unstructured contexts and without preliminary instructi...

#LLM On-Premise #DevOps
2026-02-01 Tom's Hardware

Malicious OpenClaw ‘skill’ targets crypto users on ClawHub

Security researchers are warning that the growing ecosystem around ‘OpenClaw,’ the self-hosted AI assistant formerly known as Clawdbot, has already become a target for malware distribution. Fourteen malicious skills were uploaded to ClawHub last mont...

#LLM On-Premise #DevOps
2026-02-01 LocalLLaMA

Exposed Moltbook Database: Full Control of AI Agents

A vulnerability in the Moltbook database allowed anyone to take control of the AI agents on the platform. The incident raises serious concerns about security and access management in systems that orchestrate the execution of large language models.

#LLM On-Premise #DevOps
2026-02-01 LocalLLaMA

NanoChat: Beating GPT-2 for Under $100

Andrej Karpathy demonstrated how to surpass GPT-2's performance with a model called NanoChat, trained in just three hours on 8 H100 GPUs. The project includes details on the architecture, optimizers used, data setup, and a script for reproducing the ...

#Hardware #LLM On-Premise #DevOps
2026-02-01 404 Media

Moltbook: Vulnerability Exposes Control of AI Agents

A security vulnerability in Moltbook, a social media platform for AI agents, exposed APIs, allowing anyone to take control of the agents and post content on their behalf. The vulnerability, discovered by a researcher, stemmed from a misconfiguration ...

#LLM On-Premise #DevOps
2026-01-31 Wired AI

Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

An informant claims Jeffrey Epstein had a personal hacker. In cybersecurity news, an AI agent named OpenClaw is causing concern among experts. China has executed 11 people involved in scam compounds. A $40 million crypto theft has an unexpected alleg...

2026-01-30 404 Media

Embarrassing Email for Musk: Parties on Epstein's Island?

New documents from the U.S. Department of Justice reveal emails between Elon Musk and Jeffrey Epstein, dating back to 2012-2013. Musk had denied involvement with Epstein, but the emails show discussions about visits to the private island and requests...

2026-01-30 404 Media

ELITE User Guide: Palantir's Tool for ICE

The user guide for ELITE, Palantir's tool used by Immigration and Customs Enforcement (ICE), has been revealed. The application allows mapping potential deportation targets and accessing individual dossiers, with a "confidence" score based on data fr...

2026-01-30 404 Media

Silicio Valley’s Favorite New AI Agent Has Serious Security Flaws

Moltbot, a viral AI agent popular in Silicio Valley, has significant security flaws. A hacker demonstrated how to exploit a backdoor in its support system to access sensitive user data. This raises concerns about the security of AI agents that automa...

#LLM On-Premise #DevOps
2026-01-30 TechWire Asia

Shadow AI: Risks for Asian Enterprises and Data Sovereignty

A Reco report reveals that 91% of AI tools operate outside corporate IT control, creating risks for data sovereignty, especially in Asia, with fragmented privacy regulations. Lack of AI governance could compromise compliance and business continuity, ...

2026-01-29 Wired AI

AI-Generated Anti-ICE Videos: Catharsis or Misinformation?

AI-generated videos depicting people of color confronting Immigration and Customs Enforcement (ICE) agents are circulating on platforms like Instagram and Facebook. These videos raise questions about their impact: are they a form of catharsis or do t...

2026-01-29 404 Media

Senators Push for Answers on ICE's Surveillance Shopping Spree

Senators Mark Warner and Tim Kaine have formally asked the inspector general of the Department of Homeland Security (DHS) to investigate the surveillance technologies used by ICE and CBP. The request follows several reports on the use of tools like C...

#LLM On-Premise #DevOps
2026-01-29 TechCrunch AI

Music publishers sue Anthropic for $3B over copyright infringement

A group of music publishers has filed a lawsuit against Anthropic, accusing it of massive copyright infringement. The lawsuit concerns the unauthorized use of approximately 20,000 copyrighted musical works, with a claim for damages amounting to $3 bi...

2026-01-28 The Register AI

Claude Code: Prying AIs read off-limits secret files

Anthropic's Claude Code AI continues to access sensitive data such as passwords and API keys, even when explicitly instructed to ignore them. Developers are working to fix the issue and ensure data security.

#LLM On-Premise #DevOps
2026-01-28 OpenAI Blog

OpenAI Enhances Data Security in AI Agents

OpenAI implements new safeguards for data handling when AI agents access external links. Built-in security measures aim to prevent data exfiltration via URLs and prompt injection attacks, ensuring a safer environment for users.

#LLM On-Premise #DevOps
2026-01-28 404 Media

Hackers Claim Data Breach at Match Group, Owner of Hinge and OkCupid

Match Group, the online dating giant including platforms like Hinge and OkCupid, has suffered a data breach. Hackers claim to have stolen 1.7GB of compressed data, including unique advertising IDs and internal company documents. Match Group is invest...

#LLM On-Premise #DevOps
2026-01-28 MIT Technology Review

LLM Security: Rules succeed at the boundary, fail at the prompt

Prompt injection attacks and the malicious use of AI agents require a paradigm shift in security. Defenses based on semantic rules are fragile. Solid governance, access control, continuous monitoring, and policies enforced at architectural boundaries...

#LLM On-Premise #DevOps
← Back to All Topics