Topic / Trend: Stable

Anthropic and AI Safety

Anthropic is facing scrutiny and internal debate regarding the safety and ethical implications of its AI models, particularly in relation to military applications. The company is navigating a complex landscape of regulatory pressures, ethical considerations, and competitive dynamics.

Detected: 2026-03-02 · Updated: 2026-03-02

Related Coverage

2026-03-01 LocalLLaMA

U.S. Used Anthropic AI Tools During Airstrikes After Ban

Despite a ban imposed by President Trump, the U.S. used Anthropic's artificial intelligence tools, including Claude AI, for intelligence assessments, target identification, and combat simulations during airstrikes in Iran. The Pentagon had previo...

#LLM On-Premise #DevOps
2026-03-01 TechCrunch AI

Anthropic and self-regulation: a double-edged sword?

Anthropic, OpenAI, and Google DeepMind have committed to self-regulation in the field of artificial intelligence. However, the absence of external regulations could prove problematic, exposing them to unforeseen risks and limiting their strategic maneu...

#LLM On-Premise #DevOps
2026-02-28 Ars Technica AI

Trump moves to ban Anthropic from the US government

US President Donald Trump announced that he was instructing every federal agency to “immediately cease” use of Anthropic’s AI tools. The move comes after weeks of clashes between Anthropic and top officials over military applications of artifi...

#LLM On-Premise #DevOps
2026-02-28 Wired AI

Anthropic Hits Back After US Military Labels It a 'Supply Chain Risk'

Anthropic says it would be “legally unsound” for the Pentagon to blacklist its technology. The dispute arises after talks over military use of its artificial intelligence models broke down. The company contests the supply chain risk assessment.

#LLM On-Premise #DevOps
2026-02-27 Tom's Hardware

Trump bans Anthropic AI from federal agencies over military use concerns

The Trump administration banned the use of Anthropic's AI in federal agencies after the company refused to unlock certain capabilities, citing risks related to autonomous military applications and mass domestic surveillance. The decision raises quest...

#LLM On-Premise #DevOps
2026-02-27 TechCrunch AI

President Trump orders federal agencies to stop using Anthropic

President Donald Trump has ordered federal agencies to stop using Anthropic's services following a dispute with the Pentagon. The decision was communicated through a post in which Trump expressed his opposition to future collaborations with th...

2026-02-27 TechCrunch AI

Anthropic vs. the Pentagon: Clash over AI for autonomous weapons

Anthropic and the Pentagon are clashing over the use of artificial intelligence in autonomous weapons and surveillance. The dispute raises crucial questions about national security, corporate control, and who sets the rules for military AI.

#LLM On-Premise #DevOps
2026-02-27 TechCrunch AI

AI Regulation: The Billion-Dollar Battle for Control

The Pentagon and Anthropic are battling over control of military AI use, while data center construction faces local opposition. A state legislator seeks a middle ground in the AI debate. Alex Bores, a New York State Assemblymember, discusses the issu...

#LLM On-Premise #DevOps
2026-02-27 Tom's Hardware

Anthropic: AI safeguards non-negotiable with the Pentagon

Anthropic refuses to lower AI safety guardrails for the U.S. Department of Defense. The company will not allow Claude to be used for mass surveillance or to power fully autonomous weapons.

#LLM On-Premise #DevOps
2026-02-27 The Next Web

Anthropic and the Pentagon: AI ethics vs. military interests

The potential collaboration between Anthropic and the U.S. Department of Defense raises questions about the ethics of artificial intelligence in the military domain. Differences between Anthropic's vision and the Pentagon's needs could redefine the ...

#LLM On-Premise #DevOps
2026-02-26 Wired AI

Anthropic vs. Pentagon: AI tensions and TAT-8 undersea cables

The article analyzes the growing tensions between Anthropic and the Pentagon, exploring the implications for artificial intelligence development. It also discusses the strategic importance of TAT-8 undersea cables for global communications.

#LLM On-Premise #DevOps
2026-02-25 The Register AI

Anthropic and the Pentagon: a tug-of-war over military use of AI

The US Department of Defense is asking Anthropic to ease restrictions on the military use of its AI technology. Recent changes to the company's safety policy suggest greater flexibility, opening a debate on ethics and control in the application of ar...

#LLM On-Premise #DevOps
2026-02-24 The Register AI

AI has gotten good at finding bugs, not so good at swatting them

Anthropic last week talked up Claude Code's improved ability to find software vulnerabilities and propose patches. But security researchers say that's not enough: discovery is getting cheaper, but validation and patching aren’t.

#LLM On-Premise #DevOps
2026-02-24 TechCrunch AI

Anthropic and Pentagon: AI Dispute Escalates, Deliveries at Risk?

The Pentagon has given Anthropic until Friday to loosen the security constraints on its AI, or face penalties. The dispute raises questions about government leverage, vendor dependence, and investor confidence in the defense tech sector.

#LLM On-Premise #DevOps
2026-02-24 Anthropic News

Anthropic’s Responsible Scaling Policy: Version 3.0

Anthropic has released Version 3.0 of its Responsible Scaling Policy. This update reflects the company's ongoing commitment to the safe and responsible development of artificial intelligence. The policy aims to mitigate the risks associated with incr...

2026-02-24 TechCrunch AI

Anthropic launches new push for enterprise agents with plugins

Anthropic intensifies competition in the enterprise market, offering targeted plugins for sectors such as finance, engineering, and design. This move represents a direct challenge to existing SaaS products and an opportunity for Anthropic to expand i...

2026-02-24 LocalLLaMA

Claude Sonnet-4.6 identifies as DeepSeek-V3 when prompted

A user discovered that Claude Sonnet-4.6, when prompted in Chinese, incorrectly identifies itself as the DeepSeek-V3 model. The phenomenon was documented on X and discussed on Reddit, raising questions about the internal architecture and identificati...

#LLM On-Premise #DevOps
2026-02-24 Tom's Hardware

Anthropic AI rewrites COBOL: IBM stock plunges

Anthropic's new AI tool, capable of rewriting 67-year-old COBOL code, has triggered a sharp decline in IBM's stock, marking its worst performance in 26 years. Mainframes remain a critical infrastructure for many enterprises.

#LLM On-Premise #DevOps
2026-02-23 LocalLLaMA

Anthropic under fire for alleged intellectual property violation

An image posted on Reddit raises questions about Anthropic's ethics around the use of others' intellectual property, an irony given its role in developing language models. The discussion revolves around the use of copyrighted data in LLM training and the rela...

#LLM On-Premise #DevOps
2026-02-23 Tom's Hardware

Anthropic accuses DeepSeek of 'industrial-scale' copying

Anthropic has accused DeepSeek and other Chinese AI developers of 'industrial-scale' copying of its models. The company claims the 'distillation' process included the use of 24,000 fraudulent accounts and 16 million data exchanges to train smaller mo...

#LLM On-Premise #DevOps
2026-02-23 LocalLLaMA

Anthropic has never open-sourced any LLMs: implications

A user noted that Anthropic has never open-sourced the tokenizers for its language models (LLMs), unlike Google (Gemma, Gemini), OpenAI (GPT), and Meta (Llama). This limits the ability to analyze the efficiency of Anthropic's tokenizers, an important...

#LLM On-Premise #DevOps
2026-02-23 LocalLLaMA

Anthropic accuses Chinese labs of unfair practices

A Reddit post raises concerns about alleged unfair practices attributed to Chinese labs in the context of large language model (LLM) development. Anthropic appears to be alleging unethical behavior by these labs, sparking a debate in the open-source community.
