News

Anthropic’s AI Safety Level 3 protections add a filter and limited outbound traffic to prevent anyone from stealing the ...
Anthropic has long warned about these risks—so much so that in 2023, the company pledged not to release certain models ...
Anthropic unveils Claude Gov, a customized AI tool for U.S. intelligence and defense agencies, amid growing government ...
The internet freaked out after Anthropic revealed that Claude attempts to report “immoral” activity to authorities under ...
They reportedly handle classified material, “refuse less” when engaging with classified information, and are customized to ...
Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior Anthropic model. Those measures—known internally as AI Safety Level 3 or “ASL-3”—are appropriate to constrain ...
A third-party research institute Anthropic partnered with to test Claude Opus 4 recommended against deploying an early ...
Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
Claude Sonnet 4 is a leaner model, with improvements built on Anthropic’s Claude 3.7 Sonnet model. The 3.7 model often had ...
Anthropic released Claude Opus 4 and Claude Sonnet 4, the newest versions of its Claude series of LLMs. Both models support ...
In a fictional scenario set up to test Claude Opus 4, the model often resorted to blackmail when threatened with being ...
A proposed 10-year ban on states regulating AI “is far too blunt an instrument,” Amodei wrote in an op-ed. Here’s why.