News

Anthropic’s AI Safety Level 3 protections add a filter and limited outbound traffic to prevent anyone from stealing the ...
Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January.
The internet freaked out after Anthropic revealed that Claude attempts to report “immoral” activity to authorities under ...
They reportedly handle classified material, "refuse less" when engaging with classified information, and are customized to ...
Anthropic has announced the release of a new set of AI models specifically designed for use by US national security agencies.
Anthropic announced Thursday that it is releasing Claude Gov to U.S. national security customers, an exclusive set of ...
Anthropic, which released Claude Opus 4 and Sonnet 4 last week, noted in its safety report that the chatbot was capable of ...
Anthropic released Claude Opus 4 and Sonnet 4, the newest versions of its Claude series of LLMs. Both models support ...
A proposed 10-year ban on states regulating AI 'is far too blunt an instrument,' Amodei wrote in an op-ed. Here's why.
Malicious use is one thing, but there's also increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Anthropic’s new Claude Opus 4 often turned to blackmail ... attempt blackmail to preserve its existence. In a new safety report for the model, the company said that Claude 4 Opus “generally ...
Reddit’s lawsuit, which was filed Wednesday, accuses the Amazon-backed AI company of breaching its user agreement. “As far ...