Wired

Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

In response to Anthropic’s lawsuit, the government said it lawfully penalized the company for trying to limit how the military could use its Claude AI models.

Read the full article at Wired.
