Credited from: NPR
A federal judge has blocked the Trump administration's effort to label Anthropic, a prominent AI firm, a "supply chain risk," a designation that would have cut off the company's federal contracts. U.S. District Judge Rita Lin ruled that the government's actions were "likely both contrary to law and arbitrary and capricious," granting Anthropic's request for a preliminary injunction to halt enforcement of the designation, which was aimed at restricting federal agencies' use of the company's AI technology, Claude, according to CBS News and Le Monde.
Judge Lin emphasized in her decision that the designation not only barred the Pentagon from using Anthropic's technology but also required defense contractors to avoid the company's products altogether, a reach she found troubling. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she stated, highlighting First Amendment concerns, as reported by BBC and NPR.
This legal dispute arose after Anthropic voiced concerns over the military's potential use of its technology for purposes that could include mass surveillance and autonomous weapons. The Pentagon had previously praised Anthropic as a strategic partner but responded defensively when the company publicly raised these issues, leading to retaliatory measures, according to Le Monde and CBS News.
Following the ruling, Anthropic's CEO, Dario Amodei, expressed gratitude for the decision, emphasizing that the company aimed to maintain a constructive relationship with the government while advocating for responsible AI usage. The government, whose measures included a directive from President Trump that agencies immediately stop using Anthropic's technology, can still appeal; the Pentagon has a seven-day window to respond, as highlighted by BBC and NPR.