Credited from: DAWN
US tech company Anthropic has officially barred Chinese-controlled companies and organizations from accessing its artificial intelligence services, citing security concerns and legal frameworks in what it called "authoritarian regions." The San Francisco-based firm, known for its Claude chatbot and heavily backed by Amazon, noted that firms in countries such as China, Russia, North Korea, and Iran already face restrictions on national security grounds, according to Dawn and India Times.
In a recent update to its terms of service, Anthropic specified that companies controlled from these jurisdictions cannot use its services, regardless of where they operate. The company said a previous loophole had allowed such entities to access its offerings through subsidiaries incorporated outside the restricted areas; the update is intended to close that gap, as indicated in its official post, according to TRT Global and India Times.
The prohibition represents a significant shift in strategy for a leading US AI provider. Nicholas Cook, a legal expert specializing in the AI sector, estimated the revenue impact in the "low hundreds of millions of dollars," suggesting the actual financial hit could be limited since US AI firms already face barriers in these markets. Nonetheless, the move could prompt other US tech companies to consider adopting similar policies, according to Dawn, TRT Global, and India Times.
Despite the bans, some users in China reportedly still access US generative AI platforms such as ChatGPT and Claude via VPNs, underscoring the difficulty of enforcing such restrictions in the digital age. Separately, Anthropic recently raised $13 billion in funding and reported a substantial increase in business customers, signaling a robust growth trajectory amid the controversy, according to Dawn and TRT Global.