OpenAI Shifts to Google’s AI Chips, Reducing Dependency on Nvidia - PRESS AI WORLD

Published: Saturday, June 28 | Updated: Saturday, June 28

Credit: Reuters

  • OpenAI has begun using Google’s AI chips to power its products, including ChatGPT.
  • The arrangement marks a shift away from Nvidia, OpenAI’s primary chip supplier.
  • Google’s tensor processing units (TPUs) are being used to cut inference costs.
  • Google is expanding external access to its TPUs, which were previously reserved for internal use.
  • The move is part of OpenAI’s strategy to diversify its chip suppliers.

OpenAI has recently begun renting Google’s artificial intelligence chips to power ChatGPT and other products, marking a significant shift in its technology strategy. The move is OpenAI’s first major use of non-Nvidia chips and is seen as an effort to reduce its reliance on Nvidia, one of its largest chip suppliers. It also reflects a broader push by OpenAI to diversify its chip supply, after years of depending on Nvidia’s graphics processing units (GPUs) for both training and inference computing, according to Channel News Asia and Reuters.

The collaboration with Google follows reports that OpenAI planned to adopt Google Cloud to support its growing demand for computing capacity. The partnership is notable because it brings together two direct rivals in the AI sector, with Google supplying infrastructure to OpenAI even as the two companies compete for the same users, according to Reuters and India Times.

Google is expanding external availability of its tensor processing units (TPUs), which were previously reserved for internal use. The initiative has already attracted notable clients such as Apple, as well as Anthropic and Safe Superintelligence, rival AI startups founded by former OpenAI leaders. The approach reflects Google’s strategy of leveraging its proprietary hardware to grow its Cloud business, according to Channel News Asia and India Times.

OpenAI’s pivot to Google’s TPUs suggests it aims to lower the cost of inference, the stage in which a trained AI model applies what it has learned to new inputs. Notably, however, Google is not providing OpenAI with its most powerful TPUs, a reflection of the competitive tension between the two companies, according to Reuters, Channel News Asia, and India Times.
