Anthropic Partners with Google and Broadcom for Gigawatts of AI Compute

Anthropic has officially announced a major expansion of its compute infrastructure through a new agreement with Google and Broadcom. The deal covers multiple gigawatts of next-generation TPU (Tensor Processing Unit) capacity, which will be used to train and run its frontier Claude models at scale.

The majority of the new compute will be sited in the United States, making this partnership a major expansion of Anthropic's November 2025 commitment to invest $50 billion in strengthening American computing infrastructure.

This is Anthropic’s most significant compute commitment to date, according to the company’s own CFO.

What Is This Deal, Exactly?

AI models like Claude don't run on regular computers. They require massive, specialized chips, and access to those chips at scale is one of the biggest bottlenecks in AI development today.

TPUs (Tensor Processing Units) are Google’s custom-built AI chips, designed specifically for the kind of heavy mathematical work that training large language models requires. By locking in multiple gigawatts of TPU capacity with Google and Broadcom, Anthropic is essentially reserving a massive slice of future AI computing power before demand makes it even scarcer.

The partnership deepens Anthropic’s existing work with Google Cloud, building on increased TPU capacity announced last October, as well as its ongoing relationship with Broadcom. 

Anthropic trains and runs Claude on a range of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs, which means it can match workloads to the chips best suited for them.

Think of it like an airline buying fuel years in advance. The price and availability are locked in, so turbulence in the supply chain doesn’t ground the operation.

Anthropic's run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. When Anthropic announced its Series G fundraising in February, over 500 business customers were each spending over $1 million annually. Today that number exceeds 1,000, having doubled in less than two months.

That kind of growth curve demands infrastructure that simply doesn't exist yet, which is exactly what this deal is designed to build.

“We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”

— Krishna Rao, CFO of Anthropic, via Anthropic

What This Means for Claude Users

For businesses and developers building on Claude, this deal translates to reliability at scale. More compute means faster inference, lower latency, and the capacity to handle the kinds of workloads that today only the largest enterprises trust to AI.

The new capacity coming online in 2027 positions Claude to serve those use cases without the bottlenecks that constrain smaller infrastructure footprints.

The Anthropic, Google, and Broadcom compute partnership is about the versions of Claude that will define what enterprise AI looks like in 2027 and beyond, and about making sure the infrastructure is ready before the demand arrives, not after.
