Anthropic’s 3.5 GW Google TPU Deal: 7 Things to Know About the 2027 Compute Bet
Anthropic signed a three-way agreement with Google and Broadcom for approximately 3.5 gigawatts of next-generation Google TPU capacity coming online from 2027, on top of the 1 GW already deploying in 2026. The deal coincides with Anthropic’s run-rate revenue passing $30 billion, up from $9 billion at the end of 2025 — but Broadcom’s SEC filing quietly notes that consumption of the expanded capacity is conditional on Anthropic’s continued commercial success.
On April 6, 2026, Broadcom filed a one-page 8-K with the U.S. Securities and Exchange Commission. The language was routine. The substance was not. Inside that single filing, three of the most important companies in the AI infrastructure stack confirmed a deal whose scale was, until recently, unimaginable: Anthropic will gain access to roughly 3.5 gigawatts of custom Google TPU compute, designed and supplied by Broadcom, starting in 2027. Broadcom has been Google’s behind-the-scenes silicon partner since 2016, quietly designing TPUs while Google handled the commercial front, and this agreement formalises that relationship for the next six years.
For context: Broadcom CEO Hock Tan had noted on an earnings call just last month that Anthropic was consuming around one gigawatt of compute in 2026. The new commitment alone is roughly 3.5× that figure, and the combined total takes Anthropic to about 4.5× its current capacity. To put 3.5 GW into a unit you can feel: that is enough electricity to power a mid-sized European country, dedicated entirely to running and training Claude.
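The multiples and the country comparison are easy to sanity-check. A quick back-of-envelope sketch (the ~30 TWh/year reference point for a country like Ireland is our own, not from the filing):

```python
GW_2026 = 1.0   # capacity already deploying in 2026
GW_NEW = 3.5    # new commitment coming online from 2027

# The new commitment alone vs. current consumption
new_multiple = GW_NEW / GW_2026                 # 3.5x
# Total contracted capacity vs. current consumption
total_multiple = (GW_2026 + GW_NEW) / GW_2026   # 4.5x

# 3.5 GW of continuous draw over a year, in terawatt-hours
twh_per_year = GW_NEW * 8760 / 1000             # ~30.7 TWh
# Ireland's annual electricity consumption is on the order of 30 TWh,
# which is roughly where the "mid-sized European country" comparison lands.
print(new_multiple, total_multiple, round(twh_per_year, 1))
```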
This article unpacks what the deal actually contains, the financial caveat almost no one quoted, and what it tells us about how the frontier AI race will be fought from 2027 onward. For background on Anthropic's broader infrastructure footprint, our coverage of Project Glasswing and the Capybara/Mythos model roadmap fills in the picture.
What exactly did Anthropic, Google and Broadcom announce?
The 8-K bundled two distinct items into one regulatory event. First: a “Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units (“TPUs”) for Google’s future generations of TPUs.” Second: an expanded three-way arrangement under which Anthropic gains access to roughly 3.5 GW of those TPUs, routed through Broadcom as the implementation partner.
The structure is unusual. Google designs the TPU architecture and owns the commercial cloud relationship. Broadcom handles the silicon implementation, the networking, and the rack-level integration. Anthropic is the end customer, training and serving Claude on the resulting hardware. The vast majority of the new infrastructure will be located in the United States, extending the $50 billion American AI infrastructure commitment Anthropic made in November 2025.
| Component | Details |
|---|---|
| Capacity | ~3.5 GW of next-generation TPU compute |
| Already in flight | 1 GW coming online in 2026 (announced October 2025) |
| Total potential | Up to ~4.5 GW combined |
| Start date | 2027 |
| Broadcom–Google supply agreement | Runs through 2031 |
| Geography | Majority in the United States |
| Estimated Broadcom AI revenue from Anthropic | $21B in 2026, $42B in 2027 (Mizuho estimate) |
Taken together, the two commitments give Anthropic access to up to roughly 4.5 GW of next-generation TPU capacity, a massive expansion of the compute available for frontier Claude models.
Why does 3.5 gigawatts matter for frontier AI?
Gigawatts have quietly replaced parameter counts as the unit of frontier-AI ambition. A 3.5 GW commitment is not a procurement decision — it is an industrial-policy decision. It locks in years of grid capacity, substation construction, water rights, and skilled-labour pipelines. Once the contract is signed, the constraint moves from “can we afford the chips?” to “can the local power utility deliver on time?”
For Anthropic specifically, the new capacity has three immediate consequences. First, it hedges against single-vendor risk: Claude will continue to run across AWS Trainium, Google TPUs, and NVIDIA GPUs, which lets Anthropic match workloads to the chips best suited for them. Second, it creates the headroom to train successors to Claude Opus 4.6 and Sonnet 4.6 without throttling inference for paying customers. Third, it gives Anthropic a credible pitch to enterprise buyers worried about whether their AI provider will still have capacity in 18 months.
If you build on the Anthropic API, the 2027 capacity is the answer to a question you may not have asked yet: “what happens when my workload doubles every six months?” The honest answer until April 2026 was “hope”. The honest answer now is “Anthropic has pre-committed to 4.5× the compute it has today”. That changes how you plan multi-year roadmaps that depend on Claude.
How can Anthropic justify a multi-gigawatt commitment financially?
The deal would look reckless without the revenue numbers Anthropic released the same day. Annualized run-rate revenue now stands at $30 billion, up from the $9 billion recorded at the end of 2025. That is a 3.3× increase in roughly four months, a growth curve that, if it persists, makes the 2027 capacity look conservative rather than aggressive.
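As a back-of-envelope check on that growth curve (the four-month gap between the two disclosures is an assumption the announcements imply but do not state precisely):

```python
# Run-rate revenue, in billions of USD
rev_end_2025 = 9.0
rev_apr_2026 = 30.0
months = 4  # assumed gap between the two disclosures

growth_multiple = rev_apr_2026 / rev_end_2025       # ~3.33x
monthly_rate = growth_multiple ** (1 / months) - 1  # implied compound monthly growth, ~35%

# If (implausibly) sustained for a full twelve months,
# the same pace would compound to roughly 37x:
one_year_multiple = (1 + monthly_rate) ** 12

print(round(growth_multiple, 2), round(monthly_rate, 3), round(one_year_multiple, 1))
```

No one expects 35% month-over-month growth to hold for a year, but the arithmetic shows why a 4.5× capacity pre-buy is defensible against even a sharply decelerated version of this curve.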
The customer concentration story is equally striking. In Anthropic's words: “When we announced our Series G fundraising in February, we shared that over 500 business customers were each spending over $1 million on an annualized basis. Today that number exceeds 1,000, doubling in less than two months.” Doubling million-dollar contracts in eight weeks is the kind of metric that makes infrastructure pre-buys defensible to a board.
It also helps explain the analyst projections. Analysts at Mizuho, led by Vijay Rakesh, estimated that Broadcom would record $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. Those numbers, if they hold, would make Anthropic one of Broadcom’s largest customers full stop — not just in AI.
What is the one sentence in the SEC filing that everyone missed?
This is the part of the story the press release does not lead with. Buried in the same 8-K that announced the 3.5 GW commitment, Broadcom included a sentence that pushes back gently against the headline number: consumption of the expanded capacity, it notes, depends on Anthropic's continued commercial success, and discussions with the operational and financial partners needed to fund the build-out are still underway.
Two things are happening in that sentence. First, Broadcom is signalling that the 3.5 GW is a ceiling, not a floor: Anthropic has to keep growing to actually draw down all of it. Second, the financing is not yet in place. That is a public company stating, in a regulatory document, that the financial architecture behind a multi-gigawatt deployment is not yet fully settled. It does not undercut the deal, but it does mean the 3.5 GW figure is conditional, not guaranteed.
This is consistent with Broadcom's broader posture. CEO Hock Tan recently argued that hyperscalers lack the in-house skills to build competitive custom accelerators on their own, and predicted that Broadcom's chip business would consequently win over $100 billion of AI silicon revenue in 2027 alone. A company chasing a target that size needs to be seen taking large bets, but it also needs to insulate itself if any one of those bets stalls.
The 3.5 GW number is not a guarantee. It is a cap that Anthropic can reach if revenue keeps compounding and if the financing partners come together. Treat it the way you would treat a venture term sheet, not a delivered contract.
Why is Anthropic running Claude on three different chip platforms?
Anthropic is the only frontier lab actively training and serving on three distinct silicon stacks: AWS Trainium, Google TPUs, and NVIDIA GPUs. The TPU expansion does not change that; it deepens it. Amazon Web Services, notably, remains Anthropic's primary cloud and training partner even as the Google footprint grows.
The strategic logic is straightforward. Different workloads have different optimal hardware. TPUs excel at large dense matrix multiplications and have unusually fast inter-chip interconnects, which suits training runs at the largest scale. NVIDIA GPUs dominate when you need flexible kernel programming and the largest available software ecosystem. Trainium is increasingly competitive on cost-per-token for inference. Owning capacity across all three lets Anthropic route each workload to the chip where it is cheapest or fastest.
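The routing logic described above can be sketched as a toy cost table. The three platforms are real, but the workload categories and per-unit cost scores below are hypothetical placeholders for illustration, not Anthropic's actual numbers:

```python
# Hypothetical relative cost/fit scores per (workload, platform).
# Lower is better. All values are illustrative only.
COST = {
    ("large_scale_training", "tpu"):      1.0,  # fast interconnects suit dense training
    ("large_scale_training", "gpu"):      1.3,
    ("large_scale_training", "trainium"): 1.4,
    ("research_experiment",  "tpu"):      1.2,
    ("research_experiment",  "gpu"):      1.0,  # flexible kernels, largest ecosystem
    ("research_experiment",  "trainium"): 1.5,
    ("inference",            "tpu"):      1.1,
    ("inference",            "gpu"):      1.2,
    ("inference",            "trainium"): 1.0,  # competitive cost-per-token
}

def route(workload: str) -> str:
    """Pick the platform with the lowest cost score for a given workload."""
    candidates = {p: c for (w, p), c in COST.items() if w == workload}
    return min(candidates, key=candidates.get)

print(route("large_scale_training"))  # tpu
print(route("inference"))             # trainium
```

The design point is that owning capacity on all three platforms turns chip choice into a per-workload optimization rather than a one-time vendor bet.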
The defensive logic is equally important. Single-vendor dependency in AI silicon is now a board-level risk. The Pentagon’s recent designation of Anthropic as a supply-chain risk — discussed in our piece on Anthropic’s third-party tool restrictions — only reinforces the case for hardware diversification. If any one supplier had a delivery slip, a regulatory dispute, or an export-control issue, Claude availability would be at risk for hundreds of enterprise customers paying seven figures a year.
What does this mean for Broadcom’s role in the AI stack?
Broadcom is quietly becoming the implementation layer for two of the three largest U.S. frontier model developers. The same division of labour underpins its separate $10 billion custom silicon program with OpenAI, announced as a 10 GW co-development effort last October. Google designs, Broadcom builds, the AI labs consume, and Broadcom collects margin on every gigawatt that ships.
The new long-term agreement with Google extends this further. It covers not just future TPU generations but networking, optical interconnects, and the rack-level components that turn individual chips into training clusters. Broadcom is not just a chip designer in this arrangement. It is becoming a full-stack infrastructure partner, supplying the interconnects and components that tie these systems together at scale.
What should you watch over the next 18 months?
The 2027 deployment date is far enough away that several things can still go right or wrong. Three signals worth tracking:
1. Anthropic’s run-rate revenue trajectory. The 3.5 GW figure is conditional on continued commercial success. If the run-rate goes from $30B to $50B by year-end, the deal looks small. If it stalls at $35B, the conditional clause starts to matter.
2. The financial partners. Broadcom’s filing references operational and financial partners still in discussions. Watch for announcements involving sovereign wealth funds, infrastructure-focused private equity, or REIT-style data center vehicles. The structure of the financing will tell you how much risk Broadcom and Anthropic are willing to retain on their own balance sheets.
3. Power and grid commitments. 3.5 GW is not a chip story — it is a substation story. The U.S. utility filings around the chosen sites will show up months before any silicon is racked. If you see new 500 kV transmission applications in Texas, Virginia, or Iowa from utilities serving Google or Anthropic data centers, that is the deal becoming concrete.
How does this fit into the broader 2026 AI regulatory landscape?
The deal lands in the middle of a complicated regulatory year. The EU AI Act is moving into its full enforcement window for general-purpose models, and U.S. export controls on AI chips remain in flux. By concentrating the build-out in the United States, Anthropic and Google are betting that domestic capacity will be the safest political ground regardless of which way export rules tilt.
There is also a competitive-dynamics angle. With OpenAI separately committed to 6 GW of AMD GPU capacity (first gigawatt expected in the second half of this year) on top of its own Broadcom co-development, the frontier is now defined by who has secured the most pre-built power and silicon for 2027–2028. AI capability is starting to look less like a software race and more like a heavy-industry race.
What is the one-paragraph summary?
Anthropic has signed the largest single compute commitment in its history: roughly 3.5 GW of next-generation Google TPUs, designed by Google, implemented by Broadcom, and consumed by Claude starting in 2027. The deal is justified by a $30 billion revenue run rate and more than 1,000 million-dollar customers. It is hedged by a quiet SEC clause making the full 3.5 GW conditional on Anthropic continuing to grow and on financing partners coming together. And it sits inside a multi-platform strategy that keeps Claude running on AWS Trainium, NVIDIA GPUs, and Google TPUs simultaneously — because in 2026, the biggest risk to a frontier AI company is not picking the wrong chip, it is being unable to ship at all.
FAQ
How big is the Anthropic Google TPU deal exactly?
Anthropic gains access to approximately 3.5 gigawatts of next-generation Google TPU capacity coming online from 2027, on top of the 1 GW already deploying in 2026 — for a combined potential of up to ~4.5 GW. The agreement was disclosed in a Broadcom SEC 8-K filing on April 6, 2026.
Why is Broadcom involved in an Anthropic–Google deal?
Broadcom has been Google’s silent silicon partner on TPUs since 2016. Google designs the TPU architecture and owns the cloud relationship; Broadcom implements the chips, the networking, and the rack-level integration. In the new arrangement, Broadcom is the implementation layer that physically delivers the 3.5 GW of capacity Anthropic will consume.
Is the 3.5 GW figure guaranteed?
No. Broadcom’s SEC filing explicitly states that consumption of the expanded capacity depends on Anthropic’s continued commercial success, and that financing discussions with operational and financial partners are still underway. The 3.5 GW is best understood as a conditional ceiling, not a delivered contract.
What is Anthropic’s current revenue?
Anthropic disclosed on April 7, 2026 that its annualized run-rate revenue had passed $30 billion, up from approximately $9 billion at the end of 2025. The company also reported more than 1,000 business customers each spending over $1 million per year on its services — double the figure from February 2026.
Is Anthropic abandoning AWS or NVIDIA for Google TPUs?
No. Anthropic continues to train and serve Claude across AWS Trainium, Google TPUs, and NVIDIA GPUs simultaneously. AWS remains its primary cloud and training partner, and Project Rainier with AWS continues. The Google TPU expansion adds capacity rather than replacing existing platforms.
How much revenue could Broadcom earn from this deal?
Mizuho analysts led by Vijay Rakesh estimated, following Broadcom’s March 2026 earnings call, that Broadcom could record approximately $21 billion in AI revenue from the Anthropic relationship in 2026, rising to roughly $42 billion in 2027. The SEC filing itself did not contain specific dollar amounts.
Where will the new compute be located?
The vast majority of the new infrastructure will be built in the United States. Anthropic frames the deal as a continuation of its November 2025 commitment to invest $50 billion in American AI infrastructure. Specific data center sites have not been publicly disclosed.
Sources & further reading
- Anthropic — Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute (official announcement, April 7, 2026)
- Broadcom — SEC Form 8-K filing, April 6, 2026 (long-term TPU supply agreement and Anthropic deployment)
- Tom’s Hardware — Broadcom to supply Anthropic with 3.5 gigawatts of Google TPU capacity from 2027
- The Register — Anthropic reveals $30bn run rate, plan to use new Google TPU
- TechCrunch — Anthropic ups compute deal with Google and Broadcom amid skyrocketing demand
- TechWire Asia — Broadcom’s custom AI chips deal with Google and Anthropic explained
- European Commission — EU AI Act regulatory framework