Google to Invest Up to $40 Billion in Anthropic at $350B Valuation — Adds 1M Ironwood TPUs and 5 GW of Compute (April 2026)
Alphabet will commit $10 billion now and a further $30 billion if performance targets are hit, deepening Google's bet on its biggest model rival. The deal adds 5 gigawatts of Google Cloud capacity and access to up to one million seventh-generation Ironwood TPUs.
Google parent Alphabet on April 24 announced a deal to invest up to $40 billion in Anthropic, with $10 billion committed immediately at a $350 billion post-money valuation and another $30 billion to follow if Anthropic hits unspecified performance milestones. The agreement also gives Anthropic access to 5 gigawatts of new Google Cloud capacity over the next five years, including up to one million seventh-generation Ironwood TPU chips, according to reports first published by Bloomberg.
What Happened
Google's commitment is structured in two tranches. The first tranche, $10 billion in cash and compute credits, closes immediately at a $350 billion valuation — the same valuation Anthropic used in its February 2026 round, despite secondary-market trades since reportedly clearing two to three times that level. The second tranche, up to $30 billion more, is contingent on Anthropic meeting performance and revenue milestones that neither company has publicly disclosed.
The compute side of the deal is the part Anthropic emphasised. Google Cloud will add 5 GW of capacity for Anthropic over five years, on top of the 3.5 GW Broadcom-Google TPU partnership Anthropic announced earlier this month. The new capacity will be served largely on Google's seventh-generation Ironwood TPU, which Google made generally available on April 22 and which Doolpa covered here. Anthropic CFO Krishna Rao said in a statement that the partnership gives the company "the compute headroom we need to keep training frontier models without throttling capacity for paying customers."
Key Details
- Initial cheque: $10B at $350B valuation. Same valuation as Anthropic's February 2026 priced round, despite reports of secondary trades implying $700B+.
- Performance-tied: $30B more. Released in tranches if Anthropic hits internal revenue and model-performance targets that neither party has disclosed.
- 5 GW of Google Cloud capacity. Layered on top of the existing 3.5 GW TPU agreement, taking Anthropic's contracted Google capacity past 8 GW once both deals are fully delivered.
- Up to 1M Ironwood TPUs. Google's seventh-generation chip, which became generally available on April 22 and which Google now positions as a direct alternative to Nvidia GB200/GB300 systems.
- Multi-cloud, not exclusive. The deal is non-exclusive; Anthropic separately disclosed an additional $5 billion commitment from Amazon earlier this month and a multi-billion-dollar CoreWeave capacity deal.
What Developers and Users Are Saying
Reaction on Hacker News was mostly skeptical. The top-voted comment notes that "Google and AWS are already over-subscribed waiting for new data centres to come online — these deals don't change total AI compute available over the next 18 months, they just redistribute who has dibs on it once it ships." A second highly upvoted thread points out the valuation gap: "On the secondary market the valuation is 2-3× the $350B sticker price. Google is buying in cheap because Anthropic needs the chips, not the cash."
On r/MachineLearning and r/singularity, the dominant framing is that Anthropic is now functionally locked into the Google Cloud + TPU stack — the cost of porting model training and inference off TPUs to Nvidia or AWS Trainium has grown with each successive generation of internal tooling. Several Anthropic researchers, including Sam Bowman, posted on X downplaying the lock-in concern and emphasising that the deal is "strictly capacity, not exclusivity." Customer-facing developers, meanwhile, are mostly hoping the new capacity finally relieves the rate-limit complaints that have dominated Anthropic's status page over the past month.
What This Means for Developers
For teams building on Claude, the practical near-term effect is more headroom against rate limits and fewer 529 errors. Anthropic's API has been visibly capacity-constrained since the Claude Mythos limited release in late March, and the new GW commitments — plus the existing CoreWeave and Amazon deals — are explicitly aimed at that bottleneck. Developers should expect higher token-per-minute ceilings on the API and Bedrock over the coming quarters, though Anthropic has not committed to specific changes yet.
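Until those higher ceilings actually land, the usual client-side mitigation is exponential backoff on retryable statuses. A minimal sketch — the helper name and status-code set are illustrative, not part of Anthropic's SDK, which ships its own retry handling:

```python
import random
import time

# Status codes worth retrying: 429 (rate limited) and 529 (overloaded).
RETRYABLE = {429, 529}

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call request_fn() -> (status, body); back off exponentially on retryable statuses.

    Uses "full jitter": each wait is uniform in [0, base_delay * 2**attempt],
    which spreads retries out instead of synchronising every stalled client.
    """
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        if status not in RETRYABLE:
            return status, body
        if attempt == max_retries:
            break
        sleep(random.uniform(0, base_delay * (2 ** attempt)))
    return status, body
```

Injecting `sleep` as a parameter keeps the loop testable; in production you would leave the default and cap `max_retries` against your request timeout budget.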
Strategically, the deal cements Google as the primary infrastructure provider behind every frontier-lab competitor to Gemini except OpenAI. Builders who target portability across Claude, Gemini, and OpenAI APIs through routers like OpenRouter or LiteLLM are unaffected; teams that have locked their stack to a single provider should at least know which TPU generation their inference is running on.
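The portability point can be made concrete with a thin routing layer: application code asks for a model by id and never hard-codes a vendor endpoint. A hypothetical sketch (the prefix-to-endpoint table is an assumption for illustration, not how OpenRouter or LiteLLM are implemented):

```python
# Map model-id prefixes to provider base endpoints. Swapping or adding a
# provider is a one-line change here rather than a refactor of call sites.
PROVIDERS = {
    "claude": "https://api.anthropic.com/v1/messages",
    "gemini": "https://generativelanguage.googleapis.com/v1beta",
    "gpt": "https://api.openai.com/v1/chat/completions",
}

def route(model_id: str) -> str:
    """Return the provider endpoint for an id like 'claude-3-opus' or 'gpt-4o'."""
    for prefix, endpoint in PROVIDERS.items():
        if model_id.startswith(prefix):
            return endpoint
    raise ValueError(f"no provider registered for {model_id!r}")
```

Real routers also normalise request and response schemas across providers; the endpoint lookup above is only the first, simplest layer of that abstraction.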
What's Next
The first tranche of capital is expected to close in Q2 2026 once regulatory review clears. The Ironwood TPU deployments will roll out in stages through 2030 as Google brings new data-centre regions online — the company has flagged sites in Iowa, Belgium, and Singapore as priority builds for AI workloads. Anthropic's next major model release, expected mid-2026, will be the first frontier model trained primarily on Ironwood silicon end-to-end. Watch the Anthropic news page and Google Cloud blog for the formal closing announcement.
Sources
- Bloomberg — broke the $40B/$350B valuation structure on April 24, 2026.
- TechCrunch — confirmed the cash + compute split and CoreWeave/Amazon context.
- CNBC — reported the 5 GW capacity figure and Google's spread-bet framing.
- Axios — placed the deal in the broader Big Tech AI infrastructure context.
- Hacker News thread — developer and infrastructure analyst reactions.
- PYMNTS — cumulative Google-into-Anthropic investment running total.
Stay up to date with Doolpa
Subscribe to Newsletter →