Google Locks In Thinking Machines Lab With New Multi-Billion-Dollar Cloud Deal (April 2026)
Mira Murati's Thinking Machines Lab has signed a new multi-billion-dollar deal with Google Cloud for AI infrastructure powered by Nvidia's GB300 chips. The expanded tie-up follows a $2B seed and a separate Nvidia compute commitment — and Google's stock rallied on the news.
Former OpenAI CTO Mira Murati's startup Thinking Machines Lab has signed a new multi-billion-dollar agreement with Google Cloud for AI infrastructure powered by Nvidia's next-generation GB300 chips, TechCrunch reported on April 22, 2026. The deal, valued in the single-digit billions, makes Thinking Machines one of the first customers on Google's GB300-powered systems and sends a clear signal that hyperscalers are now racing to lock in frontier AI labs at almost any price.
What Happened
Google Cloud confirmed the partnership to TechCrunch in an exclusive, describing a multi-year commitment that expands Thinking Machines' existing use of Google's infrastructure. The arrangement is not exclusive — Thinking Machines may continue to buy capacity from other providers — but it is the second major compute deal the lab has signed in seven weeks, following a gigawatt-scale Nvidia compute commitment on March 10, 2026.
According to Google, Thinking Machines will use GB300-powered AI supercomputers, Google Kubernetes Engine, Cloud Storage and Spanner to train and serve its models. "Google Cloud got us running at record speed with the reliability we demand," said founding researcher Myle Ott in a statement shared with TechCrunch. Google said its GB300 systems deliver a 2× speed-up on both training and inference compared with the prior-generation GPUs.
Key Details
- Deal size: "Single-digit billions" of dollars, per TechCrunch's sources — exact figure undisclosed, multi-year term.
- Silicon: First-wave access to Google Cloud's Nvidia GB300-powered clusters, plus Spanner, GKE and Cloud Storage.
- Not exclusive: Thinking Machines retains the right to use other clouds, continuing the multi-vendor posture established with the March 2026 Nvidia commitment.
- Valuation context: Thinking Machines raised a $2 billion seed led by a16z at a $12 billion valuation in July 2025; TechFundingNews reported in early 2026 that a new round is being discussed near a $50 billion valuation.
- Product: Tinker, launched in October 2025, automates custom frontier-model training via reinforcement learning — a workload Google specifically highlighted its infrastructure can support.
- Market reaction: Alphabet shares rallied in intraday trading after the news, with investors reading the deal as validation of Google's AI-infrastructure strategy.
What Developers and Users Are Saying
Reaction in developer circles has been mixed. Infrastructure engineers on X/Twitter note that Thinking Machines is now running production reinforcement-learning workloads across Nvidia GB300 systems on Google Cloud, Google TPUs (implicitly, through the broader Cloud relationship), and the Nvidia B200/H200 systems from its March deal, a logistics footprint that only a handful of labs can coordinate. Critics on Hacker News and r/MachineLearning point out that Thinking Machines has shipped only one public product (Tinker) since launching in February 2025, and that April's separate news that Meta reportedly poached five founding researchers, one with a reported $1.5B package, raises questions about talent stability even as the compute bill keeps climbing.
The bull case, shared widely on Techmeme, is simpler: every frontier lab that locks in GB300 capacity today is removing it from the market for everyone else, and Murati's team has now secured guaranteed Nvidia supply and preferred pricing inside a hyperscaler. That is an extremely rare combination.
What This Means for Developers
For developers building on top of Thinking Machines' Tinker API, the practical effect is more capacity and lower latency. Google said GB300 systems will deliver roughly 2× faster training and serving than the previous generation, which should translate directly into shorter fine-tuning runs and cheaper inference on custom models. For developers who run on Google Cloud more broadly, the deal confirms that GB300 systems are now generally available to large customers and will roll further downstream over the coming quarters.
For everyone else, this is another data point that frontier-lab compute is consolidating into a very small number of very large contracts. Expect price pressure on smaller Nvidia-backed clouds as the biggest GPUs funnel into OpenAI, Anthropic, xAI and Thinking Machines deals first.
What's Next
Thinking Machines has not disclosed a launch timeline for its next product beyond Tinker, but the lab has signalled that reinforcement-learning workloads — the core of its research direction — are the primary target for the new Google capacity. Investors are watching for a potential $50B-valuation up-round reported by TechFundingNews, and for whether Murati's team can stabilise after the April talent departures to Meta. TechCrunch said Google plans to roll GB300 availability to more customers in the coming quarters.
Sources
- TechCrunch — Exclusive: Google deepens Thinking Machines Lab ties — primary source, broke the story April 22, 2026
- TechCrunch — Thinking Machines Lab inks massive compute deal with Nvidia — background on the March 2026 Nvidia commitment
- TipRanks — Google stock jumps after Thinking Machines deal — market reaction
- The Next Web — Meta hires five Thinking Machines Lab founders — talent-stability context
- TechFundingNews — Thinking Machines Lab nears $50B valuation — valuation context
- Wikipedia — Thinking Machines Lab — company background