Meta Extends Broadcom Custom-Chip Deal Through 2029 With 1 GW of 2nm MTIA Silicon (April 2026)
Meta and Broadcom on April 15, 2026 extended their custom AI-chip partnership through 2029, anchoring it with 1 gigawatt of 2nm MTIA capacity and moving Broadcom CEO Hock Tan off Meta's board into an advisory role focused on silicon.
Meta and Broadcom on April 15, 2026 extended their custom AI-silicon partnership through 2029, anchoring it with a commitment to deploy more than 1 gigawatt of next-generation MTIA capacity, the first chips in the industry built on a 2-nanometer process. As part of the deal, Broadcom CEO Hock Tan will step off Meta's board at the next annual meeting and move into an advisory role focused solely on Meta's custom silicon roadmap.
What Happened
Broadcom announced the extension in a press release on April 15, describing it as covering "multiple generations" of Meta Training and Inference Accelerator (MTIA) chips designed jointly by the two companies, along with related packaging, co-processors and networking fabric. The initial 1 GW tranche, enough to power roughly 750,000 U.S. homes, is framed as "the first phase of a sustained, multi-gigawatt rollout" that Meta expects to reach multiple gigawatts by 2027. Meta CEO Mark Zuckerberg said in an earnings-related post that the company is investing "across chip design, packaging, and networking to build out the massive computing foundation we need to deliver personal superintelligence to billions of people."
The announcement lands weeks after Meta's March 11, 2026 disclosure that it is shipping four MTIA generations in two years — MTIA 300, 400, 450 and 500 — with MTIA 300 already running ranking and recommendation inference across Facebook and Instagram. The new Broadcom agreement commits both companies to the silicon, substrate and rack-scale networking behind the next three generations, starting with MTIA 400 in a 72-chip scale-up rack and MTIA 500 at 2nm.
Key Details
- 1 GW initial capacity, multi-GW by 2027 — Broadcom's release calls out the 1 gigawatt figure as an opening installment, with deployment expected to scale to "multiple gigawatts" before 2028.
- First 2nm AI chips in the industry — Broadcom confirmed that MTIA 500 silicon will use a 2-nanometer process, putting it ahead of both Nvidia's current Blackwell generation and Google's TPU v7 on process node.
- Four chip generations in two years — MTIA 300 is live; MTIA 400, 450 and 500 are scheduled across 2026 and 2027, with a stated cadence of one new generation "every six months or less."
- Hock Tan exits Meta's board — Broadcom's CEO will leave Meta's board at the 2026 annual meeting and take an advisory role at Meta covering its chip-design, packaging and networking roadmap.
- Rack-scale design — Meta's published architecture puts 72 MTIA 400 chips in a single scale-up domain, mirroring the NVL72 design pattern Nvidia ships for Blackwell but on Meta's in-house silicon.
- Runs on PyTorch-native stack — MTIA's compiler and runtime are built around PyTorch, keeping Meta's model teams on the same toolchain they use for Nvidia training.
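The "roughly 750,000 U.S. homes" comparison for the 1 GW tranche can be sanity-checked with a back-of-envelope calculation. The per-home consumption figure below (~10,800 kWh/year, an EIA-style average for U.S. households) is an assumption, not a number from the announcement:

```python
# Back-of-envelope: how many average U.S. homes does 1 GW of
# continuous draw correspond to?
# Assumption (not from the announcement): ~10,800 kWh/year average
# U.S. residential consumption, a commonly cited EIA-style figure.

GW_IN_KW = 1_000_000          # 1 GW expressed in kW
KWH_PER_HOME_PER_YEAR = 10_800
HOURS_PER_YEAR = 8_760

avg_kw_per_home = KWH_PER_HOME_PER_YEAR / HOURS_PER_YEAR  # ~1.23 kW
homes = GW_IN_KW / avg_kw_per_home

print(f"{avg_kw_per_home:.2f} kW average draw per home")
print(f"~{homes:,.0f} homes served by 1 GW")  # ~811,111 homes
```

The result lands near 800,000; the press release's ~750,000 implies a slightly higher per-home draw (about 1.33 kW), consistent with rounding or with accounting for transmission losses.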
What Developers and Industry Analysts Are Saying
Reaction on Hacker News and r/hardware treats the deal as the clearest signal yet that hyperscalers will diversify away from Nvidia for inference while keeping Nvidia for frontier training. Commenters flagged the 2nm node (fabricated by TSMC) as the real story: it means Meta and Broadcom are willing to pay the highest wafer prices in the industry to claw back power efficiency on workloads that now run billions of times per day. On X (formerly Twitter), infrastructure analyst Dylan Patel of SemiAnalysis called the extension "the largest single custom-silicon commitment ever disclosed by a public company," while Broadcom's stock rose roughly 3.2% intraday on the news, according to CNBC.
The reaction was not universally positive. A recurring concern on Hacker News is that Meta's in-house silicon track record is short — MTIA 1 and 2 were limited-deployment parts — and that a 2nm scale-up to 1 GW in 18–24 months is historically aggressive for a non-pure-play chip company. Critics also noted that Hock Tan's simultaneous exit from Meta's board may signal that the audit committee flagged governance concerns now that the financial scale of the relationship has reached tens of billions of dollars.
What This Means for Developers
For application developers, nothing changes immediately: MTIA is inference silicon for Meta's own products (Feed, Reels, WhatsApp business AI, Meta AI), not a public cloud offering. The medium-term implication is that Meta's AI inference costs should fall, which likely translates into pricing pressure on OpenAI and Anthropic from the Meta AI API, and cheaper generative features inside WhatsApp and Instagram. For ML infrastructure engineers, the announcement reinforces a broader theme: PyTorch is cementing its role as the lingua franca across custom silicon (MTIA), TPUs (Google/Anthropic) and GPUs (Nvidia/AMD), so investing in PyTorch-level portability increasingly looks like the right bet.
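The portability bet mentioned above usually means writing model code against `torch.nn` and selecting the backend via a device string, so the same code can target whichever accelerator is present. A minimal sketch of that pattern (illustrative only; "mtia" is Meta-internal hardware, so this falls back to the commonly available "cuda"/"cpu" backends):

```python
import torch

def pick_device() -> torch.device:
    # In Meta's stack the string would name the custom backend;
    # externally, CUDA or CPU is what's actually available.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

# Backend-agnostic model definition: nothing here mentions a device.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)

device = pick_device()
model = model.to(device)          # move parameters to the chosen backend
x = torch.randn(4, 16, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([4, 1])
```

Because the device is a single point of configuration, the same training or serving script ports across GPU, TPU (via backends like XLA) and custom silicon without touching model code, which is precisely the property the article argues is worth investing in.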
For investors and operators watching the AI capex cycle, the signal is even louder: Meta has now committed to 6 gigawatts of AMD GPUs, millions of Nvidia chips, Arm-designed CPUs, and multi-gigawatts of custom MTIA — i.e., it is hedging across every serious silicon vendor at once rather than picking a winner.
What's Next
The first 1 GW tranche begins deployment in 2026, with scale-up through 2027. MTIA 400 is slated for 2H 2026 production based on earlier TrendForce reporting; MTIA 500 at 2nm is expected in 2027. Broadcom's next earnings call, scheduled for June 2026, is likely to be the first time the deal is broken out as a reportable customer line. Hock Tan's board exit will be formalized at Meta's 2026 annual meeting.
Sources
- Broadcom press release — primary announcement with deal structure and scope
- CNBC — 1 GW commitment, Hock Tan board transition, stock reaction
- The Next Web — 2029 timeline and 2nm process confirmation
- Meta AI Research blog — "Four MTIA Chips in Two Years" — architectural context
- About Meta newsroom — March 2026 custom silicon overview
- Tom's Hardware — hyperscaler inference-chip context