Meta and Broadcom Expand Multi-Gigawatt MTIA Custom Silicon Partnership Through 2029
Summary
Meta and Broadcom have announced an expanded multi-year, multi-generation partnership to co-develop the Meta Training and Inference Accelerator (MTIA), Meta's custom chips purpose-built for inference and recommendation workloads. The deal carries an initial deployment commitment exceeding 1 gigawatt, with a sustained multi-gigawatt rollout planned through 2029.
The agreement positions MTIA as the hardware backbone for Meta's AI ambitions across WhatsApp, Instagram, and Threads, and signals the company's accelerating push to reduce its dependence on third-party accelerators such as Nvidia's GPUs. Built on Broadcom's XPU platform, a framework for designing custom AI accelerators, the partnership covers chip design, advanced packaging, and high-bandwidth Ethernet networking across Meta's expanding data center clusters. Meta has said it plans to deploy four new MTIA chip generations within two years to support ranking, recommendation, and generative AI workloads.
The MTIA chip is notable for being the industry’s first 2nm AI compute accelerator, according to Broadcom. By co-designing across multiple silicon generations rather than procuring off-the-shelf hardware, Meta trades short-term procurement simplicity for long-term performance optimization and cost control at scale — a bet that makes sense given the volume of inference it runs daily across billions of users.
In a governance development tied to the deal’s scale, Broadcom CEO Hock Tan will step down from Meta’s board of directors and transition into an advisory role focused on Meta’s custom silicon roadmap.
Sources:
https://about.fb.com/news/2026/04/meta-partners-with-broadcom-to-co-develop-custom-ai-silicon/
https://www.broadcom.com/company/news/product-releases/64236