Meta and Broadcom expand partnership to co-develop multiple generations of custom AI chips

Meta and Broadcom will co-develop multiple generations of MTIA chips over the next two years, starting with over 1 gigawatt of custom silicon. The deal covers chip design, packaging, and networking for Meta's AI inference workloads.

Categorized in: AI News, IT and Development
Published on: Apr 15, 2026

Meta and Broadcom to Co-Develop Next-Generation AI Chips

Meta announced an expanded partnership with Broadcom to design and manufacture multiple generations of MTIA (Meta Training and Inference Accelerator) chips over the next two years. The agreement initially commits Meta to deploying more than 1 gigawatt of custom silicon, with plans to scale to multiple gigawatts.

MTIA chips handle inference and recommendation workloads across Meta's apps and services. The partnership extends beyond chip design to include advanced packaging and networking infrastructure for Meta's AI compute clusters.

What the partnership covers

Broadcom will contribute its XPU platform, a technology for building custom AI accelerators. The company will also provide advanced Ethernet technologies to connect Meta's expanding AI infrastructure.

Meta's approach treats chip selection as a portfolio decision: matching specific accelerators to specific workloads to balance performance and cost. MTIA handles ranking, recommendations, and generative AI tasks.
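As a loose illustration of what such a portfolio decision can look like, the Python sketch below matches each workload to the cheapest accelerator that meets its latency target. All workload profiles, accelerator names, and figures are invented for demonstration; nothing here reflects Meta's actual fleet or MTIA's real specifications.

```python
# Illustrative sketch only: a toy "portfolio" selector that matches each
# workload to the lowest-cost accelerator among those meeting its latency
# target. Every name and number below is a hypothetical assumption.

WORKLOADS = {
    # workload: (required throughput in queries/sec, max p99 latency in ms)
    "ranking": (50_000, 20),
    "recommendations": (30_000, 50),
    "generative": (2_000, 500),
}

ACCELERATORS = {
    # accelerator: (queries/sec per chip, p99 latency in ms, $/chip-hour)
    "custom-inference-asic": (400, 15, 1.10),  # hypothetical MTIA-like part
    "general-purpose-gpu": (600, 40, 3.50),    # hypothetical GPU
}

def pick(workload):
    """Return the cheapest accelerator that meets the workload's latency target."""
    qps_needed, max_latency = WORKLOADS[workload]
    candidates = []
    for name, (qps_per_chip, latency, cost_per_hour) in ACCELERATORS.items():
        if latency <= max_latency:
            chips = -(-qps_needed // qps_per_chip)  # ceiling division
            candidates.append((chips * cost_per_hour, chips, name))
    cost, chips, name = min(candidates)  # cheapest feasible option
    return name, chips, cost

for w in WORKLOADS:
    name, chips, cost = pick(w)
    print(f"{w}: {chips} x {name} (~${cost:,.2f}/hour)")
```

The point of structuring selection this way is that a cheaper, workload-tuned part can win for high-volume tasks like ranking even when a general-purpose chip is faster on paper.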

Hock Tan, Broadcom's president and CEO, will step down from Meta's board to serve as an advisor on the company's custom silicon roadmap.

Why this matters for infrastructure teams

Custom silicon partnerships like this shape how large-scale AI systems operate. IT and development professionals building or deploying AI applications need to understand the hardware constraints and capabilities that accelerators like MTIA provide.

The multi-gigawatt commitment signals sustained investment in inference infrastructure: the systems that serve AI models to end users rather than training them. This is where most production AI workloads run.

For teams working on AI in IT and development, understanding custom accelerators and their role in infrastructure strategy is increasingly relevant as organizations move beyond generic cloud-based deployments.

Software engineers working on AI systems should understand how infrastructure decisions at this scale affect application design, latency, and cost.
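As a rough back-of-envelope illustration of how a power-denominated commitment like "1 gigawatt" connects to application-level cost, the sketch below converts fleet power into throughput and energy cost per query. Every figure is a made-up assumption for demonstration, not a disclosed Meta or Broadcom number.

```python
# Illustrative back-of-envelope only: translating power-scale infrastructure
# into per-request serving cost. All inputs are hypothetical assumptions.

POWER_WATTS = 1e9        # "1 gigawatt" of deployed silicon (from the article)
WATTS_PER_CHIP = 500     # assumed power draw per accelerator
QPS_PER_CHIP = 400       # assumed inference queries/sec per chip
DOLLARS_PER_KWH = 0.08   # assumed electricity price

chips = POWER_WATTS / WATTS_PER_CHIP
fleet_qps = chips * QPS_PER_CHIP
energy_cost_per_hour = (POWER_WATTS / 1_000) * DOLLARS_PER_KWH
cost_per_million_queries = energy_cost_per_hour / (fleet_qps * 3600) * 1e6

print(f"chips: {chips:,.0f}")
print(f"fleet throughput: {fleet_qps:,.0f} queries/sec")
print(f"energy cost: ${energy_cost_per_hour:,.0f}/hour")
print(f"energy cost per million queries: ${cost_per_million_queries:.2f}")
```

Even with invented numbers, the exercise shows why hardware efficiency at this scale feeds directly into per-request latency and cost budgets.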

