Meta builds an Applied AI engine to move faster on superintelligence
Mark Zuckerberg is creating a new applied AI engineering organization inside Meta to speed the path toward superintelligence. The group will be led by Maher Saba, reporting directly to CTO Andrew Bosworth, and structured to keep decision cycles short and execution tight.
The mandate is clear: stand up the "data engine" that makes Meta's models learn faster. Expect focus on data processing, tooling, and rigorous model evaluations that plug into Meta's broader research push.
What's changing
- Meta is splitting its AI effort into specialized units: Alexandr Wang's research lab (Meta Superintelligence Labs), Saba's applied engineering org, and a broader technology strategy under Bosworth.
- Saba's org will run with an unusually flat structure, reportedly up to 50 ICs per manager, to reduce friction and speed decisions.
- Workstreams zero in on data pipelines, tooling, and evals that shorten the loop from data to model improvements.
- Multiple teams will share responsibility for next-gen models, including projects code-named Avocado and Mango.
- Months earlier, Meta cut roughly 600 roles from Superintelligence Labs; the memo at the time read: "By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact."
- Only two researchers left Wang's ~100-person team after their equity vested in November, a sign of stability amid the shift.
Why this move matters for executives
This is the operating model many enterprises will copy: decouple research from applied engineering, fund the data engine as core infrastructure, and build redundancy across parallel teams to avoid single points of failure. The bet is speed: shorter cycles, clearer ownership, fewer handoffs.
The risk is managerial load and governance drift. Flat orgs work only if tooling, automation, and decision rights are explicit.
Executive takeaways you can apply
- Treat the data engine as a product. Own ingestion, labeling, privacy, evaluation, and feedback loops end-to-end. Measure throughput, freshness, quality, and cost per usable token/example (a scorecard sketch follows this list).
- Separate research, applied, and platform. Define clear contracts: APIs, artifacts, SLAs, and release gates. Research explores; applied ships; platform abstracts common infra.
- Instrument decision latency. Set SLAs for design, launch, and rollback decisions. Replace committees with single-threaded owners and written RFCs.
- Design for redundancy. Run parallel bets on critical paths (data curation, eval frameworks, inference optimizations). Budget overlap intentionally to avoid stalls.
- Make evals your truth. Standardize offline/online tests, safety reviews, and regression checks before promotion. Publish weekly scorecards org-wide (a gating sketch also follows the list).
- Recenter career paths on IC impact. Reward throughput and model wins, not team size. If spans exceed 30-50 ICs, fund ops support and automation before adding layers.
- Track the right KPIs. Time-to-train, time-to-deploy, percentage of data that's usable, eval coverage, model win rate vs. baseline, cost per token/sample, and incident MTTR (mean time to recovery).
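To make the measurement concrete, here is a minimal sketch of a weekly data-engine scorecard. The field names, thresholds, and numbers are illustrative assumptions, not a description of Meta's internal tooling:

```python
from dataclasses import dataclass

@dataclass
class WeeklyScorecard:
    """Hypothetical weekly data-engine scorecard; all field names are illustrative."""
    examples_ingested: int     # raw examples pulled into the pipeline this week
    examples_usable: int       # examples that survived quality and privacy filters
    pipeline_cost_usd: float   # ingestion + labeling + storage spend for the week
    evals_defined: int         # eval suites the org has specified
    evals_automated: int       # eval suites wired into automated gating
    model_wins: int            # candidates that beat the current baseline
    model_candidates: int      # candidates evaluated this week

    @property
    def usable_rate(self) -> float:
        # "Percentage of data that's usable" from the KPI list above.
        return self.examples_usable / self.examples_ingested if self.examples_ingested else 0.0

    @property
    def cost_per_usable_example(self) -> float:
        # Cost per example that actually reaches training.
        return self.pipeline_cost_usd / self.examples_usable if self.examples_usable else float("inf")

    @property
    def eval_coverage(self) -> float:
        return self.evals_automated / self.evals_defined if self.evals_defined else 0.0

    @property
    def win_rate(self) -> float:
        # "Model win rate vs. baseline" from the KPI list above.
        return self.model_wins / self.model_candidates if self.model_candidates else 0.0


# Numbers below are made up; the point is what goes on the org-wide weekly scorecard.
week = WeeklyScorecard(
    examples_ingested=1_200_000, examples_usable=870_000, pipeline_cost_usd=45_000.0,
    evals_defined=40, evals_automated=31, model_wins=2, model_candidates=5,
)
print(f"usable data rate:        {week.usable_rate:.1%}")
print(f"cost per usable example: ${week.cost_per_usable_example:.4f}")
print(f"eval coverage:           {week.eval_coverage:.1%}")
print(f"model win rate:          {week.win_rate:.1%}")
```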
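In the same spirit, a sketch of what "make evals your truth" can look like as a release gate. The check names, thresholds, and sign-off flag are assumptions, not anyone's published criteria:

```python
def clear_to_promote(eval_coverage: float, win_rate: float,
                     safety_signed_off: bool, open_regressions: int,
                     min_coverage: float = 0.80, min_win_rate: float = 0.50) -> bool:
    """Illustrative promotion gate: every check must pass before a model ships."""
    checks = {
        "eval coverage meets the bar":  eval_coverage >= min_coverage,
        "candidate beats the baseline": win_rate >= min_win_rate,
        "safety review signed off":     safety_signed_off,
        "no unresolved regressions":    open_regressions == 0,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())


# Example: coverage and win rate clear the bar, but one open regression blocks the ship.
if not clear_to_promote(eval_coverage=0.85, win_rate=0.60,
                        safety_signed_off=True, open_regressions=1):
    print("Blocked: fix the failing checks, then rerun the gate.")
```

The design point is that the gate is binary and automated: a model promotes only when every check passes, so "evals as truth" is enforced in code rather than negotiated in meetings.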
What to watch at Meta
- Shipping cadence and external benchmarks for Avocado and Mango.
- Cycle time from research prototypes to applied releases.
- Hiring mixes shifting toward data engineering, eval tooling, and infra roles.
- Signals that the 50:1 span is sustainable: fewer layers, faster launches, stable quality.
- Attrition rates post-reorg, especially among senior ICs and tech leads.
Risks to manage in a similar reorg
- Manager overload and delayed coaching in ultra-flat teams.
- Duplicated tooling across parallel groups without a shared platform layer.
- Knowledge silos if artifacts and decisions aren't documented and searchable.
- Governance gaps on privacy, safety, and compliance as launch velocity increases.
- Short-term execution outpacing long-term research depth after headcount cuts.
90-day action checklist for your org
- Map your triad: Research, Applied, Platform. Publish interfaces, owners, and SLAs.
- Appoint a single owner for the data engine. Make the build-vs.-buy call, fund it, and set a quarterly throughput target.
- Stand up a unified evaluation platform with gating criteria and a weekly metrics review.
- Create at least two parallel paths for one mission-critical capability to remove a single point of failure.
- Set a decision latency target (e.g., 48-72 hours for green/yellow calls) and enforce it (a tracking sketch follows this checklist).
- Remove one org layer where possible; expand spans only with ops automation and strong staff ICs.
- Lock a compute and data governance plan aligned to privacy and safety requirements.
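To enforce the latency target, a minimal sketch of a decision log with SLA checking. The 72-hour bound, log format, owners, and dates are illustrative assumptions:

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=72)  # upper end of the 48-72 hour target above

# Hypothetical decision log: (decision, single-threaded owner, opened, closed or None).
decisions = [
    ("approve eval gating thresholds", "owner-a",
     datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 7, 15, 0)),
    ("green-light the data refresh", "owner-b",
     datetime(2025, 1, 6, 10, 0), None),  # still open
]

now = datetime(2025, 1, 10, 9, 0)
for name, owner, opened, closed in decisions:
    elapsed = (closed or now) - opened  # open decisions accrue latency until closed
    state = "closed" if closed else "OPEN"
    flag = "  << SLA BREACH" if elapsed > SLA else ""
    print(f"{name} [{owner}] {state}: {elapsed.total_seconds() / 3600:.0f}h{flag}")
```

Run weekly against whatever system of record holds your decisions; the output is the list of owners to chase, which is what makes the SLA real.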
Zuckerberg's play is simple: elevate ICs, flatten where possible, and make the data engine the multiplier. If one team slows, another advances. That's how you keep momentum when models, data, and infrastructure all move at once.