Meta's Strategic Pivot: Heavy AI Spend Meets Legal Heat
Date: February 2, 2026
Meta Platforms is moving hard into AI and trimming its metaverse business. The spending plan is massive, the layoffs are real, and a high-stakes trial starts tomorrow. For legal teams, this is a signal to get ahead of product risk, disclosure, and safety-by-design questions.
The numbers driving this shift
- Capex: $115-$135B in 2026 vs. roughly $72B in 2025.
- Q4 2025 results: $59.89B revenue and $8.88 EPS, both above expectations.
- Stock: Closed at $718.10 on Friday, just below its 52-week high.
- Analyst targets: Pivotal Research $910 (Buy), Truist $900, Bank of America $885.
Translation: leadership is willing to spend heavily to lead in AI. The Street views this as a growth investment rather than waste, so long as ad revenue and AI product gains keep pace.
Why in-house counsel should care
AI at this scale brings scrutiny. Expect questions on data sourcing, recommender design, third-party GPU contracts, and disclosures around model behavior and safety. Product teams will seek faster legal guidance on high-risk features that touch minors, privacy-sensitive data, and content ranking.
Capital allocation at this level also tightens the link between product risk and securities risk. If AI features are material to growth, risk factors, MD&A, and internal controls need to keep up.
Reality Labs downsizing: legal signals
Meta laid off more than 1,000 Reality Labs employees (about 10%) in January and closed multiple VR game studios. Headcount reductions and studio closures trigger employment compliance questions (e.g., notice obligations, separation terms), plus vendor and licensing clean-up.
Resources are shifting to AI-integrated wearables and smart glasses. That means new product safety files, privacy-by-default controls, voice/video recording policies, and clearer consent flows in physical-world contexts.
New Mexico lawsuit starts tomorrow: key issues
Jury selection is set for February 2, 2026, in a case alleging Meta's platforms facilitate child exploitation through dangerous algorithmic recommendations. The fact it's going to trial raises the stakes: a plaintiff win could invite similar cases from other states or private plaintiffs.
Watch the arguments around intermediary immunity and carve-outs. Section 230 defenses have limits, including exceptions created by FOSTA-SESTA for sex trafficking claims, and plaintiffs continue to test theories that target product design and recommendation engines.
- Section 230 text: Cornell LII
- COPPA rule overview: FTC
Potential outcomes to consider: injunctive limits on certain recommendation patterns for minors, mandated safety controls, reporting and monitoring requirements, and monetary penalties. Discovery could force detailed disclosures about model inputs, ranking signals, and escalation protocols, which may set informal benchmarks for future suits.
Governance and disclosure implications
With capex jumping to the $115-$135B range, boards will want clear oversight of AI safety testing, incident response, and child-safety controls. If AI features materially affect engagement or ad yield, ensure risk factors and forward-looking statements reflect new operational hazards and legal exposure.
On the privacy side, re-validate data processing bases, age-gating, geo-specific restrictions, and data minimization for AI-driven features in wearables. Update vendor and cloud agreements to cover AI model usage, evaluation data, and safety audits.
What to watch next
- Early rulings in the New Mexico trial on admissibility and jury instructions tied to recommendation systems.
- Any emergency relief requests seeking feature changes during trial.
- Signals from regulators on youth protections, dark patterns, and transparency duties for algorithmic feeds.
- Meta's next disclosures on AI infra, model roadmaps, and safety investments relative to the spend.
Action checklist for legal teams
- Run a gap review on youth-safety features: defaults, reporting tools, content filters, and escalation SLAs.
- Refresh Section 230 and FOSTA-SESTA analyses for recommendation and ranking features that touch minor safety.
- Stand up an AI product review lane: pre-launch testing, documentation, and red-teaming signoffs tied to high-risk use cases.
- Tighten disclosure controls for AI milestones, model failures, and safety incidents with potential materiality.
- Re-check WARN/state notice triggers and severance documentation for the Reality Labs downsizing and studio closures.
Bottom line
Meta is betting big on AI while trimming legacy bets and heading into court. For counsel, this is about tightening governance, anticipating product design claims, and keeping disclosures aligned with the size of the bet.
If your legal team needs to build practical AI fluency for oversight and policy, see curated training by job function: Complete AI Training.