Meta Hit With Class Action Over AI Smart Glasses Privacy, Human Review of User Footage

Meta faces a class action alleging that its AI glasses exposed user videos to human review by contractors overseas. Meta says files stay on the device unless users share them, and that safeguards apply when they do.

Published on: Mar 09, 2026
Meta Sued Over Privacy Claims Tied to AI Smart Glasses Data Review

Two U.S. customers have filed a class action against Meta in federal court in San Francisco, alleging the company misled buyers about the privacy posture of its AI-enabled smart glasses. The complaint says Meta marketed the product as "designed for privacy" while allowing human contractors overseas to review user-captured footage.

Case Snapshot

  • Defendant: Meta Platforms, Inc.
  • Products: Ray-Ban- and Oakley-branded AI smart glasses
  • Plaintiffs: Individuals in California and New Jersey, seeking to represent a nationwide class
  • Theory: Misrepresentation and omission regarding privacy practices and human review of shared content
  • Relief sought: Compensatory and punitive damages under three California consumer protection laws

Alleged Data Handling and Human Review

The complaint asserts that when users activate the glasses' AI features, video and images are sent to Meta's servers. It further alleges this material is routed to a subcontractor in Kenya, where large teams label and categorize the footage to train Meta's AI systems.

Plaintiffs cite a report from a Swedish newspaper describing thousands of contractors reviewing visual data generated by Meta's AI technologies. They argue average consumers would not expect workers in another country to view recordings from everyday life.

Marketing vs. Practice

According to the filing, Meta's marketing gave buyers the impression the device prioritized personal privacy and user control. Plaintiffs say the company had a duty to disclose practices that would matter to a reasonable purchaser, including potential human review outside the U.S.

Product Features and Data Flows

The glasses include an AI assistant that responds to voice commands and can translate text, identify objects and landmarks, provide directions, send messages, and place calls. A camera supports hands-free photos and videos up to three minutes, with options to stream to Facebook and Instagram.

The lawsuit contends these features depend on sending visual data to Meta's cloud, where footage can be analyzed using image recognition, location signals, and other context before being stored and used to improve AI models.

Meta's Response

Meta said it is reviewing the claims and cannot comment in detail yet. A spokesperson stated the glasses let people interact with AI hands-free and that, unless users choose to share media with Meta or others, files remain on the device.

The company acknowledged that in some cases shared content may be reviewed by contractors to improve the service. Meta says safeguards are used to filter data and reduce exposure of personal details during that process.

Key Issues for Legal Teams

  • Advertising claims and omissions: Whether "designed for privacy" and related messaging were likely to mislead a reasonable consumer, and if disclosures about human review and cross-border processing were adequate and conspicuous.
  • Consent and user settings: What users agreed to when enabling AI features, sharing media, or opting in to product improvement; clarity and placement of disclosures in setup flows, apps, and policies.
  • Data pathways and minimization: Scope of data sent to the cloud; retention, de-identification, and filtering practices; linkage to user accounts or device identifiers.
  • Human-in-the-loop review: Nature of contractor tasks, training materials, access controls, redaction protocols, and vendor oversight (including location and legal status of subcontractors).
  • Class certification: Commonality and predominance given variations in disclosures across versions, channels, and updates; reliance and materiality for putative class members.
  • Arbitration/class waiver: Enforceability of any terms of service provisions; formation issues for device purchasers vs. app users.
  • Standing and damages: Concrete injury theories tied to alleged privacy harms and misrepresentation; price premium models and restitution frameworks.
  • Choice of law and extraterritoriality: Application of California consumer statutes to out-of-state purchases and conduct, and treatment of cross-border processing.
  • Punitive exposure: Evidence of knowledge, concealment, or recklessness, if any, in marketing and vendor management.

Early Discovery Priorities

  • For plaintiffs: Full marketing archive; UX flows and disclosures; data flow diagrams; vendor agreements and SOWs for labeling/review; audit logs; retention schedules; internal risk assessments; A/B tests on privacy messaging; complaint history.
  • For Meta: Named plaintiffs' purchase paths and exposure to disclosures; consent records; app settings use; actual sharing behavior; alternative causation; valuation of alleged price premium; variance across class members.

Compliance Notes for AI Wearables

  • Align claims with engineering reality; test "privacy-first" statements against actual data routing and vendor use.
  • Make human review and cross-border processing explicit at setup and before upload; use layered, plain-English disclosures.
  • Offer on-device processing where feasible; default to opt-in for model improvement; provide granular, reversible controls.
  • Implement contractor safeguards: strict access controls, redaction, minimal retention, auditing, and enforceable confidentiality.
  • Track and version disclosures; maintain evidence that users saw and accepted key terms tied to specific features.
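The opt-in defaults and disclosure-versioning points above can be sketched in code. The following is a minimal, hypothetical Python gate, with all names invented for illustration (this is not any actual Meta or vendor API): media stays on-device unless the user has opted in under the current disclosure version, and human review requires a separate, stricter opt-in.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent record; field names are illustrative only.
@dataclass
class ConsentRecord:
    disclosure_version: str            # version of the disclosure the user accepted
    accepted_at: datetime              # evidence of when acceptance occurred
    cloud_processing: bool = False     # opt-in: send media to cloud AI features
    model_improvement: bool = False    # opt-in: allow use (incl. human review) for training

# Bumping this version invalidates stale consents and forces re-consent.
CURRENT_DISCLOSURE_VERSION = "2026-03"

def may_upload(consent: "ConsentRecord | None") -> bool:
    """Media may leave the device only with explicit, current consent."""
    return (
        consent is not None
        and consent.cloud_processing
        and consent.disclosure_version == CURRENT_DISCLOSURE_VERSION
    )

def may_route_to_human_review(consent: "ConsentRecord | None") -> bool:
    """Human review requires the stricter model-improvement opt-in as well."""
    return may_upload(consent) and consent.model_improvement
```

Note the design choices the checklist calls for: every flag defaults to `False` (opt-in, not opt-out), a stale disclosure version fails the gate even if the user once consented, and the timestamp preserves evidence that acceptance was tied to a specific disclosure version.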

What to Watch

  • Pleadings on whether marketing phrases are actionable vs. puffery and if omissions are material at the point of sale.
  • Any motion practice on arbitration, class waiver, or standing; scope of discovery into vendor operations in Kenya.
  • How the court treats data used to improve AI models and whether that creates distinct injury or restitution theories.

Practical Resource

For deeper, practice-ready training on AI, privacy, and consumer-protection litigation workflows, see AI for Legal.

