Google Photos' AI Editing Hits a Legal Wall in Texas and Illinois
Google is pushing conversational AI into Photos with features like "Ask Photos" and AI-assisted edits. Users can refine searches, add elements to images, and request edits in plain language. But if you're in Texas or Illinois, these tools are blocked. The reason isn't technical; it's legal.
Why Texas and Illinois Are Blocked
Both states tightly regulate biometric data. Illinois' Biometric Information Privacy Act (BIPA) requires informed, written consent before collecting or using a "biometric identifier," and it lets individuals sue for violations. Texas imposes a similar consent requirement through its own biometric statute.
Google Photos' Face Groups is likely the sticking point. Organizing images by faces can be treated as creating a facial template, which is biometric data, triggering consent, disclosure, retention, and deletion obligations. Given prior litigation, shutting off features in these states is the conservative move.
The Legal Trigger: Biometric Statutes
- Illinois (BIPA): Private right of action, statutory damages of $1,000 per negligent violation and up to $5,000 per intentional or reckless violation, strict consent and policy requirements. See the statute: 740 ILCS 14.
- Texas: The Capture or Use of Biometric Identifier Act (CUBI) requires consent to capture a biometric identifier and limits disclosure and retention. Statute text: Tex. Bus. & Com. Code § 503.001.
Reports note that recent AI expansions for Google Photos are rolling out widely but remain unavailable in these two states. The practical read: features tied to face data are legal risk magnets under these statutes.
Past Lawsuits Set the Tone
Google settled a BIPA class action over Google Photos for $100 million in 2022, without admitting wrongdoing. Meta paid $650 million in 2021 to resolve BIPA claims over Facebook's photo-tagging feature. After those payouts, product counsel tend to default to geo-fencing or disabling face-based features until risk is contained.
Risk Profile: Face Grouping and Conversational Edits
Conversational editing sounds harmless until prompts rely on identifying who is in a photo. If any pipeline creates, stores, or references a facial template, you've entered biometric territory. Even on-device models can create exposure if templates persist or if consent flows fall short.
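To make that boundary concrete, here is a minimal sketch, in Python, of routing edit requests away from face identification when no biometric consent is on file. Everything here is an assumption for illustration: the function names, the keyword heuristic standing in for a real intent classifier, and the boolean consent store. It is not Google's pipeline.

```python
# Illustrative only: route identity-dependent prompts to a face-free fallback
# unless the user has a recorded biometric consent. All names are hypothetical.

IDENTITY_CUES = ("who is", "photos of my", "name the person", "everyone named")

def requires_identification(prompt: str) -> bool:
    """Crude keyword stand-in for a real intent classifier."""
    p = prompt.lower()
    return any(cue in p for cue in IDENTITY_CUES)

def route_edit_request(user_id: str, prompt: str, consents: dict) -> str:
    """Return which pipeline should handle the request."""
    if requires_identification(prompt) and not consents.get(user_id, False):
        return "face_free_pipeline"   # metadata-only search/edit
    return "full_pipeline"

if __name__ == "__main__":
    consents = {"alice": True}
    print(route_edit_request("alice", "Who is in this photo?", consents))  # full_pipeline
    print(route_edit_request("bob", "Who is in this photo?", consents))    # face_free_pipeline
    print(route_edit_request("bob", "Brighten the sky", consents))         # full_pipeline
```

Note that a real consent record needs to be versioned and written per BIPA; a boolean is only a placeholder.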
Compliance Playbook for Product Counsel
- Consent first: Clear, written, informed consent for biometric capture and use. Separate flows for minors with verifiable parental consent.
- Notice and policy: Public retention and deletion policy aligned with BIPA § 15(a). Explain purposes, disclosures, and retention terms in plain language.
- Data minimization: Prefer on-device processing. Avoid storing facial templates or turn them into ephemeral, non-reconstructable signals.
- Feature gating: Make face-dependent features opt-in. Provide an easy opt-out that deletes templates and halts processing.
- Geo-controls: If consent and architecture aren't airtight, restrict features by state. Validate location logic against VPN and account-settings edge cases; a fail-closed sketch follows this list.
- Retention and deletion: Map data flows. Tie deletion to purpose completion or set short, documented timelines. Honor user-initiated deletion promptly.
- Vendor terms: Lock down processors and subprocessors. Ban secondary use, model training on biometrics, and data export.
- Audits and logs: Keep evidence of consent, versioned policies, DPIAs/PIAs, and security reviews. Log template creation, access, and deletion events.
- Litigation readiness: Preserve artifacts. Structure arbitration and class waivers carefully, but don't rely on them to solve BIPA exposure.
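As flagged in the geo-controls item, the safest location logic fails closed. A minimal sketch, assuming two hypothetical location signals (IP-derived state and account-declared state), each of which can be wrong or missing:

```python
# Fail-closed geo-gating sketch for face-dependent features. Real systems
# combine more signals (IP geolocation, SIM, account region); each can lie.

RESTRICTED_STATES = {"IL", "TX"}

def face_features_enabled(ip_state, account_state) -> bool:
    signals = {s for s in (ip_state, account_state) if s}
    if not signals:
        return False          # no trustworthy signal: treat as restricted
    if signals & RESTRICTED_STATES:
        return False          # any restricted-state signal wins
    if len(signals) > 1:
        return False          # conflicting signals (VPN, stale settings): fail closed
    return True

assert face_features_enabled("CA", "CA") is True
assert face_features_enabled("IL", "CA") is False  # restricted signal present
assert face_features_enabled("CA", "NY") is False  # mismatch suggests VPN or stale account
assert face_features_enabled(None, None) is False  # unknown location
```

The design choice is that ambiguity disables the feature; you can always relax the gate later, but you cannot un-collect a template.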
Product Design Patterns That Reduce Exposure
- Face-free defaults: Let "Ask Photos" run on metadata (time, place, objects) without face grouping. Offer a separate, explicit opt-in for face features.
- On-device, ephemeral templates: If face features are necessary, keep processing local and purge templates quickly. Prove it with technical controls and audits; see the sketch after this list.
- Granular permissions: Distinguish "identify this person" from "find photos with two people smiling." Seek consent only where biometric identification is needed.
- Revocation and cleanup: One switch to revoke consent and wipe templates across all services and backups.
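Reading the ephemeral-template and revocation patterns together, one minimal sketch: the template never leaves the function that computes it, only a salted one-way digest escapes for equality matching, and revocation wipes consent and derived signals in one call. `compute_embedding` and both stores are hypothetical placeholders, and HMAC digests support only exact-match lookups; real similarity search over embeddings is harder and can itself recreate biometric risk.

```python
import hashlib
import hmac

def compute_embedding(image_bytes: bytes) -> bytes:
    """Placeholder for an on-device face model; a hash is obviously not one."""
    return hashlib.sha256(image_bytes).digest()

def ephemeral_match_signal(image_bytes: bytes, per_user_salt: bytes) -> str:
    """Derive a non-reconstructable signal; the raw template stays local."""
    embedding = compute_embedding(image_bytes)
    signal = hmac.new(per_user_salt, embedding, hashlib.sha256).hexdigest()
    del embedding  # signals intent; Python does not guarantee memory zeroing
    return signal

def revoke_consent(user_id: str, consent_store: dict, signal_store: dict) -> None:
    """One switch: drop consent and every derived signal for the user."""
    consent_store.pop(user_id, None)
    signal_store.pop(user_id, None)
```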
Patchwork Availability Is Here to Stay
Multiple reports tie Google's limited rollout to these statutes and prior settlements. Colorado has also moved on AI-related content rules for political media, signaling broader scrutiny. Without federal preemption, expect more geo-fenced features and cautious defaults.
Action Items for Legal Teams
- Run a biometric data inventory: where facial vectors or templates are created, stored, or inferred.
- Ship consent + policy updates before enabling any face feature in Illinois or Texas.
- Stand up deletion proofs: logs and attestations for template removal on opt-out or account closure (see the sketch after this list).
- Review marketing claims: avoid implying identification if the feature isn't consented or enabled in restricted states.
- Prepare state-by-state go/no-go criteria and an executive briefing on exposure and costs.
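For the deletion-proofs item, one way to make logs tamper-evident is a hash chain: each attestation embeds the hash of the previous one, so any edit or removal breaks verification. The schema below is an assumption, not a standard; a production system would also anchor the chain head somewhere external.

```python
import hashlib
import json
import time

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_attestation(log: list, user_id: str, artifact: str, reason: str) -> dict:
    """Record a deletion event chained to the previous record."""
    body = {
        "user_id": user_id,
        "artifact": artifact,        # e.g., "face_template"
        "reason": reason,            # e.g., "user_opt_out", "account_closure"
        "deleted_at": time.time(),
        "prev_hash": log[-1]["hash"] if log else "0" * 64,
    }
    record = dict(body, hash=_digest(body))
    log.append(record)
    return record

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["hash"] != _digest(body) or rec["prev_hash"] != prev:
            return False
        prev = rec["hash"]
    return True
```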
What This Means for Counsel
AI editing can ship nationwide if it avoids face identification or nails consent, retention, and deletion. If not, expect carve-outs. The safer route is to treat face features as a separate, high-friction product surface with measurable risk controls.