NYC explores AI cameras to spot subway fare evasion
New York's transit agency is piloting subway gates that use cameras and artificial intelligence to flag suspected fare evasion. The move has kicked off a fresh privacy debate across government, retail, and tech circles.
Here's what's on the table, why it matters, and how teams should respond.
What the MTA is testing
The Metropolitan Transportation Authority is testing gates that trigger a camera when a rider skips payment. According to the manufacturer, Cubic, the system records a five-second clip and generates a physical description of the person. That description is then sent to the MTA.
In December, the agency also requested information from vendors for systems using computer vision and AI to detect "unusual or unsafe behaviors." The goal: reduce fare evasion and improve station safety without grinding operations to a halt.
Retailers are moving in the same direction
Wegmans recently posted signs in some New York stores stating that it uses facial recognition to identify people previously flagged for misconduct. The company says it does not collect retinal scans or voice prints and has not disclosed how long it keeps the data.
New York City law requires clear notice when biometric tech is in use. Details are laid out in the city's Biometric Identifier Information Law, which spells out disclosure requirements for commercial establishments. Read the law.
Other businesses using facial recognition in the city include T-Mobile, Madison Square Garden, Walmart, Home Depot, Fairway, and Macy's, according to Michelle Dahl of the Surveillance Technology Oversight Project (S.T.O.P.). "New Yorkers are generally sleepwalking into this surveillance state, and it's time for us to wake up and take action on it," she said.
Accuracy and bias concerns
Facial recognition can be less accurate for minorities, especially Black people, raising the risk of misidentification. That's not theoretical; it's been documented in independent evaluations. See NIST's demographic effects report.
The New York Police Department has used biometric tools, including facial recognition, for years. Records released by S.T.O.P. and Amnesty International show that by April 2020 the NYPD had spent over $5 million on facial recognition, with at least $100,000 in additional spending each year since.
Why this matters to government, IT, and development teams
- Public trust hinges on accuracy, clear limits, and oversight. False positives carry real costs for riders and communities.
- Compliance is more than a sign on the wall. Agencies and retailers need policies that match what the system actually does: collection, retention, access, and deletion.
- Procurement decisions today will lock in technical and ethical tradeoffs for years. Bake in auditability, transparency, and redress up front.
Practical guardrails for agencies and operators
- Define the problem precisely: fare evasion detection, not identity tracking. Set measurable targets (e.g., reduce false positives by X%, human review within Y minutes).
- Constrain data: short clip buffers, strict retention windows, and automatic deletion. Use role-based access, encryption, and detailed audit logs (a minimal retention sketch follows this list).
- Minimize identifiers: prefer event metadata or non-unique descriptors over persistent biometric templates unless absolutely necessary and legally justified.
- Demand bias testing: require disparate impact analysis across demographic groups and publish summary findings.
- Human-in-the-loop: no automated enforcement. Ensure every alert is reviewed by trained staff with clear escalation and appeals.
- Transparency: signage that matches capabilities, plus public documentation of what's collected, who can access it, and how long it's kept.
- Independent oversight: engage external auditors and community advisors; schedule periodic reviews and sunset clauses for pilots.
- Legal review: align with local notice requirements and broader civil rights protections; update policies as capabilities change.
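To make the "constrain data" and "minimize identifiers" points concrete, here is a minimal Python sketch of a retention store that keeps only event metadata plus a non-unique text description and automatically deletes clips once a configurable window expires. The names (FareEvasionEvent, RetentionStore) and the 72-hour default are illustrative assumptions, not the MTA's or any vendor's actual design.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import List, Optional
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("retention")

@dataclass
class FareEvasionEvent:
    """Hypothetical event record: metadata and a non-unique description, no biometric template."""
    event_id: str
    station: str
    captured_at: datetime
    description: str                   # e.g. "adult, dark jacket" -- not a persistent identifier
    clip_path: Optional[Path] = None   # short clip on disk, deleted when retention expires

class RetentionStore:
    """Holds events and enforces a strict retention window on stored clips."""

    def __init__(self, retention_hours: int = 72):
        self.retention = timedelta(hours=retention_hours)
        self.events: List[FareEvasionEvent] = []

    def add(self, event: FareEvasionEvent) -> None:
        self.events.append(event)
        log.info("stored event %s at %s", event.event_id, event.station)

    def purge_expired(self, now: Optional[datetime] = None) -> int:
        """Delete clips and events older than the retention window; returns how many were purged."""
        now = now or datetime.now(timezone.utc)
        kept: List[FareEvasionEvent] = []
        purged = 0
        for ev in self.events:
            if now - ev.captured_at > self.retention:
                if ev.clip_path and ev.clip_path.exists():
                    ev.clip_path.unlink()          # automatic deletion of the video clip
                log.info("purged event %s after %s", ev.event_id, self.retention)
                purged += 1
            else:
                kept.append(ev)
        self.events = kept
        return purged
```

In practice, a scheduled job (cron or a task queue) would call purge_expired on a fixed cadence, and every deletion would land in the audit log so oversight bodies can verify the policy is actually enforced.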
Build guidelines for developers
- Collect less by default: trigger capture only on clear events, keep clips short, and avoid building identity databases when a description will do (see the capture sketch after this list).
- Evaluate across demographics: report false positive/negative rates by group and tune thresholds to reduce harm, even at some cost to detection (see the evaluation sketch after this list).
- Document everything: model cards, data sheets, and end-to-end data flows. Make it easy to audit what the system did and why.
- Design failure modes: if the model is uncertain, fall back to human review. Don't block or penalize riders based on low-confidence outputs (see the routing sketch after this list).
- Security by design: encrypt at rest and in transit, isolate model outputs from identity systems, and monitor for misuse.
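The "collect less by default" bullet can be expressed as an event-triggered rolling buffer: frames are held briefly in memory and persisted only when the gate reports an unpaid entry. The frame rate, the five-second window, and the class names below are assumptions for illustration; only the short-clip idea comes from the reporting above.

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Deque, List

FPS = 10                 # assumed frame rate for the illustration
CLIP_SECONDS = 5         # keep only a short window, mirroring the reported five-second clip

@dataclass
class Frame:
    timestamp: datetime
    data: bytes          # encoded image bytes from the camera

class EventTriggeredCapture:
    """Rolling buffer that only persists frames when a fare-evasion event fires."""

    def __init__(self) -> None:
        # Bounded buffer: old frames fall off automatically, so nothing
        # accumulates unless an event is actually triggered.
        self.buffer: Deque[Frame] = deque(maxlen=FPS * CLIP_SECONDS)

    def on_frame(self, frame: Frame) -> None:
        self.buffer.append(frame)   # no disk writes, no identity lookup

    def on_gate_event(self) -> List[Frame]:
        """Called when the gate detects an unpaid entry; returns the short clip to hand off."""
        clip = list(self.buffer)
        self.buffer.clear()         # do not keep a second copy in memory
        return clip

if __name__ == "__main__":
    cap = EventTriggeredCapture()
    for _ in range(200):            # simulate a stream of frames
        cap.on_frame(Frame(datetime.now(timezone.utc), b"\x00"))
    clip = cap.on_gate_event()
    print(f"clip contains {len(clip)} frames")  # at most FPS * CLIP_SECONDS
```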
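For the demographic evaluation bullet, a minimal sketch of the reporting step: group labeled outcomes by an audit-assigned demographic attribute, then compute false positive and false negative rates per group and the largest gap between groups. The Outcome fields and the disparity check are hypothetical; real audits would also report confidence intervals and sample sizes.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Outcome:
    group: str        # audit-assigned demographic group label (illustrative)
    predicted: bool   # system flagged the event as fare evasion
    actual: bool      # ground truth established by human review

def rates_by_group(outcomes: List[Outcome]) -> Dict[str, Dict[str, float]]:
    """False positive and false negative rates per demographic group."""
    buckets: Dict[str, List[Outcome]] = defaultdict(list)
    for o in outcomes:
        buckets[o.group].append(o)

    report: Dict[str, Dict[str, float]] = {}
    for group, rows in buckets.items():
        negatives = [o for o in rows if not o.actual]   # events that were not evasion
        positives = [o for o in rows if o.actual]       # events that were evasion
        fpr = sum(o.predicted for o in negatives) / len(negatives) if negatives else 0.0
        fnr = sum(not o.predicted for o in positives) / len(positives) if positives else 0.0
        report[group] = {"false_positive_rate": fpr, "false_negative_rate": fnr}
    return report

def max_fpr_gap(report: Dict[str, Dict[str, float]]) -> float:
    """Largest false-positive-rate gap between any two groups -- a simple disparity check."""
    fprs = [metrics["false_positive_rate"] for metrics in report.values()]
    return max(fprs) - min(fprs) if fprs else 0.0
```

A gap above a published tolerance should block deployment or force threshold retuning, which is the kind of summary finding the guardrails above say should be made public.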
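Finally, the failure-mode bullet reduces to a routing rule: uncertain outputs are never acted on automatically, and even high-confidence alerts go to trained staff before any enforcement. The thresholds and queue below are placeholders for whatever review workflow an operator actually runs.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Decision(Enum):
    DISCARD = "discard"               # confident it was a normal, paid entry
    HUMAN_REVIEW = "human_review"     # uncertain: never auto-enforce, route to staff
    FLAG_FOR_REVIEW = "flag"          # high confidence, still reviewed before any action

@dataclass
class GateAlert:
    event_id: str
    confidence: float   # model's confidence that fare evasion occurred (0.0 - 1.0)

def route(alert: GateAlert, low: float = 0.4, high: float = 0.9) -> Decision:
    """Route an alert; thresholds are illustrative and should come from bias-tested evaluation."""
    if alert.confidence < low:
        return Decision.DISCARD
    if alert.confidence < high:
        return Decision.HUMAN_REVIEW
    return Decision.FLAG_FOR_REVIEW   # even top-bucket alerts go to a person, not automated enforcement

review_queue: List[GateAlert] = []

def handle(alert: GateAlert) -> None:
    decision = route(alert)
    if decision is not Decision.DISCARD:
        review_queue.append(alert)    # all non-discarded alerts wait for trained staff
```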
What to watch next
- Details on retention periods, data sharing, and who reviews alerts.
- Independent accuracy and bias audits, and whether the results are made public.
- Whether pilots stay narrow or grow beyond fare evasion into broader "behavior" monitoring.
- Clear outcomes: did fare evasion drop without creating new harms?
If your team is standing up or evaluating AI-driven monitoring, make sure your people understand computer vision, evaluation, and compliance basics. For structured upskilling by role, see Complete AI Training: Courses by Job.