Mamdani to Pull Plug on Adams-era NYC AI Bot That Told Businesses to Break the Law

NYC will scrap its error-prone AI business chatbot after it gave illegal guidance on wages, tips, and cash rules. Expect tighter oversight and narrower use going forward.

Categorized in: AI News, Legal
Published on: Jan 31, 2026

NYC to scrap error-prone AI business chatbot, signaling tighter controls on automated legal guidance

New York City Mayor Zohran Mamdani plans to shut down the prior administration's AI chatbot after it repeatedly gave businesses advice that would break city and state law. The decision comes amid a $12 billion budget gap; the bot cost roughly half a million dollars to run and reportedly nearly $600,000 to build.

Launched in 2023 under former Mayor Eric Adams as part of the MyCity digital overhaul, the bot ran on Microsoft's cloud platform and was marketed as a quick way for business owners to check rules. It instead produced confident but wrong guidance on high-stakes issues, including tip sharing, source-of-income discrimination, minimum wage, and cash acceptance requirements.

The city later added disclaimers and narrowed the scope of questions, but the new administration says the tool was "functionally unusable." A spokesperson confirmed there is no takedown date yet.

What went wrong

Public testing in 2024 showed the bot telling landlords they could refuse tenants with Section 8 vouchers, approving illegal tip skimming, and incorrectly stating businesses could refuse cash despite a 2020 law. It also got basics wrong, such as the minimum wage.

Officials tried to patch it with warnings like "don't use this as legal or professional advice," then throttled the types of questions allowed. The core problem remained: a general-purpose model improvising legal guidance without authoritative sourcing or reliable guardrails.

Why this matters for legal teams

  • Reliance risk for businesses: If a company follows a city-branded chatbot into noncompliance, it still faces enforcement. Disclaimers help the city, not the user caught violating wage, labor, or civil rights laws.
  • Government exposure: While sovereign protections are real, negligent misrepresentation and due-process arguments may surface where official channels give contradictory direction. At minimum, this creates litigation friction and political scrutiny.
  • Civil rights liability: Advice permitting source-of-income discrimination invites complaints and damages under the NYC Human Rights Law.
  • Procurement and vendor risk: Cloud and model providers should be bound to accuracy, uptime, monitoring, prompt security, content filters, and indemnities. Many contracts don't go far enough.
  • Records and discovery: Prompts, outputs, training data sources, and moderation decisions are likely disclosable. Maintain logs, versioning, and decision trails.

Practical steps if you publish AI answers to legal or regulatory questions

  • Narrow the scope. Restrict to topics with authoritative sources and stable rules. Block or route edge cases to human channels.
  • Ground answers. Use retrieval from an official, versioned corpus (statutes, rules, FAQs) and show citations with links to current authority.
  • Design for safety, not just disclaimers. Prominent notices, plus UX that pushes users to the controlling rule text when risk is high.
  • Escalate high-stakes queries. Human review for wage, housing, licensing, discrimination, and enforcement topics.
  • Test like you mean it. Pre-launch red teaming with legal, compliance, and domain SMEs; continuous monitoring with drift alerts.
  • Contract like a regulator. Accuracy commitments, audit rights, data handling standards, model/change logs, incident reporting, and indemnities. Require insurance that actually covers AI output harms.
  • Accessibility and language access. Ensure ADA compliance and multilingual support consistent with city and state requirements.
  • Privacy by default. Minimize PII collection, set retention limits, and document data flows for audits and incident response.
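The scope-gating, grounding, escalation, and logging steps above can be sketched as a small Python example. Everything here is illustrative: the corpus entries, topic names, and URL are hypothetical placeholders, not any real city system or API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical versioned corpus of official rule text (illustrative only).
CORPUS = {
    "cash_acceptance": {
        "version": "2020-11",
        "text": "Most NYC food and retail stores must accept cash.",
        "url": "https://example.gov/cash-rule",  # placeholder citation link
    },
}

# High-stakes topics that always route to human review, never an automated answer.
HIGH_STAKES = {"wages", "housing", "licensing", "discrimination", "enforcement"}

@dataclass
class Answer:
    text: str
    citation: Optional[str] = None
    escalated: bool = False

# In-memory audit trail; a real deployment would persist this with versioning.
AUDIT_LOG: list = []

def answer_query(topic: str, question: str) -> Answer:
    """Scope-gate, ground, escalate, and log a single query."""
    if topic in HIGH_STAKES:
        ans = Answer("This topic requires review; routing to a human specialist.",
                     escalated=True)
    elif topic in CORPUS:
        doc = CORPUS[topic]
        # Grounded answer: quote the versioned rule text and cite its source.
        ans = Answer(f"Per official guidance (v{doc['version']}): {doc['text']}",
                     citation=doc["url"])
    else:
        # Out of scope: refuse rather than improvise legal guidance.
        ans = Answer("Out of scope; please consult the relevant agency directly.")
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "question": question,
        "answer": ans.text,
        "escalated": ans.escalated,
    })
    return ans
```

The key design choice is that the model never free-associates: a query either matches a versioned source (and is answered with a citation), hits a high-stakes topic (and is escalated), or is refused outright, with every decision logged for later discovery or audit.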

What to watch in NYC

Mamdani's move won't close the deficit but signals a shift toward disciplined AI deployment in government. Expect tighter procurement standards, more limited use cases, and stronger oversight of any public-facing guidance tools under MyCity.

For businesses, don't treat city-branded chatbots as safe harbors. Confirm against primary sources or official agency pages before acting.

