New York Cracks Down on Government AI, Requiring Transparency and Bias Safeguards

Gov. Kathy Hochul signed S.B. 7599, requiring agencies to disclose their AI tools and how they limit bias. Start now: inventory systems and publish plain summaries.

Published on: Dec 21, 2025

New York Tightens AI Rules for Government: What Agencies Need to Do Now

New York Gov. Kathy Hochul has signed S.B. 7599, adding new limits and transparency requirements on how government agencies use artificial intelligence. Agencies from law enforcement to local school districts will need to publicly disclose which systems they use and what steps they take to reduce bias in decisions.

It's not yet clear whether lawmakers will negotiate post-signing amendments to the legislation, a step the governor has taken on other politically contentious bills. Either way, the intent is straightforward: use the technology thoughtfully, reduce risk, and maintain public trust.

What S.B. 7599 Does

  • Requires public disclosure of AI systems used by government entities.
  • Requires disclosure of measures taken to prevent bias in decision-making.

For exact language, review the bill text on the New York State Senate site: S.B. 7599.

Who's Affected

The law covers a wide range of public entities, including state and local agencies, law enforcement, and school districts. If your team uses algorithms or automated decision systems in any part of service delivery, procurement, hiring, enforcement, or student services, you should assume you're in scope and verify with counsel.

What Agencies Should Do Now

  • Build an AI system inventory: List every AI or automated decision tool in use, pilot, or procurement. Identify owners, purposes, data inputs, and affected populations (see the inventory sketch after this list).
  • Publish a public AI use page: Provide plain-language descriptions of each system and the steps you take to prevent bias. Keep it updated.
  • Document bias mitigation: Record data quality checks, disparate impact testing, human review steps, and monitoring plans. Specify escalation paths and appeal options for affected individuals (a disparate impact check is sketched after this list).
  • Update procurement and vendor contracts: Require vendors to disclose model purpose, data sources, known limitations, bias testing, and audit support. Include service-level expectations for corrective actions.
  • Assign clear ownership: Name a responsible official (program + legal + IT) for each system. Set review cadences and sign-off checkpoints.
  • Train your workforce: Provide practical training on responsible AI use, bias mitigation, and documentation standards.
  • Prepare a communications plan: Draft FAQs and public summaries that explain why a system is used, how it's governed, and how the public can raise concerns.
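
To make the inventory step concrete, here is a minimal sketch of what one inventory record could look like in Python. All field names and example values are illustrative assumptions, not terms drawn from S.B. 7599; adapt the schema to your agency's records standards and counsel's advice.

```python
# A minimal sketch of an AI system inventory record. Field names and
# example values are hypothetical, not taken from the bill text.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    name: str                    # human-readable system name
    owner: str                   # responsible official or unit
    purpose: str                 # what decision or service it supports
    status: str                  # "in use", "pilot", or "procurement"
    data_inputs: list[str] = field(default_factory=list)
    affected_populations: list[str] = field(default_factory=list)
    bias_mitigations: list[str] = field(default_factory=list)

# Illustrative entry; every value here is made up for the example.
inventory = [
    AISystemRecord(
        name="Benefits eligibility screener",
        owner="Office of Program Integrity",
        purpose="Flag applications for manual review",
        status="in use",
        data_inputs=["application form fields", "prior case history"],
        affected_populations=["benefits applicants"],
        bias_mitigations=[
            "quarterly disparate impact testing",
            "human review of all denials",
        ],
    ),
]

# Serialize to JSON as a machine-readable source of truth that can feed
# both internal reviews and a public-facing summary page.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```

Keeping the inventory in one structured format means the public AI use page can be generated from it rather than maintained by hand, which makes the "keep it updated" requirement far easier to meet.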
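For the bias documentation step, the four-fifths (80%) rule is one common screening heuristic for disparate impact. The sketch below applies it to hypothetical outcome data; S.B. 7599 does not prescribe any particular test, so treat this as a starting point for a monitoring plan, not a compliance standard.

```python
# A minimal disparate impact screen using the four-fifths rule.
# Group data below is invented for illustration only.

def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of favorable outcomes (True) within a group."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high > 0 else 0.0

# Illustrative outcome data: True = favorable decision.
group_a = [True] * 72 + [False] * 28   # 72% favorable
group_b = [True] * 50 + [False] * 50   # 50% favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold; flag for human review.")
```

Running the check on a schedule, and logging each result alongside the human review that followed, produces exactly the kind of documented mitigation record the disclosure requirement anticipates.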

Open Questions to Track

  • Effective dates and compliance timelines.
  • How "AI system" is defined and any exemptions (e.g., low-risk tooling, back-office automation).
  • Enforcement mechanisms, oversight authority, and penalties for non-compliance.
  • Interaction with existing laws and policies (e.g., public records, privacy, student protections, law enforcement procedures).
  • Whether the legislature will negotiate post-signing changes and what those changes include.

Bottom Line

The expectations are clear: know your systems, show your work, and keep people in the loop. If you start with an inventory, public summaries, and practical bias checks, you'll be in a strong position when formal guidance and compliance timelines arrive.
