New York Poised to Be at the Forefront of AI Regulation: Five Bills Await Gov. Hochul's Action
New York's 2025 legislative session produced a slate of AI bills that, if signed, would set a tougher standard for developers, platforms, advertisers, and public entities. Governor Hochul has until January to approve or veto each bill and is soliciting stakeholder input now.
"Whoever leads in the AI revolution will lead the next generation of innovation and progress… New York is not just keeping pace with the AI revolution; we are setting the standard for how it should be done." The intent is clear: innovation with accountability, backed by penalties and public disclosure.
The Responsible AI Safety and Education Act (RAISE Act) (S.6953-B/A.6453-B)
This bill targets high-end model developers and imposes safety, security, and disclosure duties.
- Coverage: Developers with training costs exceeding $100 million (note: threshold is based on training spend, not revenue).
- Core obligations: Implement safety measures to prevent misuse; submit safety and security policies describing how the company reduces risk of critical harm; certain models are explicitly barred from deployment.
- Penalties: Up to $10 million for a first violation and $30 million for repeat violations.
- Context: Mirrors California's SB 53 in several areas but goes further on scope and documentation.
Action for counsel:
- Confirm if any in-scope models meet the $100M training cost threshold (internal and vendor models).
- Map "critical harm" risks and document mitigations; formalize red-teaming, incident response, and model gating procedures.
- Draft the required safety and security policies; align export controls, model cards, and usage policies.
- Review product roadmaps for models that could be prohibited; pause launches until cleared.
- Budget for compliance audits and repeat testing post-updates.
Warning Labels for Certain Social Media Platforms (S.4505/A.5346)
Platforms that deploy specific design features, such as addictive feeds or infinite scroll, would need to display warning labels at every access point.
- The Office of Mental Health (OMH) sets the warning label content; operators must implement labels consistently.
- Penalties: Up to $5,000 per violation.
- Follows similar moves in other states and complements NY's Child Data Protection Act and SAFE for Kids (AG draft regs pending).
Action for counsel:
- Inventory affected UX patterns (infinite scroll, autoplay, engagement loops).
- Design and QA label placement at each entry point; prepare localization and accessibility versions.
- Update terms, parental controls, and age-assurance flows to align with New York's youth protections.
- Train moderation and product teams on enforcement and recordkeeping for audits.
Synthetic Performers in Advertising (S.8420-A/A.8887-B)
Commercial ads using synthetic performers, digitally created media that appear to be real people, must disclose that use or face penalties.
- Disclosure required by the ad creator or producer; exceptions exist for ads that are solely audio.
- Penalties: Up to $5,000 per violation.
- Builds on New York's election "deep fake" rules by covering commercial advertising.
Action for counsel:
- Add a "synthetic performer" review checkpoint to ad production workflows and agency MSAs.
- Standardize on-screen disclosure language and placement; archive evidence of compliance per asset.
- Address SAG-AFTRA and likeness clauses in talent agreements to avoid conflicts.
Expansion of Right of Publicity Statute (S.8391/A.8882)
New York's 2020 right of publicity statute required consent for commercial uses of a deceased person's name, image, or likeness, and allowed disclaimers in lieu of consent for expressive audiovisual works. This bill would tighten that rule.
- Prior consent from heirs would be required for use of a deceased person's voice or likeness in media (not just a disclaimer).
Action for counsel:
- Refresh clearance procedures for documentaries, biopics, training data, and post-production voice models.
- Confirm estate authority and chain of title; document consent scope and duration.
- Adjust E&O insurance, indemnities, and vendor warranties to cover synthetic voice/likeness use.
Expansion of the LOADinG Act (S.7599-C/A.8295-D)
New York's 2024 law limited how state agencies use automated decision-making (ADM) systems. The expansion would widen the law's reach and strengthen its transparency requirements.
- Scope: Extends requirements to additional governmental entities, including local governments and certain educational institutions.
- Disclosures: Broadens what must be reported about automated decision-making tools.
- Transparency: The NYS Office of Technology Services must maintain a public inventory of automated tools in use.
Action for counsel (public entities and vendors):
- Inventory all ADM tools (procured and internally developed); document purpose, datasets, fairness testing, and human-in-the-loop controls.
- Prepare public-facing summaries; establish a cadence to keep the inventory accurate.
- Update procurement clauses: audit rights, transparency artifacts, incident notice, and bias remediation.
What to Do Before January
- Run a gap assessment against each bill; flag high-cost models and high-risk use cases first.
- Stand up cross-functional governance (legal, security, product, compliance) with decision rights and paper trails.
- Refresh vendor diligence: training costs, eval results, safety policies, and downstream usage restrictions.
- Draft playbooks: disclosures, warnings, consent capture, takedown/remediation, and regulator response.
- Prepare comment letters for stakeholder outreach while the Governor's office reviews input.
If enacted, these measures will demand real documentation, real controls, and real enforcement. Teams that start building the evidence now will move faster later, without scrambling under penalty risk.