New York Reaches Deal on AI Regulation: What IT, Developers, and Legal Teams Need to Do Now
New York has enacted the Responsible Artificial Intelligence Safety and Education (RAISE) Act, with a deal in place to amend it early next year. The agreement moves the law closer to California's approach after intense pressure from major tech firms.
The state will keep core safety and transparency obligations for top AI model developers, but remove broader measures that would have captured more companies and imposed extra safety requirements. The governor signed the original bill and secured a commitment from lawmakers to pass her requested changes when they return to Albany.
"New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols," the governor said, calling the reforms necessary as the federal government "lags behind."
What changed in the deal
- Closer alignment with California's AI law at the urging of large technology companies.
- Narrower scope: fewer companies will fall under the strictest obligations.
- Reduced mandatory safety steps compared with the original proposal.
- Two-step process: signed now; amendments expected early next year.
Who this likely affects
- Frontier model developers and providers building or deploying advanced general-purpose models.
- Enterprises integrating high-capability models into products where misuse or failure could cause material harm.
- Legal, risk, and security teams responsible for compliance, disclosures, and incident handling.
Federal context and preemption risk
The law arrives days after a federal executive order aimed at limiting states' ability to regulate the AI industry. That action has drawn bipartisan criticism and will likely face court challenges. Expect a moving target: companies may need to prepare for state requirements while tracking potential federal preemption.
Practical steps for engineering, product, and legal teams
- Inventory models and use cases: identify covered systems, model versions, and where they are integrated (products, internal tools, vendor APIs); a minimal inventory sketch follows this list.
- Document safety protocols: red-teaming, evaluation suites, adversarial testing, and safety mitigations tied to identified risks.
- Strengthen transparency: model/system cards, known limitations, intended use, release notes, and change logs.
- Vendor diligence: add AI safety and disclosure clauses to contracts; require eval results and incident reporting from third-party model providers.
- Incident response: define what counts as a "safety incident," escalation paths, notification timelines, and post-incident reviews.
- Data governance: track training data sources, licensing, privacy constraints, and dataset updates for reproducibility and audit trails.
- Access controls: restrict powerful capabilities (fine-tuning, tool use, elevated model settings) and log high-risk actions; see the audit-log sketch after this list.
- Risk reviews: include AI-specific checkpoints in product launch gates and change management.
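To make the inventory step concrete, here is a minimal sketch of a model and use-case register expressed in Python. The field names, the risk-tier taxonomy, and the "possibly in scope" flag are illustrative assumptions for internal tracking, not definitions from the statute.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelUseCase:
    """One row in an AI model/use-case inventory (illustrative fields only)."""
    model_name: str            # vendor or internal model identifier
    model_version: str         # pin the exact version you deploy
    provider: str              # "internal" or the third-party vendor
    integration_point: str     # product, internal tool, or vendor API
    use_case: str              # what the model is used for
    risk_tier: str             # your own taxonomy, e.g. "low" | "elevated" | "high"
    safety_evals: list[str] = field(default_factory=list)   # evaluation suites run
    mitigations: list[str] = field(default_factory=list)    # controls in place
    possibly_in_scope: bool = False   # flag for legal review, not a legal conclusion

# Example entry; all values are hypothetical.
entry = ModelUseCase(
    model_name="frontier-llm",
    model_version="2025-01-15",
    provider="third-party-api",
    integration_point="customer-support-assistant",
    use_case="drafting replies reviewed by a human agent",
    risk_tier="elevated",
    safety_evals=["jailbreak-suite-v3", "pii-leak-check"],
    mitigations=["output filtering", "human review before send"],
    possibly_in_scope=True,
)

print(json.dumps(asdict(entry), indent=2))  # export for audit or legal review
```

Keeping entries like this in version control gives legal and risk teams a single, auditable source for scoping analysis as the amended thresholds become clear.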
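The access-controls item can be handled the same way. The snippet below is a rough sketch of structured audit logging for restricted capabilities; the action names and log location are placeholders, not requirements from the law.

```python
import json
import time
import getpass

# Capabilities treated as high-risk in this sketch (your list will differ).
HIGH_RISK_ACTIONS = {"fine_tune", "enable_tool_use", "raise_output_limits"}

def log_high_risk_action(action: str, target_model: str, justification: str) -> None:
    """Append a structured audit record for a restricted capability (illustrative)."""
    if action not in HIGH_RISK_ACTIONS:
        return
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": getpass.getuser(),
        "action": action,
        "target_model": target_model,
        "justification": justification,
    }
    with open("ai_audit_log.jsonl", "a") as f:  # append-only audit trail
        f.write(json.dumps(record) + "\n")

# Example: record a fine-tuning run before it starts.
log_high_risk_action("fine_tune", "frontier-llm-2025-01-15", "ticket-1234: domain adaptation")
```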
Compliance artifacts to have ready
- Model cards and system cards for major releases (a minimal machine-readable sketch follows this list).
- Red-team and evaluation reports with methodology and findings.
- Safety policies, user safeguards, and misuse monitoring summaries.
- Third-party attestations or reports where available.
- Versioned documentation of mitigations tied to specific risks.
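For the model and system cards above, a machine-readable format makes versioning and diffing easier. The schema below is illustrative only; any templates that emerge from rulemaking may prescribe different fields.

```python
import json

# Minimal machine-readable model card sketch (illustrative schema, not a mandated format).
model_card = {
    "model": {"name": "frontier-llm", "version": "2025-01-15"},
    "intended_use": "drafting customer-support replies with human review",
    "out_of_scope_uses": ["legal or medical advice", "fully automated decisions"],
    "known_limitations": ["may hallucinate citations", "degraded quality on non-English input"],
    "evaluations": [
        {"suite": "jailbreak-suite-v3", "date": "2025-01-10", "result": "pass"},
        {"suite": "pii-leak-check", "date": "2025-01-10", "result": "pass"},
    ],
    "mitigations": ["output filtering", "rate limits on fine-tuning access"],
    "changelog": ["2025-01-15: updated safety filter thresholds"],
}

print(json.dumps(model_card, indent=2))  # commit alongside each release
```

Versioning the card with each release keeps the change log and evaluation history auditable.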
What to watch next
- Final amendment text in early-year legislative sessions: scope thresholds, definitions of "frontier" or "covered" models, and reporting duties.
- Potential rulemaking: templates, deadlines, and enforcement mechanics.
- California-New York parity: how closely the obligations match (to reduce multi-state overhead).
- Federal litigation or guidance that could reshape state authority.
Why this matters for teams
This deal points to a compliance baseline forming across large states: safety evaluations, transparency artifacts, and accountability for high-capability models. Even if you're not a frontier developer, your product or vendor stack may bring you into scope; plan accordingly.
Helpful frameworks
- NIST AI Risk Management Framework for building risk controls and documentation that regulators expect.
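One lightweight way to apply the framework is to tag each artifact or control with the NIST AI RMF core function it supports (Govern, Map, Measure, Manage). The tagging scheme below is a sketch of an internal convention, not an official NIST schema.

```python
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of the NIST AI Risk Management Framework 1.0."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

# Illustrative mapping of compliance artifacts to the RMF functions they support.
artifact_map = {
    "ai-use-policy": [RMFFunction.GOVERN],
    "model-inventory": [RMFFunction.MAP],
    "red-team-report": [RMFFunction.MEASURE],
    "incident-response-runbook": [RMFFunction.MANAGE],
    "model-card": [RMFFunction.MAP, RMFFunction.MEASURE],
}

for artifact, functions in artifact_map.items():
    print(artifact, "->", ", ".join(f.value for f in functions))
```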
Level up your team
If you need structured enablement for engineers and counsel on AI safety and compliance, see curated programs here: AI certifications and training.