Hochul set to sign RAISE Act: 72-hour AI safety alerts and million-dollar fines for Big Tech

NY's RAISE Act puts big AI developers on a 72-hour safety clock, with million-dollar fines for missing it. A new oversight office signals a tougher line than California.

Categorized in: AI News, Legal
Published on: Dec 22, 2025

New York's RAISE Act sets a tougher AI safety bar

Gov. Kathy Hochul is set to sign New York's first major AI safety law, according to POLITICO. The measure, known as the RAISE Act, puts large AI developers on a clear compliance clock and signals a stricter posture than other states have taken.

The headline rule: developers of powerful AI tools must report major safety issues within 72 hours. Miss the deadline and they face million-dollar fines.

The law targets industry leaders including OpenAI, Meta, Google, and Microsoft. It also establishes a new oversight office to monitor compliance and enforce penalties.

New York's version is described as stricter than California's similar law. It arrives as states push back on President Trump's efforts to weaken AI regulation.

What legal teams should focus on now

  • Stand up an incident reporting playbook built around the 72-hour window. Define internal triggers, escalation paths, and sign-off authority.
  • Map where your organization is a "developer" versus a user or integrator. Exposure hinges on role, not just use of AI.
  • Refresh contract terms with AI vendors and partners to address incident notice, cooperation, and audit rights. Flow-down obligations where appropriate.
  • Tighten logging, monitoring, and evidence preservation to support rapid triage and defensible reports.
  • Prepare jurisdictional variance plans. New York is stricter than California; expect different reporting thresholds and timelines.
  • Assign regulatory liaison(s) for the new New York oversight office and maintain ready-to-send notification templates.

Key provisions to track in the final text

  • Definitions for "powerful AI tools" and "major safety issues." These will determine scope and when the clock starts.
  • The authority and process of the new oversight office, including guidance, audits, and enforcement procedures.
  • Penalty framework, mitigation factors, and how repeated noncompliance will be treated.
  • Interaction with other state and federal requirements, and any preemption or safe harbor language.

Action items for in-house and outside counsel

  • Run a tabletop exercise simulating a safety incident and a 72-hour regulatory report.
  • Inventory AI systems, model providers, and internal builds; tag systems that could fall under the Act.
  • Pending state guidance, draft internal definitions of "reportable" events to drive consistent triage.
  • Align board and executive briefings on risk exposure, disclosure obligations, and budget needs for compliance.

Why this matters

New York is setting a higher bar on speed and accountability in AI incident reporting. For developers and enterprises building with frontier models, the operational lift will come from detection, documentation, and cross-functional coordination more than the report itself.

Getting the process right now will save money later, especially where million-dollar penalties are in play and multi-state rules conflict.

Looking to upskill legal and compliance teams on AI systems, risks, and governance? Explore curated options by job role at Complete AI Training.

