RAISE Act In, NYHIPA Out: What New York's Moves Mean for AI and Privacy

New York just green-lit the RAISE Act for frontier models and vetoed NYHIPA. Teams should tune governance, update contracts, and test incident plans before 2027.

Published on: Jan 05, 2026

Recent Developments in AI and Privacy Legislation in New York State

New York sent two clear signals in its 2025 session: the Responsible AI Safety and Education (RAISE) Act is law, and the New York Health Information Privacy Act (NYHIPA) was vetoed. For legal and product leaders, this is a cue to tune governance, update contracts, and pressure-test data practices. The clock is already ticking for teams building with advanced AI.

The RAISE Act: What's In, What's Out

The RAISE Act targets "frontier models" only: systems trained at extreme scale, generally more than 10^26 computational operations and over $100 million in compute costs. Most business tools won't qualify. But if you develop or integrate frontier systems, you're on the hook.
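
As a rough yardstick, a widely used heuristic puts training compute at about 6 floating-point operations per parameter per training token. The sketch below applies it with illustrative figures (not any vendor's actual numbers) to gauge distance from the statutory threshold.

```python
# Back-of-the-envelope check against the RAISE Act's 10^26-operation threshold.
# Uses the common ~6 * parameters * training-tokens FLOPs heuristic; all
# figures below are illustrative.

FRONTIER_THRESHOLD_OPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} ops -> frontier? {flops > FRONTIER_THRESHOLD_OPS}")
# ~6.30e+24 ops: well under 1e26, so this run stays out of scope on compute
```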

Examples that could be in scope include:

  • Large language models trained at massive scale (e.g., GPT-4-class, Claude-class, Gemini-class).
  • Generative systems producing highly realistic video/audio, including synthetic voices and deepfake-quality media.
  • Advanced medical or scientific models for diagnostics, drug discovery, or complex biological simulations.

Covered "large developers" must publish a safety and security protocol (with limited redactions), assess whether deployment poses an unreasonable risk of "critical harm," and report qualifying safety incidents to the New York Attorney General within 72 hours. There's no private right of action; enforcement sits with the Attorney General, with significant civil penalties. The law takes effect January 1, 2027.

If you license frontier models, expect pass-through obligations via contract: audit rights, usage limits, disclosures, and incident-reporting expectations. Plan for that friction now.

Action Plan for Legal Teams

  • Contract playbook: add NY-specific AI riders, audit rights, incident notice within 72 hours, model lineage disclosures, compute threshold representations, and clear flow-down to subprocessors.
  • Incident readiness: establish a single timer for AI safety incidents, security events, and other statutory clocks. Define severity tiers and an escalation path to counsel and product.
  • Documentation: maintain an inventory of models, training runs, compute budgets, providers, and deployment endpoints. Pin down what could cross "frontier" thresholds (a minimal record sketch follows this list).
  • Policy alignment: map your controls to the NIST AI Risk Management Framework. Draft a public-facing safety protocol with redaction-ready sections.
  • Regulatory watch: monitor Attorney General guidance and potential rulemaking. Calibrate your roadmap to the January 2027 effective date.
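
As a minimal sketch, assuming a simple in-house registry (field names and thresholds are illustrative), an inventory record can carry the compute facts counsel needs, alongside a helper that computes the 72-hour notification deadline:

```python
# Sketch of a model inventory record plus the 72-hour incident clock.
# Assumes an in-house registry; field names and thresholds are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

INCIDENT_REPORT_WINDOW = timedelta(hours=72)  # NY AG reporting clock

@dataclass
class ModelRecord:
    name: str
    provider: str
    training_flops: float        # best estimate of total training compute
    compute_cost_usd: float      # aggregate training cost
    deployment_endpoints: list[str] = field(default_factory=list)

    def near_frontier(self) -> bool:
        # Flag runs well below the statutory lines so counsel reviews
        # them before they could cross into scope.
        return self.training_flops > 1e25 or self.compute_cost_usd > 10e6

def report_deadline(detected_at: datetime) -> datetime:
    """When the 72-hour Attorney General notification window closes."""
    return detected_at + INCIDENT_REPORT_WINDOW

deadline = report_deadline(datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc))
print(deadline.isoformat())  # 2027-03-04T09:00:00+00:00
```

Tracking near-threshold runs, rather than only models already in scope, gives legal a review window before obligations attach.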

Action Plan for Product Leaders

  • Design controls: build guardrails for synthetic media (e.g., provenance/watermarking), kill switches, rate limits, and abuse detection. Gate high-risk features behind approvals (a fail-closed flag sketch follows this list).
  • Evaluation: run red-team tests for critical harm scenarios. Instrument monitoring, incident taxonomies, and reproducible test suites tied to release criteria.
  • Model strategy: prefer smaller or fine-tuned models when they meet requirements. If you must use a frontier model, limit scope, add usage constraints, and isolate high-risk capabilities.
  • Data governance: log training/finetune sources, filter sensitive categories, and document exclusions. Keep a clear record of prompts, outputs, and known failure modes.
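
For the kill-switch and gating items above, one lightweight pattern is a fail-closed feature flag: high-risk generative features stay off unless explicitly approved. A sketch, assuming flags live in a JSON config (the feature names are illustrative):

```python
# Fail-closed feature gating for high-risk generative features.
# Assumes flags are stored in a JSON file; feature names are illustrative.
import json
import logging

HIGH_RISK_FEATURES = {"synthetic_voice", "video_generation"}

def load_flags(path: str = "feature_flags.json") -> dict[str, bool]:
    with open(path) as f:
        return json.load(f)

def is_enabled(feature: str, flags: dict[str, bool]) -> bool:
    # Fail closed: a high-risk feature with no explicit flag defaults to OFF,
    # so a missing or corrupted config acts as a kill switch.
    default = feature not in HIGH_RISK_FEATURES
    enabled = flags.get(feature, default)
    if not enabled:
        logging.warning("Feature %s gated off (kill switch or pending approval)",
                        feature)
    return enabled
```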

NYHIPA Vetoed: Why It Still Matters

NYHIPA didn't pass, but its concepts are a preview. The bill would have applied to almost any entity processing health-related information about a New York resident or someone in the state, regardless of HIPAA status. It set strict limits on processing without express authorization, required standalone consent, and banned consent flows that steer or confuse users.

It also excluded research, development, and marketing from "internal business operations," which means training or improving products with health-related data could have needed fresh authorization. Individuals would have gained strong access and deletion rights, including obligations to pass deletion requests downstream for the prior year. Expect a similar bill to resurface.

If You Touch Health-Adjacent Data, Do This Now

  • Data inventory: identify health-related data (including inferences), where it flows, and which vendors touch it. Tag anything tied to New York.
  • Consent flows: break out standalone consent, remove dark patterns, and add granular toggles for advertising, analytics, and AI training.
  • Training policy: define whether health-related data can be used for model training or product improvement. If yes, specify a separate authorization path.
  • Deletion operations: build procedures to notify downstream providers and third parties and confirm completion. Keep a one-year lookback (see the sketch after this list).
  • Marketing hygiene: avoid sensitive audience building using health signals without explicit authorization. Audit SDKs, pixels, and server-side tracking.
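
For the deletion item above, a minimal sketch, assuming disclosures to third parties are already logged per user (the structures and names here are hypothetical; the per-vendor notification call is left out):

```python
# Downstream deletion propagation with a one-year lookback.
# Assumes a disclosure log already exists; per-vendor notify() is omitted.
from datetime import datetime, timedelta, timezone

LOOKBACK = timedelta(days=365)

def recipients_to_notify(disclosure_log: list[dict], user_id: str,
                         now: datetime) -> set[str]:
    """Third parties that received this user's data within the lookback."""
    cutoff = now - LOOKBACK
    return {
        row["recipient"]
        for row in disclosure_log
        if row["user_id"] == user_id and row["shared_at"] >= cutoff
    }

log = [
    {"user_id": "u1", "recipient": "ad-vendor",
     "shared_at": datetime(2026, 2, 1, tzinfo=timezone.utc)},
    {"user_id": "u1", "recipient": "old-vendor",
     "shared_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
print(recipients_to_notify(log, "u1", datetime(2026, 6, 1, tzinfo=timezone.utc)))
# {'ad-vendor'} -- old-vendor falls outside the one-year lookback
```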

Quick Checklist for 2026 Planning

  • RAISE readiness: owner assigned, model inventory built, safety protocol drafted, 72-hour incident playbook tested, and contractual templates updated.
  • Engineering alignment: evaluation criteria set for "critical harm," centralized logging, and red-team cadence on the release train.
  • Vendor management: frontier model disclosures, compute thresholds, audit rights, and termination/kill-switch terms for noncompliance.
  • Privacy posture: consent UX reviewed, health-adjacent data flagged, and deletion propagation automation in place.
  • Board and budget: fund testing, data tooling, and legal resourcing to meet the January 1, 2027 start date.

Upskilling Your Team

If your roadmap includes advanced AI or sensitive data use, train your teams on safe deployment, evaluation, and governance. Curated learning paths by role can speed this up.

Explore AI courses by job

This material is for general information and does not constitute legal advice.

