New York's AI Rules and Health Data Push: What Leaders in Marketing and Management Need to Know
Here's the short version: New York enacted the Responsible AI Safety and Education (RAISE) Act, while the New York Health Information Privacy Act (NYHIPA) was vetoed. One zeroes in on the most advanced AI systems. The other would have set strict rules for health-related data but didn't make it over the finish line. Both send clear signals for how to build, buy, and market with AI in New York.
The RAISE Act: Focused on "frontier" AI
The RAISE Act targets only the highest-end AI models: systems trained at huge scale, meaning more than 10^26 training operations and compute spend over $100M. This is about model developers and deployers at the top tier, not everyday automation. Covered systems include, for example:
- Large language models trained at massive scale (e.g., GPT-4-class, Claude-class, Gemini-class)
- Generative AI that can create highly realistic video/audio, including synthetic voices and deepfake-quality media
- Advanced medical or scientific models used for diagnostics, discovery, or simulations
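To make the scale threshold concrete, here is a minimal sketch of a scope check, assuming both the training-operations and compute-spend thresholds described above must be exceeded. The example figures are hypothetical, not real model data.

```python
# Rough sketch: flag whether a model appears to cross the "frontier" thresholds
# described above. Assumes both thresholds must be exceeded; the example
# figures below are placeholders, not real model data.

TRAINING_OPS_THRESHOLD = 1e26            # training compute, in operations
COMPUTE_SPEND_THRESHOLD = 100_000_000    # training compute cost, in USD

def looks_like_frontier_model(training_ops: float, compute_spend_usd: float) -> bool:
    """Return True if both thresholds described in the article are exceeded."""
    return (training_ops > TRAINING_OPS_THRESHOLD
            and compute_spend_usd > COMPUTE_SPEND_THRESHOLD)

print(looks_like_frontier_model(3e26, 250_000_000))  # True: likely in scope
print(looks_like_frontier_model(5e24, 2_000_000))    # False: everyday automation
```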
Key obligations for covered "large developers": publish a safety and security protocol (with limited redactions), evaluate whether deployment could cause "critical harm," and report qualifying safety incidents to the New York Attorney General within 72 hours. Enforcement sits with the Attorney General, with no private lawsuits. The law takes effect January 1, 2027.
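If your vendors or your own team will be working to that 72-hour clock, it helps to make the window explicit in whatever incident tooling you use. A minimal sketch, with an invented detection timestamp:

```python
# Minimal sketch: compute a reporting deadline from the time a qualifying
# safety incident is detected, using the 72-hour window described above.
# The detection timestamp is hypothetical.

from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Return the latest time a report should be filed under a 72-hour window."""
    return detected_at + REPORTING_WINDOW

detected = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)  # hypothetical detection time
print(reporting_deadline(detected).isoformat())  # 2027-03-04T09:30:00+00:00
```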
For buyers of frontier models, expect contractual flow-downs: audit rights, usage restrictions (e.g., limits on synthetic media), and proof of safety processes. Legal teams and procurement will feel this first, but product, brand, and growth teams will live with the outcomes.
Marketing and management implications
- Ask vendors for safety documentation, incident-reporting commitments, and guardrails on synthetic media and impersonation.
- Update your AI-use policy: who can prompt what, with which models, and how outputs are reviewed before publishing. A minimal policy sketch follows this list.
- Stand up an AI incident process aligned to 72-hour reporting norms. Run a tabletop drill.
- Label or watermark AI-generated content where feasible. Set internal rules for voice cloning, likeness, and endorsements.
- Add brand, legal, and security stakeholders to your AI review council; record decisions and model choices.
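One way to keep an AI-use policy enforceable is to express it as data that a review workflow can check. The sketch below is illustrative only: the role names, model ID, and review rules are placeholder assumptions to adapt, not a recommended policy.

```python
# Sketch of an internal AI-use policy expressed as data, so a CMS workflow or
# pre-publish checklist can enforce it. All values are placeholders.

AI_USE_POLICY = {
    "approved_models": ["vendor-llm-enterprise"],              # placeholder model ID
    "roles_allowed_to_prompt": ["content_strategist", "designer"],
    "requires_human_review": True,                              # review outputs before publishing
    "synthetic_media": {
        "voice_cloning_allowed": False,
        "requires_label": True,                                 # label AI-generated content
    },
}

def can_publish(role: str, model: str, human_reviewed: bool) -> bool:
    """Check a draft against the policy before it goes out."""
    p = AI_USE_POLICY
    return (role in p["roles_allowed_to_prompt"]
            and model in p["approved_models"]
            and (human_reviewed or not p["requires_human_review"]))

print(can_publish("content_strategist", "vendor-llm-enterprise", human_reviewed=True))  # True
```

A check like this can sit behind a publishing tool or simply serve as the written record your AI review council maintains.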
If your team needs a shared framework for risk controls, the NIST AI Risk Management Framework is a helpful starting point. For enforcement context and consumer protection priorities, see the New York Attorney General.
NYHIPA vetoed: Why it still matters
NYHIPA was vetoed rather than signed, but expect the idea to return in some form. The bill would have covered any entity processing health-related data tied to people in New York, regardless of HIPAA status, which is broader than many state health data laws. Its core provisions:
- Strict limits on processing without express authorization and standalone, clear consent
- Bans on consent flows that steer or confuse users
- Research, development, and marketing excluded from "internal operations" (so training models or improving products with health data could require new authorization)
- Strong access and deletion rights, plus a duty to notify downstream providers of deletions going back a year
If you touch health-adjacent data (marketing, apps, wearables, location, audience segments)
- Define what "health-related" signals you collect (e.g., reproductive health interests, pharmacy visits, step counts, symptom checkers, location near clinics).
- Separate consents: one for service delivery, another for advertising or AI training. Make "no" easy and visible. One way to record those flags is sketched after this list.
- Limit data to what's essential. Shorten retention. Turn off sensitive trackers in high-risk contexts.
- Contract for a deletion cascade: ensure vendors and downstream partners can erase data within set timelines.
- Review adtech and data brokers for sensitive categories, segment inferences, and geofencing near healthcare locations.
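For the separate-consent point above, a purpose-specific consent record is one straightforward pattern. The field names and purposes below are assumptions for illustration, not a compliance template.

```python
# Sketch of recording separate, purpose-specific consents rather than one
# blanket opt-in. Field names, purposes, and the user ID are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    service_delivery: bool = False   # consent for the core service only
    advertising: bool = False        # separate consent for ad targeting
    ai_training: bool = False        # separate consent for model training
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        """Purpose-level check before any health-adjacent data is processed."""
        return bool(getattr(self, purpose, False))

record = ConsentRecord(user_id="u-123", service_delivery=True)  # hypothetical user
print(record.allows("advertising"))  # False until the user opts in separately
```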
What to do this quarter
- Inventory AI: which models you use, who provides them, and where outputs go in your content and product flows.
- Update vendor terms to reflect safety, audit, and usage restrictions that may stem from RAISE obligations.
- Refresh consent UX for any health-adjacent data. Keep it plain, separate, and reversible.
- Build a deletion cascade playbook. Test it with a small set of vendors. See the sketch after this list.
- Train marketing, product, and legal on synthetic media risks and approval steps.
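For the deletion cascade playbook, the core mechanic is fanning one request out to every downstream recipient of the data and tracking acknowledgements against a deadline. A rough sketch, with placeholder vendor names and an assumed 30-day erasure window:

```python
# Sketch of a deletion cascade: fan a deletion request out to downstream
# vendors and record a due date for each acknowledgement. The vendor list,
# notify() stub, and 30-day window are illustrative assumptions.

from datetime import datetime, timedelta, timezone

VENDORS = ["adtech-partner", "email-platform", "analytics-warehouse"]  # placeholders
DELETION_SLA = timedelta(days=30)  # contractual erasure window (assumption)

def notify(vendor: str, user_id: str) -> bool:
    """Stub standing in for a vendor's deletion API or ticketing process."""
    print(f"deletion request for {user_id} sent to {vendor}")
    return True

def cascade_deletion(user_id: str) -> dict:
    """Send the request to every vendor and record when each must confirm."""
    due = datetime.now(timezone.utc) + DELETION_SLA
    return {vendor: {"requested": notify(vendor, user_id), "due": due.isoformat()}
            for vendor in VENDORS}

print(cascade_deletion("u-123"))  # hypothetical user ID
```

Even if the real workflow runs through tickets rather than code, the playbook should capture the same three things: who gets notified, by when, and how confirmation is tracked.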
Why early action pays
Even narrow laws push broader changes through contracts, product decisions, and risk reviews. Teams that plan now will ship faster later, with fewer rewrites and cleaner audits. The path is simple: map your data, pick your models with intention, and operationalize consent and incident response.
Upskill your team
If you're formalizing AI skills across marketing and management, explore role-based learning paths and certifications for your team.