New York Laws "RAISE" the Bar in Addressing AI Safety: The RAISE Act and AI Companion Models
New York moved early in 2025 and finished the year with two marquee laws: an AI Companion Model law effective November 5, 2025, and an amended Responsible Artificial Intelligence Safety and Education (RAISE) Act signed December 19, 2025. Together, they set a higher floor for safety, disclosure, and accountability.
New York now sits alongside Colorado, California, and Utah in a growing patchwork of state AI statutes. Dozens of states passed AI-related laws in 2025, with more expected in 2026 despite fresh federal headwinds.
Why this matters for legal teams
These laws pull AI safety obligations out of policy decks and into enforceable state requirements. They also create new exposure for product, policy, and engineering decisions that used to be treated as "best practices."
If your organization develops, deploys, or integrates conversational AI or large-scale models with New York users or operations, you'll need concrete controls, audit trails, and incident processes, not just principles.
AI Companion Models: Scope and duties
The AI Companion Model law covers operators and providers of AI systems that simulate a sustained human-like relationship with users in New York. It applies where systems: (1) remember prior interactions to personalize engagement, (2) ask unsolicited emotion-based questions, and (3) sustain ongoing dialogue about personal matters.
"Emotional recognition algorithms" include AI that interprets emotion from text (e.g., NLP, sentiment analysis), audio (voice emotion AI), and video (facial movement, gait, physiological signals). Exclusions cover systems used solely for customer service, internal purposes, employee productivity, or primarily for efficiency improvements, research, or technical assistance.
Key requirements
- Non-human disclosure: Provide clear, conspicuous notice (spoken or written) that the user is not interacting with a human, at least once per day and at least every three hours during continuing interactions (see the cadence sketch after this list).
- Self-harm protocols: Implement reasonable measures to detect and address expressions of suicidal ideation or self-harm. At minimum, refer users to an appropriate crisis service upon detection.
- Enforcement: The New York Attorney General may seek injunctions and civil penalties up to $15,000 per day for violations.
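For product and engineering teams, the cadence duty above can be reduced to a timing check. The following is a minimal sketch, assuming the cadence as summarized in this article (at least once per day, and at least every three hours during a continuing interaction); the function name, interval values, and session flag are illustrative, and the actual triggers should be confirmed against the statutory text.

```python
from datetime import datetime, timedelta

# Illustrative intervals based on the cadence summarized above; confirm the
# exact statutory triggers with counsel before relying on these values.
DAILY_INTERVAL = timedelta(days=1)
SESSION_INTERVAL = timedelta(hours=3)

def notice_due(last_notice_at: datetime | None, now: datetime,
               session_active: bool) -> bool:
    """Return True when a fresh non-human disclosure should be shown or spoken."""
    if last_notice_at is None:
        return True  # user has never received the disclosure
    elapsed = now - last_notice_at
    if elapsed >= DAILY_INTERVAL:
        return True  # daily disclosure has lapsed
    return session_active and elapsed >= SESSION_INTERVAL  # mid-session reminder
```

Whatever cadence the product adopts, each delivered notice should be logged with a timestamp and channel (voice or text) so the evidence items in the next section can be reconstructed during an inquiry.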
Action items for counsel
- Confirm product scope: does any feature simulate ongoing personal relationships or use emotional recognition?
- Lock in notice mechanics: wording, placement, and cadence that meet the "clear and conspicuous" standard across voice and text UIs.
- Document detection logic: thresholds, models, and human-in-the-loop escalation for self-harm cues (see the routing and logging sketch after this list).
- Build referral workflows: crisis-service routing, localization for New York users, and uptime expectations.
- Preserve evidence: logs showing notice delivery, model triggers, referrals, and post-incident reviews.
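A minimal sketch of how the detection, referral, and evidence items above might fit together. The classifier threshold, event schema, and referral wording are illustrative assumptions, and the 988 Suicide & Crisis Lifeline appears only as an example of a U.S. crisis resource; the actual referral target, thresholds, and escalation path belong in the written protocol agreed with counsel and the contracted crisis service.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logger = logging.getLogger("companion.audit")

# Illustrative values; calibrate with clinical advisors and counsel.
SELF_HARM_THRESHOLD = 0.85
CRISIS_REFERRAL = ("If you are thinking about harming yourself, you can call "
                   "or text 988 to reach the Suicide & Crisis Lifeline.")

@dataclass
class AuditEvent:
    """One evidence record: what was detected, what the system did, and when."""
    user_id: str
    event_type: str          # e.g. "self_harm_detected" or "referral_shown"
    model_score: float
    occurred_at: str

def handle_self_harm_signal(user_id: str, score: float) -> str | None:
    """Route a classifier score: log the evidence and return a referral message."""
    if score < SELF_HARM_THRESHOLD:
        return None
    event = AuditEvent(user_id, "self_harm_detected", score,
                       datetime.now(timezone.utc).isoformat())
    logger.info(json.dumps(asdict(event)))  # retained for the audit trail
    # Human-in-the-loop escalation to a reviewer queue would be triggered here.
    return CRISIS_REFERRAL
```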
New York's approach is novel but not isolated. California's SB 243, effective January 1, 2026, tracks many of these duties with added focus on minors. See the statutory text for details: California SB 243.
RAISE Act: Frontier model obligations
The amended RAISE Act (effective January 1, 2027) targets "frontier models" developed, deployed, or operated in New York. Frontier models are extremely large-scale systems trained with more than 10^26 computational operations and with compute costs exceeding $100 million ($5 million for certain distilled models).
Chapter amendments may add a $500 million revenue threshold to align with California's Transparency in Frontier AI Act. Final text was not publicly available as of January 8, 2026, so align plans with the current bill text and official releases, and update once chapter amendments publish.
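A rough screening sketch of the coverage thresholds as summarized above. The figures reflect this article's reading of the current bill text, the pending $500 million revenue threshold is included only as an unconfirmed placeholder, and the way the compute, cost, and distillation criteria interact is simplified; any real scoping analysis should map to the final statutory definitions.

```python
# Thresholds as summarized above; confirm against the final statutory text.
COMPUTE_OPS_THRESHOLD = 1e26             # training operations
COMPUTE_COST_THRESHOLD = 100_000_000     # USD, standard frontier models
DISTILLED_COST_THRESHOLD = 5_000_000     # USD, certain distilled models
PENDING_REVENUE_THRESHOLD = 500_000_000  # USD, unconfirmed chapter amendment

def may_be_frontier_model(training_ops: float, compute_cost_usd: float,
                          is_distilled: bool = False) -> bool:
    """Rough screen for whether a model could fall within RAISE Act scope.

    Simplification: treats the operations and cost criteria as conjunctive and
    applies the lower cost floor to distilled models; the statute's actual
    definitions control.
    """
    cost_floor = DISTILLED_COST_THRESHOLD if is_distilled else COMPUTE_COST_THRESHOLD
    return training_ops > COMPUTE_OPS_THRESHOLD and compute_cost_usd > cost_floor
```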
Core duties (current text)
- Annual safety reviews and independent third-party audits; update protocols as needed.
- Public safety disclosures (with limited redactions), with access provided to the New York Attorney General and the Division of Homeland Security and Emergency Services.
- 72-hour incident reporting for defined safety incidents.
- Critical harm assessment: determine if a model could cause death or serious injury to 100+ people, at least $1 billion in damage, enable CBRN weapon design, or engage in conduct with limited human intervention that would be criminal if done by a person (with specified mens rea).
- Safety and security protocol: preventive controls and ongoing testing to mitigate critical harms.
- Recordkeeping: maintain specified records and reports.
Deployment of models posing an "unreasonable risk of critical harm" is prohibited. A new office within the New York Department of Financial Services will oversee AI development. The AG may seek civil penalties up to $1,000,000 for a first violation and up to $3,000,000 for subsequent violations in the amended framework (a reduction from the higher penalties in the current text). Whistleblower protections apply.
What this means for frontier developers
- Expect audit-grade documentation: model cards won't be enough; threat modeling, eval results, red-teaming artifacts, and change logs will be scrutinized.
- Incident response must be enterprise-grade: defined severity thresholds, legal hold triggers, and 72-hour reporting readiness (see the deadline sketch after this list).
- Third-party audit scope should include safety evals, data governance, secure training/inference environments, and release gates tied to risk tiers.
- Procurement and vendor contracts need clauses for data sharing, audit cooperation, and incident notice aligned to New York timelines.
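To anchor the 72-hour readiness item above, a minimal deadline sketch. It assumes the clock runs from detection; whether the statute measures from detection, occurrence, or a determination of reportability is a question for counsel once final text is available.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # defined safety incidents, per the duty above

def report_deadline(detected_at: datetime) -> datetime:
    """Latest reporting time, assuming the clock starts at detection."""
    return detected_at.astimezone(timezone.utc) + REPORTING_WINDOW

# Example: an incident detected 2026-03-02 17:30 UTC would need to be reported
# by 2026-03-05 17:30 UTC under this assumption.
print(report_deadline(datetime(2026, 3, 2, 17, 30, tzinfo=timezone.utc)))
```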
Federal headwinds and preemption risk
A December executive order directed the DOJ to form an AI Litigation Task Force to challenge state AI laws on interstate commerce and First Amendment grounds, and signaled that federal funds could be withheld from states with onerous AI rules. The order does not carry congressional preemptive force, but it creates legal uncertainty.
Expect states to tune timelines as challenges move through the courts. Plan for dual tracking: comply with near-term state mandates while preserving arguments and records in case federal preemption gains traction.
Multi-state signals to watch
- California's Transparency in Frontier AI Act for alignment on thresholds and disclosures. Reference text: California Legislative Information.
- Utah and Colorado laws governing chatbots more broadly; California's companion rules for minors.
- Enforcement posture: the Kentucky AG reportedly filed the first state suit against a companion chatbot company.
Compliance checklist for counsel
- Scoping: Inventory New York users and New York-based development or operations for coverage under each law.
- Notices: Lock notice frequency and scripts; verify accessibility in voice and text interfaces.
- Crisis protocols: Calibrate detection thresholds; define human escalation; contract with crisis services; test end-to-end routing.
- Safety governance: Create a written safety and security protocol with release gates tied to risk assessments.
- Audits: Select independent auditors; define scope; schedule pre-audit dry runs; remediate findings fast.
- Incident response: 72-hour reporting playbooks; roles and sign-offs; evidence retention; regulator communications.
- Recordkeeping: Standardize logs for notices, incidents, evals, and red-teaming; implement legal holds.
- Contracts: Update vendor and cloud terms for audit cooperation, incident notice, and safety testing rights.
- Cross-state alignment: Harmonize with California SB 243 and TFAIA; track minors' protections and any added thresholds.
- Board oversight: Brief directors on enforcement risk, capital needs for audits, and deployment gates that may delay launches.
Timeline and watch items
- AI Companion Model law: Effective November 5, 2025; enforcement active now.
- RAISE Act: Effective January 1, 2027; chapter amendments expected, with final text pending as of January 8, 2026.
- Federal litigation risk: Monitor DOJ activity and potential stays that could affect timing or scope.
Bottom line
New York is setting bright lines for AI companions and frontier models. Treat disclosures, crisis handling, testing, and audits as non-negotiable operational requirements, not future goals.
Stand up a cross-functional group now (legal, security, safety, product, and engineering) to meet New York's timelines and tighten documentation before audits and incident clocks start.
Need structured training for counsel and compliance teams? Curated programs can speed up playbook development and control design. Explore role-based options: Courses by Job.