New York Passes the Responsible AI Safety and Education Act
New York's legislature has approved the Responsible AI Safety and Education Act (SB6953B), known as the RAISE Act. The bill is now awaiting Governor Kathy Hochul’s signature before it becomes law.
Who Does the RAISE Act Affect?
The act targets “large developers,” defined as entities that have trained at least one frontier AI model and spent over $100 million on compute costs for training such models. A “frontier model” refers to AI models trained with over 10²⁶ computational operations and costing more than $100 million to build, or models created by applying “knowledge distillation” to a frontier model with compute costs exceeding $5 million.
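Read together, the definition has two triggers. As a rough, non-authoritative illustration, the Python sketch below encodes the statutory numbers; the function and parameter names are hypothetical, and the real determination would turn on the act's full legal definitions.

```python
# Hypothetical sketch of the RAISE Act's "frontier model" thresholds.
# This illustrates the statutory numbers only; it is not legal guidance.

FRONTIER_COMPUTE_OPS = 10**26          # computational operations used in training
FRONTIER_COMPUTE_COST = 100_000_000    # USD spent on training compute
DISTILLATION_COMPUTE_COST = 5_000_000  # USD spent distilling a frontier model

def is_frontier_model(training_ops: float,
                      training_cost_usd: float,
                      distilled_from_frontier: bool = False,
                      distillation_cost_usd: float = 0.0) -> bool:
    """Return True if a model meets either prong of the definition."""
    # Prong 1: trained with over 1e26 operations AND over $100M in compute cost.
    if training_ops > FRONTIER_COMPUTE_OPS and training_cost_usd > FRONTIER_COMPUTE_COST:
        return True
    # Prong 2: created by knowledge distillation of a frontier model,
    # with compute costs exceeding $5M.
    if distilled_from_frontier and distillation_cost_usd > DISTILLATION_COMPUTE_COST:
        return True
    return False
```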
Knowledge distillation is a supervised learning technique in which a smaller AI model is trained using a larger model or its outputs, with the goal of achieving comparable performance at a fraction of the size and cost.
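For a concrete picture of the technique itself, here is a minimal distillation loss in PyTorch. This is a generic sketch of standard knowledge distillation, not anything prescribed by the act; the temperature and weighting values are common but arbitrary choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft loss against the teacher's outputs with the usual
    hard-label cross-entropy, as in standard knowledge distillation."""
    # Soften both distributions, then push the student toward the teacher.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_targets,
                         reduction="batchmean") * temperature**2
    # Ordinary supervised loss on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```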
Key Requirements for Large Developers
Ban on Deploying Dangerous Frontier Models
The RAISE Act prohibits deploying any frontier model that poses an unreasonable risk of “critical harm.” This includes situations where the model could lead to the death or serious injury of 100 or more people, or cause at least $1 billion in property or monetary damage. Critical harm may result from:
- The use or creation of chemical, biological, radiological, or nuclear weapons;
- An AI model acting autonomously, without meaningful human intervention, in conduct that, if carried out by a person, would constitute a crime under New York law requiring intent, recklessness, or gross negligence.
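As a rough sketch of how the scale and cause requirements combine, the hypothetical check below encodes the statutory thresholds; it deliberately ignores the causation and foreseeability questions a real legal analysis would involve.

```python
from enum import Enum, auto

class HarmCause(Enum):
    CBRN_WEAPON = auto()       # chemical, biological, radiological, or nuclear weapons
    AUTONOMOUS_CRIME = auto()  # autonomous conduct criminal under New York law
    OTHER = auto()

def is_critical_harm(deaths_or_serious_injuries: int,
                     damage_usd: float,
                     cause: HarmCause) -> bool:
    """True when both a qualifying scale and a qualifying cause are present."""
    qualifying_scale = (deaths_or_serious_injuries >= 100
                        or damage_usd >= 1_000_000_000)
    qualifying_cause = cause in (HarmCause.CBRN_WEAPON,
                                 HarmCause.AUTONOMOUS_CRIME)
    return qualifying_scale and qualifying_cause
```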
Documentation and Transparency Before Deployment
Before launching a frontier model, large developers must:
- Create a written safety and security protocol;
- Keep an unredacted copy of this protocol, including records of any updates, for the duration of the model’s deployment plus five years;
- Publish a redacted version of the protocol publicly and provide copies to the New York Attorney General and the Division of Homeland Security and Emergency Services (DHSES), allowing the AG access to the unredacted version on request;
- Maintain detailed records of all testing procedures and results, sufficient for third parties to replicate the tests, for the model’s lifetime plus five years;
- Implement safeguards to reduce unreasonable risks of critical harm.
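One way to visualize these record-keeping duties is as a retention schedule attached to each deployment. The structure below is a hypothetical illustration with invented field names, not a compliance tool.

```python
from dataclasses import dataclass, field
from datetime import date

RETENTION_YEARS = 5  # records must outlive the deployment by five years

@dataclass
class SafetyProtocolRecord:
    """Hypothetical record mirroring the act's documentation duties."""
    model_name: str
    unredacted_protocol: str    # retained internally, with update history
    redacted_protocol: str      # published and shared with the AG / DHSES
    test_records: list[str] = field(default_factory=list)  # replicable test logs
    deployment_end: date | None = None  # None while the model stays deployed

    def retention_deadline(self) -> date | None:
        """Records must be kept until five years after deployment ends."""
        if self.deployment_end is None:
            return None  # still deployed: keep everything
        # Simple year arithmetic; Feb 29 edge cases are ignored in this sketch.
        return self.deployment_end.replace(
            year=self.deployment_end.year + RETENTION_YEARS)
```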
Annual Safety Reviews
Large developers must review their safety and security protocols annually. This review should consider any changes in the AI model’s capabilities and industry best practices. If significant changes are made, a redacted updated protocol must be published and shared with the relevant authorities.
Reporting Safety Incidents
Any safety incident involving a frontier model must be reported to the New York Attorney General and DHSES within 72 hours of discovery. A safety incident includes critical harm or events indicating an increased risk of such harm, such as:
- The model autonomously engaging in behavior without a user's request;
- Theft, unauthorized access, or accidental release of model weights;
- Failure of technical or administrative controls designed to limit model modifications;
- Unauthorized use of the frontier model.
Reports must include the date of the incident, the reasons it qualifies as a safety incident, and a clear, concise description of the event.
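In practice, the 72-hour clock and the required report fields can be modeled directly, as in the hypothetical sketch below; the field names and example dates are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncidentReport:
    """Hypothetical structure holding the fields the act requires."""
    incident_date: datetime    # when the incident occurred
    discovered_at: datetime    # when the developer learned of it
    qualifying_reasons: str    # why this event is a safety incident
    description: str           # clear, concise account of what happened

    def reporting_deadline(self) -> datetime:
        """Reports are due to the AG and DHSES within 72 hours of discovery."""
        return self.discovered_at + REPORTING_WINDOW

# Example: an incident discovered at noon on June 1 must be reported
# by noon on June 4.
report = SafetyIncidentReport(
    incident_date=datetime(2025, 6, 1, 9, 30),
    discovered_at=datetime(2025, 6, 1, 12, 0),
    qualifying_reasons="Unauthorized access to model weights",
    description="Credentials for the weights store were used from an unknown host.",
)
assert report.reporting_deadline() == datetime(2025, 6, 4, 12, 0)
```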
When Will the RAISE Act Take Effect?
If signed into law, the act will come into force 90 days after the governor’s approval.
For educators interested in AI and its safe implementation, the RAISE Act highlights how regulatory frameworks are evolving to ensure responsible AI development. Staying informed about such legislation can support curriculum development and foster critical discussions on AI safety in educational settings.
To explore AI courses that cover safety, ethics, and practical applications, visit Complete AI Training.