On December 19, 2025, just eight days after President Trump issued an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” to challenge burdensome state laws that regulate artificial intelligence (the “December 2025 EO”), New York Governor Kathy Hochul signed the Responsible Artificial Intelligence (“AI”) Safety and Education Act (the “RAISE Act”). The RAISE Act imposes transparency, compliance, safety, and reporting requirements on certain developers of large “frontier” AI models (defined below). The RAISE Act takes effect March 19, 2026.
The RAISE Act closely tracks California’s Transparency in Frontier Artificial Intelligence Act (the “TFAIA”), and it is the second major state AI legislation to be enacted since the President’s Executive Order 14179, signed on January 23, 2025, which directs federal agencies to eliminate regulatory barriers for AI innovation and development. California’s TFAIA was signed into law on September 29, 2025, and took effect on January 1, 2026.
Despite the Administration’s efforts to stymie state AI legislation, New York has joined California arm-in-arm to regulate large AI models by focusing on security and transparency. Colorado previously enacted the Colorado Artificial Intelligence Act, which regulates high-risk AI systems by focusing on algorithmic discrimination and is scheduled to become effective on June 30, 2026. Utah and Texas have passed more limited AI-related laws, and other states are sure to follow. All of this sets the stage for an impending challenge by the Administration in the coming months to try to prevent a patchwork of differing state AI regulations.
Scope and Requirements of the RAISE Act
The RAISE Act covers “large developers” of “frontier” AI models that are developed, deployed, or operating in New York. “Large developers” are those that have (i) trained at least one frontier model and (ii) spent more than $100 million in aggregate compute costs training frontier models generally. “Frontier” AI models refer to the most advanced, large-scale AI systems that are trained using more than 10^26 computational operations (floating-point operations, known as “FLOPs”), and that cost over $100 million in compute costs to train and develop. They also include models distilled from these larger models when the compute costs of the distillation exceed $5 million. Models that currently appear to be subject to the RAISE Act include OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, Microsoft’s Copilot, and Meta’s Llama.
Before deploying a frontier model, covered AI developers must develop, publicly disclose, and maintain a written safety and security protocol that provides, among other things:
- Reasonable administrative, technical, and physical cybersecurity protections that reduce the risk of unauthorized access to, or misuse of, the model leading to “critical harm,” which is defined as the death or serious injury of at least 100 people or at least $1 billion in damages arising from (i) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (ii) a frontier model autonomously committing or enabling a crime requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime;
- Protections and procedures that reduce the risk of critical harm; and
- Testing procedures to evaluate if the model poses an unreasonable risk of critical harm or could be used to create another frontier model in a manner that would increase the risk of critical harm.
These safety and security protocols must be reviewed and updated annually to account for changes in the model’s capabilities and industry best practices. They must also be published and submitted to the New York Attorney General and the Division of Homeland Security and Emergency Services (“DHSES”), with appropriate redactions to protect public safety, employee and customer privacy, and confidential information. A covered AI developer must retain the written protocols for as long as the frontier model is deployed, and for an additional five years thereafter.
Incident Reporting
The RAISE Act requires covered AI developers to report a “safety incident” involving a frontier model to the New York Attorney General and DHSES within 72 hours of discovery. A safety incident is either a known instance of critical harm or certain events that increase the risk of critical harm. These events include a frontier model acting autonomously without a user request, unauthorized use of the model, unauthorized disclosure of the model’s weights, or a critical control failure. The report must include an explanation as to why the event qualifies as a safety incident.
Notably, the reporting obligations in the RAISE Act are significantly stricter than those under California’s TFAIA, which gives AI developers 15 days to report an incident and covers only incidents that actually occurred, unless the incident poses an imminent risk of death or serious physical injury. In that case, the TFAIA requires the developer to disclose the incident within 24 hours to the appropriate authority based on the nature of the incident and applicable law.
Enforcement/Penalties
The RAISE Act expressly prohibits a private right of action and gives the New York Attorney General enforcement authority. Penalties include up to $10 million per initial violation and $30 million for subsequent violations.
Federal Challenge
The December 2025 EO is a follow-up to Executive Order 14179. The December 2025 EO announced an Administration policy to establish a “minimally burdensome” national standard for AI (Sec. 2). The December 2025 EO does not define what constitutes a “minimally burdensome” standard, and it recognizes that there is not yet a national standard. Despite this lack of clarity, the December 2025 EO orders the Secretary of Commerce, within 90 days and in consultation with various advisors, to evaluate and identify any state AI laws that are not “minimally burdensome.” This evaluation should identify laws that require AI models to “alter their truthful outputs,” or that require them to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution (Sec. 4).
The December 2025 EO also orders the U.S. Attorney General, within 30 days, to establish a new AI Litigation Task Force to challenge “onerous [state] laws that conflict with the policy” (Sec. 3). Any state having such onerous AI laws is subject to losing its congressionally approved non-deployment funds under the Broadband Equity, Access, and Deployment Program, which provides funding for workforce development, enhancing 911 networks, and AI-supportive telecommunications infrastructure, among other things (Sec. 5).
Given the Administration’s policy of restricting and pre-empting state AI regulation, it seems likely that the AI Litigation Task Force will challenge the RAISE Act, the TFAIA, and possibly other state AI laws, although the outcome of any such challenge is less clear. And it may not be long before those challenges arrive: on January 9, 2026, the Justice Department announced that the AI Litigation Task Force called for in the December 2025 EO had been established.
