On December 19, 2025, just eight days after President Trump issued an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” to challenge burdensome state laws that regulate artificial intelligence (the “December 2025 EO”), New York Governor Kathy Hochul signed the Responsible Artificial Intelligence (“AI”) Safety and Education Act (the “RAISE Act”). The RAISE Act imposes transparency, compliance, safety, and reporting requirements on certain developers of large “frontier” AI models (defined below). Although the final text of the RAISE Act has not yet been released, the law will take effect on January 1, 2027.
The RAISE Act closely tracks California’s Transparency in Frontier Artificial Intelligence Act (TFAIA), and it is the second major piece of state AI legislation enacted since the President’s Executive Order 14179, signed on January 23, 2025, which directs federal agencies to eliminate regulatory barriers to AI innovation and development. California’s TFAIA was signed into law on September 29, 2025, and took effect on January 1, 2026.
The RAISE Act significantly expands the authority of the New York Department of Financial Services (“DFS”) by establishing a new office in DFS dedicated to AI. DFS will have direct regulatory authority to oversee large AI developers operating in New York, not just those in financial services. DFS is charged with issuing rules and setting standards for frontier AI model safety, transparency, and incident reporting, and with assessing fees, addressing enforcement, and publishing an annual report on AI safety.
Despite the Administration’s efforts to stymie state AI legislation, New York has joined California arm-in-arm to regulate large AI models by focusing on security and transparency. Colorado previously enacted the Colorado Artificial Intelligence Act, which regulates high-risk AI systems with a focus on algorithmic discrimination and is scheduled to take effect on June 30, 2026. Utah and Texas have passed more limited AI-related laws, and other states are sure to follow. All of this sets the stage for a challenge by the Administration in the coming months aimed at preventing a patchwork of differing state AI regulations.
Scope and Requirements of the RAISE Act
The RAISE Act covers “large developers” of “frontier” AI models that are developed, deployed, or operating in New York. “Large developers” under the RAISE Act are those with more than $500 million in annual revenue. “Frontier” AI models refer to the most advanced, large-scale AI systems that meet certain thresholds, including AI models that are trained using more than 10^26 computational operations (measured as floating-point operations, or “FLOPs”) and that cost over $100 million to train and develop. Developers whose models currently are subject to the RAISE Act include OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), Microsoft (Copilot), and Meta (Llama).
Before deploying a frontier model, covered AI developers must develop, publicly disclose, and maintain written safety and security protocols that provide, among other things:
- Reasonable administrative, technical and physical cybersecurity protections that reduce the risk of unauthorized access to, or misuse of, the model leading to critical harm;
- Protections and procedures that reduce the risk of “critical harm,” which is defined as the death or serious injury of at least 100 people or at least $1 billion in damages; and
- Testing procedures to evaluate whether the model poses an unreasonable risk of critical harm or could be used to create another frontier model in a manner that would increase the risk of critical harm.
These safety and security protocols must be tested annually, and documentation must be submitted to the New York Attorney General and the Division of Homeland Security and Emergency Services (“DHSES”). A covered AI developer must retain the written protocols for as long as the frontier model is deployed and for an additional five years thereafter.
Incident Reporting
The RAISE Act requires covered AI developers to report a “safety incident” relating to a frontier model, which includes unauthorized access to the model, misuse of the model, or a critical failure of its controls, to the New York Attorney General and DHSES within 72 hours of discovery. Events must be reported if they are reasonably believed to be safety incidents, and the report must explain why the event qualifies as a safety incident. Notably, the reporting obligations in the RAISE Act are significantly stricter than those under California’s TFAIA, which gives AI developers 15 days to report an incident and covers only events of which there is definitive knowledge that an incident occurred.
Enforcement/Penalties
The RAISE Act does not include a private right of action and gives the New York Attorney General exclusive enforcement authority. The Attorney General is authorized to bring civil actions against covered AI developers for failing to submit the required reports or for making false statements, with penalties of up to $1 million for a first violation and up to $3 million for subsequent violations.
Federal Challenge
The December 2025 EO is a follow-up to Executive Order 14179. It announces an Administration policy of establishing a “minimally burdensome” national standard for AI (Sec. 2). The December 2025 EO does not define what constitutes a “minimally burdensome” standard, and it recognizes that no national standard yet exists. Despite this lack of clarity, the December 2025 EO directs the Secretary of Commerce, within 90 days and in consultation with various advisors, to evaluate and identify any state AI laws that are not “minimally burdensome.” This evaluation should identify laws that require AI models to alter their truthful outputs, or that compel disclosure or reporting of information in a manner that would violate the First Amendment or any other provision of the Constitution (Sec. 4).
The December 2025 EO also orders the U.S. Attorney General, within 30 days, to establish a new AI Litigation Task Force to challenge “onerous [state] laws that conflict with the policy” (Sec. 3). Any state with such onerous AI laws risks losing its congressionally approved non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) Program, which provides funding for, among other things, workforce development, enhancement of 911 networks, and AI-supportive telecommunications infrastructure (Sec. 5).
Given the Administration’s clearly stated policy of restricting and preempting state AI regulation, it seems likely that the AI Litigation Task Force will challenge the RAISE Act, the TFAIA, and possibly other state AI laws. The outcome, however, is less clear. The state law requirements to disclose and report AI security protocols and incidents are not unlike other regulatory reporting frameworks, including those governing workplace safety, clinical/pharmaceutical trials, and chemical facilities, so these laws may not violate the First Amendment. However, the mandated transparency disclosures, at least those under the TFAIA, cover a broader scope of traditionally internal information, such as internal risk management assessments and mitigation plans. Other design requirements in state AI laws, such as testing thresholds, design criteria, and model training requirements, could be interpreted as “onerous and excessive” and, thus, in conflict with the policy espoused in the December 2025 EO.
And it may not be long before the RAISE Act and other state AI laws are challenged. On January 9, 2026, the Justice Department announced that the AI Litigation Task Force called for in the December 2025 EO has been established.