
The landscape of artificial intelligence governance in the United States shifted decisively this week as New York formally entered the arena of frontier model regulation. With the enactment of the Responsible AI Safety and Education (RAISE) Act, New York has joined California in establishing a rigorous state-level safety framework for the world's most advanced AI systems. The move signals the emergence of a de facto national standard driven by the country's two largest tech economies, filling the vacuum left by a stalled federal regulatory process.
For the AI industry, the message is clear: the era of voluntary self-regulation for frontier models is drawing to a close. As developers grapple with the implications of California’s Senate Bill 53 (SB 53), signed late last year, New York’s RAISE Act adds a second, slightly different layer of compliance obligations. While Governor Kathy Hochul has emphasized the "alignment" between the two states, the nuances in the New York legislation create a complex compliance environment that will require significant strategic adjustment from major AI labs.
The RAISE Act (S6953B/A6453B) focuses explicitly on "frontier models," defined by a compute threshold of $10^{26}$ floating-point operations (FLOPs). This high bar currently captures only the most powerful systems developed by industry giants, such as the successors to GPT-4, Claude 3, and Gemini Ultra. By targeting this specific stratum of technology, New York aims to mitigate catastrophic risks—such as the potential for AI to aid in cyberattacks or biological weapon creation—without stifling innovation in the broader, lower-risk AI ecosystem.
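For a rough sense of scale, and not as any kind of statutory test, the widely cited approximation that training compute is roughly six FLOPs per parameter per training token can be used to gauge how close a given training run comes to that line. The sketch below is illustrative only; the model size and token count are hypothetical figures, not details of any real system.

```python
# Rough illustration only: estimates training compute with the common
# C ~ 6 * N * D heuristic (parameters x training tokens) and checks it
# against the 10^26 FLOP threshold shared by the RAISE Act and SB 53.

FRONTIER_THRESHOLD_FLOPS = 1e26  # 10^26 FLOPs

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the 6 * N * D rule of thumb."""
    return 6.0 * parameters * training_tokens

def is_covered_frontier_model(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the statutory threshold."""
    return estimated_training_flops(parameters, training_tokens) >= FRONTIER_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical run: a 2-trillion-parameter model trained on 10 trillion tokens.
    params, tokens = 2e12, 1e13
    print(f"Estimated compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print("Covered frontier model?", is_covered_frontier_model(params, tokens))
```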
Under the new law, developers of covered models must adhere to a strict set of safety and transparency protocols. The legislation mandates that these companies implement "reasonable care" to prevent their models from causing critical harm. This includes rigorous pre-deployment testing, the implementation of cybersecurity safeguards to prevent model theft, and the capability to promptly shut down a model if it behaves dangerously—a provision often referred to as a "kill switch" requirement.
Critically, the RAISE Act establishes a dedicated oversight office within the New York State Department of Financial Services (DFS). This office is empowered to register frontier model developers, review their annual compliance certifications, and issue regulations interpreting the statute's broad safety mandates. The choice of the DFS, a regulator known for its aggressive enforcement in the financial sector, suggests that New York intends to take a proactive "policing" approach to AI safety.
While the RAISE Act was drafted with an eye toward California’s SB 53 to prevent a chaotic regulatory patchwork, "alignment" does not mean "identical." Both laws share the same compute threshold ($10^{26}$ FLOPs) and the core philosophy of "transparency and preparedness," but they diverge in enforcement mechanisms and specific reporting timelines. These differences are where legal teams at major AI labs will likely face the most friction.
The following table outlines the critical differences between the two state frameworks:
| Feature | New York (RAISE Act) | California (SB 53) |
|---|---|---|
| Target Scope | Frontier Models (> $10^{26}$ FLOPs) | Frontier Models (> $10^{26}$ FLOPs) |
| Incident Reporting Window | 72 hours from determination | 15 days (standard); 24 hours (imminent threat) |
| Primary Oversight Body | Department of Financial Services (DFS) | Attorney General & Govt. Ops Agency |
| Enforcement Mechanism | Civil action by AG; DFS administrative rules | Civil action by AG |
| Civil Penalties | Up to $1 million (1st violation); $3 million (subsequent) | Up to $1 million per violation (flat cap) |
| Private Right of Action | No | No |
The most significant operational difference lies in the incident reporting timeline. New York’s requirement that developers report "critical safety incidents" within 72 hours is considerably more aggressive than California's standard 15-day window. This compressed timeline demands that AI companies have mature, 24/7 incident response capabilities that can not only detect anomalies but also legally assess and report them in near real-time.
Furthermore, the involvement of the New York DFS introduces a new regulator into the technology space. Unlike California, which relies largely on the Attorney General for enforcement, New York has created an administrative structure that could promulgate detailed rules on how safety testing must be conducted. This raises the prospect of a "dual-track" compliance regime where a model might pass California’s transparency requirements but fail New York’s specific safety protocols if the DFS adopts a more prescriptive interpretation of "reasonable care."
Industry analysts have noted that while the definitions of "critical harm" are harmonized, the procedural divergence creates a "highest common denominator" effect. To be safe, developers will likely default to New York’s stricter 72-hour reporting standard and California’s broader transparency documentation, effectively merging the toughest aspects of both laws into a single internal compliance protocol.
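To make that "highest common denominator" strategy concrete, the sketch below computes the binding reporting deadline for an incident by taking the earlier of the two regimes' windows. The function names, the imminent-threat flag, and the sample timestamps are illustrative assumptions, not statutory language.

```python
# Illustrative sketch of a merged reporting policy: for a given incident,
# compute each jurisdiction's deadline and report by whichever comes first.
# Windows mirror the table above; nothing here is statutory language.

from datetime import datetime, timedelta

def ny_deadline(determined_at: datetime) -> datetime:
    # New York RAISE Act: 72 hours from determination of a critical safety incident.
    return determined_at + timedelta(hours=72)

def ca_deadline(determined_at: datetime, imminent_threat: bool) -> datetime:
    # California SB 53: 15 days standard, 24 hours for an imminent threat.
    window = timedelta(hours=24) if imminent_threat else timedelta(days=15)
    return determined_at + window

def binding_deadline(determined_at: datetime, imminent_threat: bool) -> datetime:
    """Earliest deadline across both regimes -- the one a merged protocol must hit."""
    return min(ny_deadline(determined_at), ca_deadline(determined_at, imminent_threat))

if __name__ == "__main__":
    t0 = datetime(2026, 3, 1, 9, 0)
    print("Standard incident, report by:", binding_deadline(t0, imminent_threat=False))  # NY's 72 hours governs
    print("Imminent threat, report by:", binding_deadline(t0, imminent_threat=True))     # CA's 24 hours governs
```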
The enactment of the RAISE Act comes at a moment of significant uncertainty at the federal level. With the current administration rolling back previous executive orders on AI safety to pursue a more deregulation-focused agenda, states have stepped in to fill the void. This phenomenon mirrors the "Brussels Effect," where a strict regulatory jurisdiction sets the standard for the wider market. In this case, it is a "bi-coastal effect," with Sacramento and Albany effectively writing the national rulebook for AI safety.
Legal experts warn that this state-led approach, while providing necessary guardrails, risks fragmentation if other states like Colorado, Texas, or Massachusetts enact their own frontier model laws with different thresholds or definitions. However, given the economic weight of New York and California—home to the vast majority of the US AI industry—it is likely that their combined framework will become the de facto national standard for the foreseeable future.
For the AI industry, the clock is now ticking toward the effective dates in 2027. The immediate priority for Chief Technology Officers and General Counsels at frontier labs is to conduct a "gap analysis" between their current internal safety practices and the statutory requirements of the RAISE Act and SB 53.
Strategic priorities for 2026 include:

- Standing up 24/7 incident-detection and response processes capable of meeting New York's 72-hour reporting window.
- Registering with the DFS oversight office and preparing the annual compliance certifications it will review.
- Documenting pre-deployment safety testing and transparency reporting that satisfies both statutes.
- Verifying cybersecurity safeguards against model theft and the capability to promptly shut down a model that behaves dangerously.
As 2026 progresses, the implementation of the RAISE Act will be a critical test case for whether state-level regulation can effectively govern a technology as fluid and global as artificial intelligence. For now, New York has planted its flag, ensuring that the path to AGI runs through Albany as much as it does through Silicon Valley.