
The regulatory landscape for artificial intelligence in the United States has undergone a seismic shift between 2023 and 2025. According to data from the Future of Privacy Forum (FPF), lawmakers across 14 states have enacted 27 distinct pieces of AI-related legislation, signaling a move away from broad, theoretical frameworks toward targeted, high-impact regulation.
As of early 2026, the majority of these laws are now effective, creating immediate compliance obligations for developers, deployers, and platforms. This wave of legislation addresses critical areas ranging from frontier model safety and catastrophic risk to chatbot transparency and the nonconsensual distribution of intimate imagery. For AI professionals and enterprises, understanding this patchwork of state and federal requirements is no longer a matter of future planning but of current legal necessity.
In 2024, the legislative trend leaned heavily toward comprehensive consumer data privacy laws with AI components. However, 2025 marked a distinct pivot toward narrower, risk-based measures. State legislatures, particularly in California and New York, have taken the lead in filling the federal regulatory vacuum by targeting specific technologies and use cases.
The 2025 legislative session was defined by a focus on "frontier models"—highly capable AI systems trained on vast amounts of computational power—and the specific risks associated with generative AI, such as deepfakes and emotional manipulation by chatbots. This targeted approach reflects a growing consensus among lawmakers that different AI modalities require distinct regulatory guardrails.
Two of the most significant legislative achievements in 2025 were California's Senate Bill 53 (SB 53) and New York's RAISE Act. Both laws establish the first substantial compliance regimes for developers of the most advanced AI systems, setting a de facto national standard for frontier model governance.
Signed into law in September 2025, California's SB 53 is the first U.S. statute specifically designed to address "catastrophic risk" from AI. The law applies to "frontier developers" creating models trained with more than 10^26 floating-point operations (FLOPs)—a threshold currently met only by the most advanced foundation models.
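Because the trigger is a compute threshold, a developer's first compliance question is arithmetic: does a planned training run approach 10^26 FLOPs? The sketch below applies the common heuristic of roughly 6 FLOPs per parameter per training token for dense transformers; the model size and token count are hypothetical, and the statute's own accounting, not this approximation, ultimately controls.

```python
# Rough estimate of training compute for a dense transformer, using the
# common heuristic FLOPs ~= 6 * parameters * training tokens.
# This is a back-of-the-envelope check, not the statutory accounting method;
# counsel should confirm what compute (e.g., fine-tuning) counts toward it.

SB53_THRESHOLD_FLOPS = 1e26  # 10^26 FLOPs, per SB 53 and the RAISE Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

# Example: a hypothetical 1-trillion-parameter model trained on 15T tokens.
flops = estimated_training_flops(params=1e12, tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Frontier threshold crossed:", flops > SB53_THRESHOLD_FLOPS)
```

In this hypothetical the run lands at about 9 x 10^25 FLOPs, just under the line, which illustrates how sensitive classification can be to modest changes in scale.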
Key provisions include:
- Publication of a "frontier AI framework" describing how the developer assesses and mitigates catastrophic risk.
- Transparency reports accompanying the deployment of new or substantially modified frontier models.
- Mandatory reporting of critical safety incidents to the California Office of Emergency Services.
- Whistleblower protections for employees who raise catastrophic-risk concerns.
- Civil penalties enforced by the California Attorney General.
Following closely, New York enacted the Responsible AI Safety and Education (RAISE) Act in December 2025. Effective January 1, 2027, the RAISE Act mirrors California's compute thresholds (>10^26 FLOPs) but introduces a dedicated oversight office within the Department of Financial Services.
The RAISE Act requires:
- Publication of a safety and security protocol before a frontier model is deployed.
- Disclosure of safety incidents to the state within 72 hours of discovery.
- A prohibition on deploying frontier models that pose an unreasonable risk of "critical harm."
- Civil penalties for violations, administered through the new oversight office.
Beyond foundational models, 2025 saw a surge in laws regulating user interaction with AI, particularly regarding chatbots and synthetic content.
Chatbot Transparency
Five states—California, Maine, New Hampshire, New York, and Utah—have enacted laws specifically addressing AI chatbots. These regulations mandate clear disclosure when a user is interacting with an AI system. New York's "AI Companion Model Law" and California's SB 243 go further, focusing on mental health protections. They require operators of "companion" chatbots to implement safety protocols to detect and mitigate user distress or suicidal ideation, reflecting concerns over the emotional impact of anthropomorphic AI.
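Although these statutes differ in detail, they converge on two operational duties: telling the user they are talking to software, and routing signs of acute distress to a safety response. The sketch below is a minimal illustration of that flow; the trigger phrases, messages, and `generate_reply` stub are placeholders, not a clinically validated protocol or the text of any statute.

```python
# Minimal sketch of two obligations common to the new chatbot laws:
# (1) disclose that the user is talking to an AI, and (2) screen companion-chat
# input for possible distress and route it to a safety response. The keyword
# list and messages below are illustrative placeholders only; production
# systems would use validated classifiers and clinician-reviewed protocols.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

# Hypothetical trigger phrases; real systems need far more robust detection.
DISTRESS_PHRASES = ("want to hurt myself", "suicide", "end my life")

def generate_reply(message: str) -> str:
    return f"[model reply to: {message}]"  # stub standing in for the LLM

def respond(user_message: str) -> str:
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in DISTRESS_PHRASES):
        # Safety-protocol path of the kind required by laws like SB 243.
        return ("It sounds like you may be going through something serious. "
                "If you are in crisis, please contact a local crisis line, "
                "such as 988 in the US.")
    return generate_reply(user_message)

print(AI_DISCLOSURE)  # shown at session start per disclosure rules
print(respond("I want to end my life"))
```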
The TAKE IT DOWN Act
At the federal level, the "TAKE IT DOWN Act," signed in May 2025, represents a rare bipartisan victory for AI regulation. This law criminalizes the knowing publication of nonconsensual intimate imagery (NCII), including AI-generated deepfakes. It imposes strict "notice-and-takedown" obligations on platforms, requiring them to remove infringing content within 48 hours of a valid request, backed by the enforcement power of the FTC. The criminal provisions took effect upon signing; platforms have until May 19, 2026 to implement the required notice-and-takedown process, as reflected in the table below.
While many laws are already in effect, several critical regulations have phased implementation dates that businesses must track.
Effective Dates of Key AI Legislation
| Law | Jurisdiction | Effective Date |
|---|---|---|
| TAKE IT DOWN Act (Deepfake Removal) | Federal | May 19, 2026 |
| SB 205 (Colorado AI Act) | Colorado | June 30, 2026 |
| SB 1295 (Automated Decision Making) | Connecticut | July 1, 2026 |
| AB 853 (AI Transparency Act Amendments) | California | August 2, 2026 |
| RAISE Act (Frontier Model Safety) | New York | January 1, 2027 |
| AB 853 (Provenance Detection Requirements) | California | January 1, 2027 |
The enactment of these 27 laws creates a complex compliance environment for Creati.ai's audience of developers and business leaders. The era of voluntary self-regulation is effectively over for high-risk and frontier AI systems.
1. Provenance is Mandatory
With California's amended AI Transparency Act (AB 853) taking full effect across 2026 and 2027, developers of generative AI tools must integrate watermarking and provenance detection standards (such as C2PA) into their systems. Platforms hosting AI content will be legally required to label it, which in turn demands robust metadata management.
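For developers, the practical task is attaching verifiable provenance data at generation time. The snippet below sketches a simplified manifest whose fields loosely mirror C2PA concepts; it is illustrative only, and real compliance means producing signed, standards-conformant manifests with the official C2PA tooling rather than a hand-rolled dictionary.

```python
# Simplified sketch of attaching provenance metadata to generated media.
# The manifest fields below loosely mirror C2PA concepts (claim generator,
# actions, content binding) but are illustrative; real compliance requires
# signed, standards-conformant C2PA manifests produced with an SDK.

import hashlib
import json
from datetime import datetime, timezone

def build_manifest(image_bytes: bytes, generator: str) -> dict:
    return {
        "claim_generator": generator,  # tool that produced the asset
        "actions": [{"action": "created", "softwareAgent": generator}],
        "created_at": datetime.now(timezone.utc).isoformat(),
        # Hash binds the manifest to this exact asset.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

fake_image = b"\x89PNG...generated pixels..."  # placeholder asset bytes
manifest = build_manifest(fake_image, generator="example-genai/1.0")
print(json.dumps(manifest, indent=2))
```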
2. The "Brussels Effect" in the US
Just as the GDPR set a global standard for privacy, the alignment between California and New York on the 10^26 FLOPs threshold for frontier models suggests a unified domestic standard is emerging. Even companies not based in these states will likely adopt these safety frameworks to ensure market access, effectively nationalizing these state-level requirements.
3. Liability for Downstream Use
The TAKE IT DOWN Act and state-level chatbot laws introduce significant liability for platforms that host user-generated content. Companies must upgrade their content moderation infrastructure to handle rapid takedown requests and detect synthetic media, or risk federal criminal liability and substantial civil fines.
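One concrete piece of that infrastructure is deadline tracking for takedown requests. The sketch below models the 48-hour window as a simple queue check; the field names and triage rule are assumptions for illustration, since the statute sets the deadline but not the data model.

```python
# Sketch of a takedown-request tracker for the TAKE IT DOWN Act's 48-hour
# removal window. Field names and the triage flow are assumptions for
# illustration; the statute sets the deadline, not the data model.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str
    received_at: datetime
    resolved: bool = False

    @property
    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

# Example: surface requests within two hours of (or past) their deadline.
now = datetime.now(timezone.utc)
queue = [
    TakedownRequest("img-123", now - timedelta(hours=47)),
    TakedownRequest("vid-456", now - timedelta(hours=3)),
]
urgent = [r for r in queue
          if not r.resolved and now >= r.deadline - timedelta(hours=2)]
for r in urgent:
    print(f"{r.content_id}: deadline {r.deadline.isoformat()}")
```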
As 2026 progresses, we anticipate further legislative activity focusing on sector-specific applications in healthcare and employment. For now, the priority for AI organizations is to audit their current models against these new statutory definitions of "frontier" and "high-risk" systems, so they are not caught off guard by the effective dates that begin arriving mid-year and continue into 2027.