
The landscape of artificial intelligence governance in the United States has entered a volatile new phase as the Trump administration initiates aggressive measures to curb state-level AI regulations. In a move that has sparked resistance from both Democratic and Republican state leaders, the White House is preparing to file lawsuits against states enforcing what it deems "burdensome" AI laws. This legal offensive is coupled with a significant financial threat: the withholding of billions of dollars in federal broadband grants.
The conflict centers on a December executive order directing the Department of Justice to challenge state laws on interstate commerce grounds. The administration argues that a patchwork of state regulations stifles innovation and imposes unnecessary compliance costs on American technology companies. However, state lawmakers view this as an overreach of federal power, arguing that in the absence of comprehensive congressional action, states have a duty to protect their citizens from the risks associated with rapidly evolving AI technologies.
At the heart of the administration's push is a desire to establish a unified, pro-innovation federal framework that supersedes state restrictions. David O. Sacks, the White House special adviser for AI and crypto, has been tasked with evaluating existing state AI laws. Under the terms of the executive order, Sacks and the Commerce Department must identify "onerous laws" within a 90-day window.
Once a state law is flagged as burdensome, the state could face two primary consequences:

- A Department of Justice lawsuit challenging the law on interstate commerce grounds.
- The withholding of federal broadband grant funding.
The administration has specifically targeted laws that use "disparate impact" standards—which define discrimination based on the outcome of an AI system rather than the intent of its creators. The White House characterizes these standards as mechanisms that force entities to "embed ideological bias within models."
While the administration’s move might appear to target progressive strongholds, the backlash has bridged the partisan divide. Lawmakers in Republican-led states like Utah and Texas have joined counterparts in Colorado and California in defending their right to legislate.
## Utah’s Republican Resistance
In Utah, the White House Office of Intergovernmental Affairs recently issued a memo opposing Utah HB 286. This bill seeks to require developers of large "frontier" AI models to publish safety and child protection plans. The administration labeled the bill "unfixable" and contrary to its AI agenda.
Despite this pressure, Republican State Representative Doug Fiefia has publicly criticized the executive order. Citing the 10th Amendment, Fiefia argued that while a national framework is desirable, it must come through Congress with transparency and debate. "Until that happens, states should be allowed to protect their people," Fiefia stated, emphasizing that executive overreach threatens the fundamental principles of federalism.
## California and Colorado Stand Firm
In Democratic-controlled states, the resolve is equally firm. Colorado is preparing for its landmark Colorado AI Act to take effect this summer. The law requires developers of high-risk AI systems to exercise reasonable care to prevent discrimination. Loren Furman, CEO of the Colorado Chamber of Commerce, indicated that the state legislature intends to move forward regardless of federal threats, noting that Colorado Attorney General Phil Weiser is prepared to litigate against the administration if necessary.
Similarly, California advocates view the executive order as a "harassment scheme." Teri Olle of Economic Security California Action, a group supporting California’s transparency laws, predicted the state would vigorously fight any lawsuits. She highlighted that public opinion strongly supports AI safety rules, even if it means slower development speeds.
The threat to withhold funding under the Broadband Equity, Access, and Deployment (BEAD) program adds a complex layer to the dispute. Legal experts have raised questions about the administration's authority to unilaterally alter conditions for grants established by Congress.
Cody Venzke, a senior policy counsel for the ACLU, noted that the federal government has limited power to change grant terms after Congress has set them. However, the mere threat of losing hundreds of millions—or even over a billion—dollars in infrastructure funding places immense political pressure on state leaders.
For states like Texas, which was approved for $1.27 billion in broadband deployment funds, the choice is stark. David Dunmoyer of the Texas Public Policy Foundation described the dilemma: "If it came down to, you pick, keep the AI law or connect the disconnected in vulnerable and rural communities, that’s a tremendously hard political decision to make."
The following table illustrates the diverse regulatory approaches currently under federal scrutiny:
| State | Key Legislation | Primary Focus | Status |
|---|---|---|---|
| Colorado | Colorado AI Act | Preventing algorithmic discrimination in high-risk systems (employment, housing, healthcare). Uses "disparate impact" standard. | Effective summer 2026; under federal review |
| Utah | HB 286 (Frontier Models) | Requires safety and child protection plans for large frontier AI models. | Opposed by White House memo; passed House committee |
| California | SB 53 (Transparency) | Mandates disclosure of AI safety frameworks and catastrophic risk assessments for frontier developers. | Passed legislature; likely target of lawsuits |
| Texas | HB 149 (Responsible AI) | Prohibits AI development with intent to discriminate. Bans "social scoring" by government. | Signed into law; mixed alignment with federal EO |
The technology industry finds itself in a precarious position. While many tech CEOs favor a single federal standard to avoid a "patchwork" of 50 different state laws, the uncertainty created by this legal warfare is unsettling.
Some industry leaders are actively funding efforts to defeat pro-regulation candidates, while others warn that total deregulation could lead to catastrophic risks. The administration's aggressive stance suggests that the coming year will be defined by high-stakes litigation.
If the Department of Justice follows through with lawsuits, the courts will need to decide the extent of federal preemption over AI. Meanwhile, states are not pausing their efforts. New bills are advancing in Florida, Washington, and Virginia, signaling that despite the threats from Washington, the drive for local AI governance remains robust.
The outcome of this standoff will determine not only who controls AI policy in the United States but also whether the federal government can effectively use infrastructure funding as a cudgel to enforce its regulatory agenda on the states.
As the 90-day evaluation period for David O. Sacks and the Commerce Department progresses, the tech world waits to see which state will be the first legal target. Whether this strategy results in a streamlined national policy or a protracted constitutional crisis remains to be seen. For now, the message from state capitals, red and blue alike, is clear: they will not cede their legislative authority without a fight.