Let the States Teach the Federal Government How to Regulate AI

In a closely divided Congress, the so-called “One Big Beautiful Bill Act” (OBBBA) represents one of the few opportunities to legislate in 2025, albeit very likely on a partisan basis. At the heart of the OBBBA is the extension of existing individual tax cuts and newly proposed tax cuts; however, other priorities have also found their way into the legislation, including the regulation of artificial intelligence (AI).

Buried in the House-passed version of the OBBBA was a provision that prevents states from regulating AI for a period of ten years. The provision garnered attention when Rep. Marjorie Taylor Greene (R-GA) posted on X that she would have voted against the OBBBA had she been aware of it. Sens. Marsha Blackburn (R-TN) and Josh Hawley (R-MO) have also expressed opposition.

The proposed moratorium on state-level AI regulation received surprisingly little attention in the House given its implications. Rep. Brendan Boyle (D-PA) mentioned it during the debate on the OBBBA. “AI has the potential to be transformative, but only if it is developed and used in a safe, responsible way. That requires strong guardrails. This bill does the opposite,” said Boyle. “The fact this was quietly tucked into this budget bill is reckless and wrong.”

The debate over the moratorium on state-level AI regulation is still happening in the Senate. The Senate Commerce, Science, and Transportation Committee's recommendations for the OBBBA include the moratorium, rewritten in an effort to avoid being stripped out under the “Byrd rule.” The Senate Commerce Committee, chaired by Sen. Ted Cruz (R-TX), is tying the moratorium to $500 million in funding for the Broadband Equity, Access, and Deployment (BEAD) Program. States that regulate AI wouldn’t be eligible for BEAD Program funds.

Ironically, this brand of blatant federal coercion is something Cruz was once against. 

In 2016, Cruz criticized the Obama administration for tying federal dollars to the adoption of Common Core. During a March 2016 Republican presidential debate, he said, “The Obama administration has abused executive power in forcing Common Core on the states. It has used race-to-the-top funds to effectively blackmail and force the states to adopt Common Core.”

Cruz’s previous support for federalism doesn’t end there. He has similarly criticized federal coercion on states to adopt greenhouse gas emissions performance standards and tying Department of Transportation funding to states' implementation of what he called “radical environmental and racial equity requirements” in state and local transportation planning. He also called the Supreme Court’s 2015 decision in Obergefell v. Hodges, which legalized same-sex marriage in every state, the “very definition of tyranny.” 

The merits of Cruz’s positions on each of these issues can be debated. This author, for one, strongly disagrees with his position on Obergefell; after all, the Equal Protection Clause matters. When it comes to AI, though, Cruz is suddenly far less concerned about coercion, or, to put it in his words, “blackmail” or “tyranny.”

A federal framework matters, as compliance with a patchwork of state-level AI regulations can impose real costs on businesses. But a moratorium on state-level AI regulation is not a reasonable substitute for Congress doing its job. States are the laboratories of policy innovation. Their examples can show Congress what does and doesn’t work on important policy issues such as AI regulation. Congress, of course, is notorious for crawling behind while technology sprints far ahead.

And it’s not just members of Congress; recently, Secretary of Education Linda McMahon referred to AI as “A1.” Considering the role AI can play in educating America’s children, particularly through personalized learning and tutoring, as well as its potential to undermine that education, the misstatement is alarming.

Just the same, we have to be mindful that AI isn’t without risks. For example, an AI model developed by Anthropic, Claude Opus 4, has shown the ability to blackmail and undermine its creators. This is the kind of red flag that should prompt robust debate, including swift policy action by the laboratories of our democracy, the states. Meanwhile, Congress has shown no ability to consider, let alone pass, legislation that provides a framework for AI regulation, even as the safety issues are in the here and now.

It would be easier to understand preemption if Congress were on the path to considering a framework for accountability or regulation. Instead, the approach taken by Cruz and Rep. Jay Obernolte (R-CA), generally considered the AI lead for House Republicans, ties the hands of state legislatures, currently the only bodies putting checks on the industry's influence and considering regulations that reduce harm.

Jason Pye is the founder of Exiled Policy, a libertarian policy shop. He served as the vice president of legislative affairs for FreedomWorks until March 2021 and as a senior policy adviser for FreedomWorks from September 2023 to May 2024. 

