The FCC Shouldn't Regulate AI-Generated Political Speech

Artificial intelligence (AI) skeptics often predicate regulatory proposals on the unserious claim that no agency possesses effective authority to regulate the technology. Once scrutinized, these claims have withered. Recognizing this, technocrats across the government have rushed to deploy their power to mold AI development and use — even absent firm statutory authority. The cooks are piling into the kitchen, and the broth is in danger.

The Federal Communications Commission (FCC) late last month joined the regulatory fray, announcing a forthcoming disclosure regime for AI-generated political campaign advertisements. Precisely what legitimate interest the FCC — a communications regulator — has in involving itself in political speech remains unclear. 

As Federal Election Commission (FEC) Chairman Sean Cooksey wrote in a letter sent last week, “Nothing in [the Bipartisan Campaign Reform Act, which the FCC cites as its statutory authority] empowers the FCC to impose its own affirmative disclaimer requirements on political communications — a form of compelled speech — whether they are forced on the speakers or on the broadcasters.” The chairman argued further that the FEC maintains sole regulatory authority over political disclaimers and noted that his agency has already embarked on proceedings to regulate AI-touched political speech.

Besides its facial lawlessness, the FCC’s misadventure typifies several dysfunctions common in today’s AI policy making. First, the agency confuses its policy goals, pursuing a reckless solution to a largely fictional problem. It improperly singles out AI-generated content as if the technology itself — rather than deceptive information, generally — requires regulation in the political-speech context.

No cognizable reason exists — beyond personal curiosity, which does not spawn a compelling governmental interest — for voters to know whether AI created or edited non-deceptive political advertisements. There is nothing inherently deceptive about AI outputs per se, and regulators therefore ought not to subject them to special disclosure regimes. Other tools — video-editing software, animators, etc. — can produce misleading or false information just as well as AI (albeit with more effort). Deceptive ads must be dealt with irrespective of the technologies involved.

Imposing AI-specific regulation necessitates defining precisely which digital tools, as a legal matter, constitute AI. While artificial intelligence has imprinted strongly on the nation’s imagination and emotional consciousness, few Americans — including the political class — grasp the technological intricacies. 

In the broadest sense, AI includes technologies ranging from spellcheck to the supercomputer whose impending bid for world domination President Joe Biden fears. As the FCC acknowledged, some sort of definitional fine-tuning — and narrowing — must occur. Particularly in the digital world, where technologies mutate rapidly and defy neat classification, taxonomical challenges will most certainly arise. As FCC Commissioner Brendan Carr noted, any attempt at definition will likely end with “Lawyers…telling their clients to just go ahead and slap a prophylactic, government-mandated disclosure on all political ads going forward just to avoid liability.”

To be sure, lawmakers cannot avoid such problems entirely. Some degree of imprecision (and the resulting confusion for the regulated) inhabits every statute and rulemaking — even good and necessary ones. This fact should encourage policy makers to avoid unnecessary regulation, especially when proponents of that regulation base their arguments on largely non-existent harms, or where better solutions exist. Despite much worrying, AI-facilitated misinformation — and technologically facilitated distortions of reality generally — has yet to significantly disrupt American political life.

The FCC, specifically, with its statutory limitations, lacks the jurisdictional reach to effectively regulate AI-generated political speech. As Carr asked, should the agency promulgate its rule, will “AI-generated political ads that run on broadcast TV…come with a government-mandated disclaimer but the exact same or similar ad that runs on a streaming service or social media site will not?” Consumers’ entirely defensible ignorance of the structure of the administrative state will likely cause them to “conclude that the absence of a government warning on an online ad means that the content must be real.”

As they proceed, both the FCC and the FEC should remember that purportedly neutral disclosure regimes all too often disincentivize the use of whatever is being regulated. In the case of political advertising, this means disincentivizing an entire category of speech — AI-generated speech. It means disincentivizing the proliferation of a technology that, by lowering production costs, will benefit smaller and less-well-funded campaigns. By inflating the price of producing political speech — or, more precisely, by preventing it from falling — discouraging the use of AI can only benefit the well-funded and well-connected.

Such regulation, particularly when it responds to no clear and pressing problem, should be anathema.

David B. McGarry is a policy analyst at the Taxpayers Protection Alliance and a social mobility fellow at Young Voices. His work has appeared in publications including The Hill, Reason, National Review, and the American Institute for Economic Research. @davidbmcgarry

