I'll stipulate at the outset that certain information found on the Internet properly may be designated "misinformation" or "disinformation," even though there are, of course, disagreements about the line-drawing inherent in applying such Rorschach inkblot-like labels. By now, though, there should be widespread agreement that many social media sites, especially those run by Big Tech companies, all too often have erred on the side of censoring too much information by wrongfully invoking "misinformation" or "disinformation" claims.
Nevertheless, I don't want Congress to create a powerful new federal agency – a "Federal Digital Platform Commission" as a newly introduced Senate bill proposes – with regulatory authority to combat misinformation and disinformation, along with other supposed "harms," propagated by social media platforms. More on the troublesome Digital Platform Commission Act of 2023 below, but first some background.
In my "Thinking Clearly About Speaking Freely" series of essays, I've chronicled over the last two years many instances of excessive censorship by Big Tech platforms, ranging from Twitter and Facebook famously restricting tweets that linked to the New York Post's stories about Hunter Biden's laptop to Twitter, Facebook, and YouTube censoring posts regarding the origin of COVID and the efficacy of various treatment protocols. Often the conclusory justification offered for the censorship was merely that the content constituted harmful "misinformation" or "disinformation" – with little or no further explanation given.
It's easy to decry the excessive censorship that has occurred at Facebook, YouTube, Twitter, and other online sites – after all, our democracy is healthier when speech concededly within the realm of legitimate public debate is not squelched. But it is difficult, within the strictures of the First Amendment and sound public policy, to devise acceptable approaches to combat the silencing. The social media companies are, of course, private companies. Absent government-compelled classification as traditional common carriers, which I do not presently favor, the First Amendment, with few exceptions, protects their right to choose the content they wish to allow on their sites.
I've previously suggested, along with others, that Congress should reexamine whether the near-absolute immunity from liability that all Internet websites enjoy under Section 230 of the Communications Act for the content they carry should be curtailed. More than two decades after the broad immunity shield was adopted in 1996 to protect just-emerging websites, the Internet ecosystem is radically different. But so far Congress hasn't shown much interest in reexamining Section 230. Moreover, in last month's highly anticipated decision in Gonzalez v. Google (2023), despite considerable conjecture to the contrary, the Supreme Court declined to address the circumstances, if any, under which Section 230's immunity shield might be relaxed.
So, what now? Despite the concerns I've expressed repeatedly regarding misuse of "misinformation" claims to suppress online speech that should remain uncensored, as I said in May 2022 in Part 7 of my "Thinking Clearly" series, "I am far more concerned about the government arrogating to itself the power to weaponize assertions of misinformation to silence views that may not comport with the official government line."
That remains true, and that's why I am troubled by the proposed Digital Platform Commission Act, introduced on May 18 by Senators Michael Bennet and Peter Welch to establish a new agency, a five-member Federal Digital Platform Commission. The declared purpose of the new agency would be to "regulate digital platforms consistent with the public interest." It would possess rulemaking authority to issue regulations intended to remedy a wide range of claimed societal harms attributable to Internet platforms. One of the targets is "disseminating disinformation."
Tying regulatory authority to an amorphous "public interest" delegation is an open invitation for administrative abuse by overzealous bureaucrats. The long history of the FCC's frequent regulatory overreach, relying on the Communications Act's public interest standard, is proof enough of this. After all, the FCC's infamous Fairness Doctrine, which required broadcasters to present, to the agency's satisfaction, all sides of controversial issues, rested on invocation of the agency's authority to regulate in the "public interest."
Moreover, what constitutes "disinformation," which the Platform Act places in the regulatory crosshairs, is largely in the eye of the beholder – like the "public interest." Again, this is not to say that real disinformation does not exist online – just that the term is so amorphous that it's easily weaponized by unscrupulous government officials with ulterior motives – political ones, say. Put another way, it's unwise to put government officials in charge of determining the bounds of information that ought to be available to the public.
As if the Digital Platform Commission were not worrisome enough, the Platform Act also would create a New Deal-sounding Code Council consisting of eighteen "expert" members tasked with developing "voluntary or enforceable behavioral codes, technical standards, or other policies for digital platforms." Unsurprisingly, expertise regarding "disinformation" is highlighted as a qualification for Code Council membership.
I continue to be troubled by the extent of censorship that takes place on social media sites, especially by Big Tech, under the rubric of "misinformation" or "disinformation." That said, I'm certain that creating a new federal agency like the proposed Federal Digital Platform Commission is not the right answer. Indeed, it would truly be a classic case of the "cure" being worse – far worse! – than the disease.