Throughout this nine-part "Thinking Clearly and Speaking Clearly" series, I have been addressing the extent to which Big Tech web platforms like Twitter, Facebook, and YouTube have been overly censorious in removing or restricting content that ought to remain within the realm of legitimate public debate. Despite my concerns, I have explained several times, for example here in Part 7, that "as private entities, their moderation decisions generally are protected by the First Amendment, with only a few exceptions, say, for example, if they willingly coordinate speech-suppressive actions with the government or accede to government directions."
Here I want to say more about how Internet platforms might forfeit the protection the First Amendment otherwise affords them to moderate content as they please if they willingly coordinate speech-suppressive actions with the government or comply with government censorship dictates.
As a matter of first principles, the First Amendment protects private individuals or private entities from government censorship, not from censorship by other private parties. But in certain instances, the actions of private parties can be considered "state action." That is, they may take on the mantle of the government if there is such a "close nexus" or "pervasive entwinement" between the government and the challenged action that, as the Supreme Court put it in Brentwood Academy v. Tennessee Secondary School Athletic Association, a seemingly private action "may be fairly treated as that of the State itself."
Or as the Supreme Court declared in an oft-cited formulation of the state action doctrine in Lugar v. Edmondson Oil Co., Inc., a "private party's joint participation with state officials [in violating a person's constitutional rights] is sufficient to characterize that party as a 'state actor.'"
Whether the actions of a private web platform – say, Twitter, Facebook, or YouTube – should be considered government action for purposes of the First Amendment depends on the facts and circumstances of each case, of course. But the tell-tale signs tilting towards a determination of "state action" are clear: whether a "close nexus" exists between the private and government actors or whether there is "pervasive entwinement" or "joint participation" between the two.
Now, let's take a look at the most recent information uncovered in the brouhaha surrounding the Department of Homeland Security's now-suspended "Disinformation Governance Board" – an Orwellian moniker if ever there was one. Thanks to a whistleblower, Senators Chuck Grassley and Josh Hawley obtained and published internal documents compiled by the Disinformation Board suggesting that it would coordinate a government response to social media posts regarding anything DHS designated as "disinformation," including content relating to "vaccines or the efficacy of masks" and "security of elections." In other words, the Board's remit extended to subjects beyond the more narrowly defined disinformation targets that DHS Secretary Mayorkas identified when the story initially broke.
Perhaps most revealing – and damning – the documents indicate the Board planned to meet with two Twitter executives "to discuss operationalizing public-private partnerships between DHS and Twitter, as well as [to] inform Twitter executives about DHS work on [disinformation], including the creation of the Disinformation Governance Board and its analytic exchange." According to the whistleblower's allegations, the Biden administration may have selected Nina Jankowicz to lead the Board precisely because of her preexisting connections to Twitter's executives.
"Operationalizing" public-private partnerships in the context of government actions intended to suppress certain content the government identifies as "disinformation" certainly raises red flags under the First Amendment jurisprudence finding private entities are engaged in state action based on "pervasive entwinement" or "joint participation" with the government.
Two more quick examples of government pressure on web platforms are also problematic.
White House National Climate Advisor Gina McCarthy recently demanded that social media companies censor posts that raise concerns about the potential costs of adopting broader renewable energy options. McCarthy declared, "[w]e need the tech companies to really jump in." According to McCarthy, a claim addressing the technical limitations of lithium-ion batteries might be considered "disinformation." Certainly, the potential costs associated with renewable energy should be within the realm of legitimate public debate, unsuited for simplistic "true" or "false" diktats.
In July 2021, White House Press Secretary Jen Psaki announced that the Biden administration was regularly "flagging" COVID-19-related social media posts for Facebook to review for "misinformation."
There may be nothing problematic about government officials sharing their concerns regarding what they deem to be harmful "disinformation" or "misinformation" of one sort or another. Indeed, in certain narrowly circumscribed instances, say, information relating to genuine national security threats, they may be obligated to do so. The problem arises when private entities cede their independent decision-making authority to the government by virtue of the close nexus or pervasive entwinement between the two.
It's understandable that Twitter, Facebook, Google, and others wish to continue asserting their First Amendment free speech rights to remove or restrict content on their platforms as they please. But in those instances when, either willingly or unwittingly, they put themselves in a position in which they become "state actors," based on their collaboration or participation with would-be government censors, they may well forfeit the First Amendment protection they relish claiming.
And deservedly so.