Thinking Clearly About Speaking Freely: Suggestions for Facebook and Twitter
(AP Photo/Eric Risberg)

In this "Thinking Clearly About Speaking Freely" series, which I began in April 2021, I have been exploring possible measures to address the extent to which Big Tech platforms have been overly censorious in removing or restricting content that, in my view, ought to be within the realm of legitimate public debate. To me, these excessively censorious actions, at least regarding matters of public significance like the Hunter Biden laptop story, the COVID origin story, or border control policies, tilt decidedly against speech from the right side of the political and philosophical spectrum.

But for my purposes here, it makes no difference whether you agree with me that the major web platforms have acted in ways that disfavor speech from the right side of the political and philosophical spectrum. That’s because my objective is not to get the Big Tech platforms to favor speech from the Right or the Left, but rather to get them to be generally more free speech-friendly regarding matters of public importance, especially the platforms that profess an intent to operate, to the extent possible, as “public squares” and “free speech zones.”

And to be clear on these points too: I am not advocating that government dictate moderation policies for the Big Tech platforms or dictate specific moderation decisions. As I explained in Parts 6 and 7, “as private entities, their moderation decisions generally are protected by the First Amendment, with only a few exceptions, say, for example, if they willingly coordinate speech-suppressive actions with the government or accede to government directions.” And, as I said in Part 3, I don’t advocate, at least at this point, that the web giants be required to operate as common carriers, that is, with the obligation to carry indiscriminately all lawful posted speech – even if such compulsion were legally permissible.

What I do advocate in this Part 8 of the series is that platforms, at least those like Twitter and Facebook that have proclaimed that they wish, in the main, to be public squares promoting free speech, should incorporate into their “terms of service” express provisions establishing a presumption that content will not be removed or otherwise restricted absent clear and convincing evidence that the speech violates some specific, clearly delineated content prohibition. And as an integral part of this presumptive “free speech default,” the terms of service should set forth procedures that allow for prompt escalation and supervisory review of initial “take down” decisions. [NOTE: Here I am not addressing platforms that claim they will not take down any content that is considered protected speech under current First Amendment jurisprudence.] 

The terms of service of the major platforms like Twitter, YouTube, and Facebook all contain familiar provisions specifying, in similar language, the types of content considered “harmful” that may lead to restrictive actions: content constituting “harassment,” “abuse,” “threats of violence,” “hateful conduct,” “sexual content,” and the like. And the specification of these various types of verboten speech is commonly prefaced by phrases such as “we will remove,” “we will not tolerate,” or “you may not.” Given the ambiguities inherent in the use of language, and given that most platforms also acknowledge that context may matter, it is not surprising, or necessarily objectionable, that the platforms’ “speech rules” contain language such as that above.

I understand that the Internet platforms have constructed, and constantly revise, elaborate algorithms intended to determine, at least initially, whether posted content violates the prohibited categories of harmful speech. Those algorithms, no matter how sophisticated, are necessarily somewhat crude instruments for implementing censorship decisions. They are created by humans (with their own biases) and implemented and reviewed by humans (with their own biases). For present purposes, I am willing to assume that these algorithms are created and applied in a way that does not intentionally tilt toward favoring or disfavoring speech that may be categorized as falling into one of the prohibited categories.

But that’s precisely the problem for a platform that wishes to operate, in the main, as a free speech zone, say, Twitter if Elon Musk gains control. To accomplish this, the platform needs to adopt an explicit policy presuming that content will not be taken down or restricted absent clear and convincing evidence that the speech violates some specific, clearly delineated content prohibition. With that express top-level policy in place, procedures can be implemented that provide, along with a fair opportunity for the poster to present arguments, for prompt escalation and review of challenged restrictive decisions, say, up through at least two or three supervisory levels.
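For readers who think in procedural terms, here is what such a “free speech default” might look like as a decision procedure. This is a minimal illustrative sketch of my own, not any platform’s actual system: the function names, the numeric stand-in for “clear and convincing” evidence, and the three review levels are all hypothetical assumptions.

```python
# A minimal sketch of the "free speech default" described above.
# All names, thresholds, and review levels are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Violation:
    rule: str        # the specific, clearly delineated prohibition invoked
    evidence: float  # reviewer's assessed strength of evidence, 0.0 to 1.0

# Hypothetical stand-in for "clear and convincing evidence": a burden
# well above a mere preponderance (0.5) but short of near-certainty.
CLEAR_AND_CONVINCING = 0.75
SUPERVISORY_LEVELS = 3  # prompt escalation through supervisory review

def reassess(post: str, finding: Violation, level: int) -> float:
    """Placeholder for human judgment at each supervisory level, after a
    fair opportunity for the poster to present arguments."""
    return finding.evidence

def survives_review(post: str, finding: Violation) -> bool:
    """An initial take-down stands only if every supervisory level still
    finds clear and convincing evidence; doubt anywhere restores the default."""
    for level in range(1, SUPERVISORY_LEVELS + 1):
        if reassess(post, finding, level) < CLEAR_AND_CONVINCING:
            return False
    return True

def moderate(post: str, findings: list[Violation]) -> str:
    """Apply the presumption: content stays up unless a specific prohibition
    is violated by clear and convincing evidence that survives review."""
    for finding in findings:
        if finding.evidence >= CLEAR_AND_CONVINCING and survives_review(post, finding):
            return f"removed: {finding.rule}"
    return "published"  # the free speech default decides close cases

# Example: evidence short of the threshold leaves the post up.
print(moderate("example post", [Violation("harassment", 0.6)]))  # published
```

The sketch’s only point is that the burden runs in one direction: ambiguity at any step resolves in favor of leaving the content up, which is the operational meaning of the presumption.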

To be sure, as long as humans are implementing the terms of service, there is the possibility, even the likelihood, that political or philosophical dispositions will affect their decision-making. But with a top-level “free speech default” policy in place, it will be more difficult for those biases to operate in a way that ultimately affects censorship decisions.

Finally, to be clear, nothing I have said here suggests that the Internet platforms, if they wish, should not be able to act quickly to take down content, even if lawful, that incites violence, constitutes vile harassment, or the like. I am concerned – as I think you should be – with speech restrictions regarding matters of public significance that almost always can be readily differentiated from cognizable harmful speech. 

And when content cannot be “readily differentiated” from cognizable harmful speech by the platforms’ content decision-makers, that’s exactly when the free speech default presumption ought to be outcome-determinative so that all of us may be allowed to speak more freely. 

Randolph May is President of the Free State Foundation, a free market-oriented think tank in Rockville, MD.

