Twitter, Facebook, and Some Inconvenient Truths
Section 230 and cries of anti-conservative bias online are in the news again. In the immortal words of Yogi Berra, it’s déjà vu all over again. Only this time, things are a little different.
Earlier this week, Twitter and Facebook took actions that prevented users from sharing a New York Post article containing allegations about Hunter Biden that are potentially damaging to the elder Biden’s presidential campaign. For its part, Facebook limited the story’s exposure in users’ News Feeds pending a “fact check.” Twitter, meanwhile, labeled links to the story as “potentially unsafe,” and then blocked users from sharing it altogether. Twitter even went so far as to lock the accounts of users who had previously shared the story, including those of the Trump campaign, the White House press secretary, and even some journalists.
At one time, there were, broadly speaking, two types of websites housing user-uploaded content. The first type—think Craigslist or Reddit—allows nearly unlimited uploads of user content, with restrictions typically reserved only for content that is clearly illegal or otherwise reprehensible. The second type allows very limited uploads of user content, which are carefully reviewed and curated by the website prior to being hosted. An example of this might be a real estate broker’s website, with listings vetted by a discerning editor.
But increasingly, these two types are blending together. Consequently, we’re beginning to see the rise of a third type of user content-driven website. This third type claims to allow users to upload a wide variety of content. But it will bury, block, or outright ban user content deemed unsafe or untruthful. And far too often, these acts of censorship are carried out in a haphazard manner that, unintentionally or otherwise, reeks of political favoritism.
In the case of yesterday’s New York Post article, Twitter claimed it needed to restrict users from sharing the article because it violated Twitter’s rules prohibiting sharing individuals’ email addresses and phone numbers. It also claimed that the article violated Twitter’s Hacked Materials Policy, because the emails in the article may have been obtained by Russian hackers.
The article in question may have indeed violated these rules. But the truth is that these rules are rarely and selectively applied. Just ask Federal Communications Commission Chairman Ajit Pai, who was publicly doxxed and harassed in 2017 following his attempts to roll back Obama-era Internet regulations. A simple Twitter search today reveals tweets from that time claiming to contain his home address. How does this not violate Twitter’s rules?
Consider another recent example. Twitter did nothing to rein in users from sharing various New York Times articles published last month detailing years of the president’s confidential tax returns. Surely this highly personal information was hacked or otherwise obtained illicitly. And lest we forget the confidential emails of John Podesta and other Democratic operatives, infamously stolen by Russian hackers and widely disseminated on social media in 2016.
Of course, it would be impossible for Twitter to perfectly apply its rules. Twitter users send roughly 500 million tweets per day. Surely many of those tweets contain personal information like email addresses and phone numbers, yet are not blocked. So it’s only fair to ask—why were Twitter’s rules enforced in only one of these many high-profile instances?
In a tweet from his personal account yesterday, Twitter CEO Jack Dorsey said that “blocking URL sharing via tweet or DM with zero context as to why we’re blocking [was] unacceptable.” Actually, Mr. Dorsey is mistaken. He would be wise to follow an old adage: it is better to remain silent and be thought a fool than to speak and remove all doubt.
Facebook is also becoming a new, third type of user content-driven website—one that, like Twitter, is far too often marred by selective application of shifting rules resulting in predictable, politically tinged consequences. Don’t just take our word for it. Andy Stone, a senior executive at Facebook, tweeted yesterday that the company was “reducing [the New York Post article’s] distribution” on Facebook, pending a “fact check.” This was the only proffered rationale for Facebook’s suppression of the New York Post article. Rather than citing the violation of an internal rule, a Facebook executive seemingly hand-picked a politically potent article for “fact checking.”
Facebook is even more widely used than Twitter. In 2019, Facebook had 2.5 billion monthly active users, with users uploading 350 million photos every day. Of the hundreds of millions of files and posts uploaded daily, few if any are subject to “fact checking.”
Far from being a genuine search for truth, “fact checking” is too often a 21st-century invention used to accuse political figures of lying. Publications that a decade or two ago would have sneered at the idea now have entire departments of dedicated “fact checkers.”
To what end does this “fact checking” exist? Of the hundreds of millions of daily uploads on Facebook, many of them filled with false information, few if any are scrutinized by “fact checkers.” If Facebook’s “fact checkers” find the New York Post’s article to be truthful, will Facebook allow the article to be widely shared? Or will a shadowy truth board hold the article hostage and refuse to release it until a later date, when readers have forgotten about the story altogether? Perhaps that’s the point. In the meantime, the Streisand effect remains in full force.
As for Section 230—by their actions, both Twitter and Facebook likely have lost broad liability protections under the law. Under Section 230(c)(1), websites shall not be treated as the “publisher or speaker of any information provided by another information content provider.” And under Section 230(c)(2), websites avoid liability for efforts to “restrict access to or availability of . . . objectionable” content. But websites lose that safe harbor when they go beyond merely restricting access, and instead publicly editorialize against user content. At that point, the website is no longer a provider of access to information; it instead becomes a speaker itself.
By publicly criticizing and editorializing against both the information disclosed in the New York Post’s article and its provenance, Twitter has engaged in its own form of original speech that is not sheltered by the safe harbor of Section 230. And by similarly publicly questioning the truthfulness of the article, Facebook likely also acted beyond Section 230’s safe harbor.
Nevertheless, Section 230 remains contentiously debated. President Trump has repeatedly voiced his opposition to the law, as have former Vice President Biden and countless senators. Additionally, the Federal Communications Commission has decided to conduct a rulemaking to “clarify the meaning” of the law.
To be clear, Section 230 remains a vital component of the user content-driven Internet we all enjoy today. And the best solution to online censorship is more, not less, speech—to let markets, not government, solve the political censorship problem.