
A Lack Of (Good) Faith III: Rethinking Section 230 For The 21st Century

Who would have thought that the current election season would vault Section 230 of the Communications Decency Act into the forefront of the electoral conversation? Then again, social media platforms like Twitter and Facebook have brought intense scrutiny on themselves. As I recently began writing about the issues with Section 230 here and here, I have been dismayed by the intentional content manipulation engaged in by social media platforms under the guise of “fact checking” and curtailing “disinformation.” Never in my wildest dreams did I think I would witness, as I have over the past few weeks, such blatant editorial content manipulation by platforms that claim to be the ostensible public square for free speech. How so? Consider Twitter’s initial refusal to permit the posting of New York Post exposés related to alleged misdeeds by former Vice President Joe Biden’s son Hunter Biden, along with explosive evidence of potential profiteering off the elder Biden’s position while vice president (and that is putting it mildly). Facebook likewise limited the spread of the same information. The result has been a maelstrom of criticism and divisiveness, and a chorus of calls to either repeal or reform Section 230.

First, let us dispel the notion that these platforms can’t take such actions on the content posted on their platforms: they can, and now frequently do. The issue is whether they should, and if they do, whether they should enjoy the immunity provided by Section 230 as a result. As the United States Supreme Court confirmed in Manhattan Community Access Corporation v. Halleck, the free speech protections afforded under the First Amendment guard against governmental abridgment of speech and do not apply to private platforms. That said, the actions of these platforms fly in the face of their own policies: Twitter’s own rules claim that its “purpose is to serve the public conversation,” while Facebook’s Community Standards claim “to create a place for expression and give people a voice.” Public conversation and free expression, indeed.

Next, the calls to repeal Section 230 are simply misplaced: a knee-jerk reaction that ignores the fundamental need for a mechanism to protect free expression online. Section 230 rightly shields online service providers from civil liability for defamatory, tortious, and even illegal content that their users post onto the platform (such as comments in forums or on news articles). A mechanism that permits reasonable, good-faith moderation of content that is obscene, lewd, violent, etc., without fear of liability furthers such freedom of expression. The problem is that the statute was not only written broadly, but initially interpreted by the courts even more broadly in Zeran v. America Online, Inc., creating a string of jurisprudence that reads “too much immunity” into the statute, as Justice Clarence Thomas aptly put it in his recent statement respecting the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC.

Some are correctly calling for Section 230 reform, though they approach it in different ways. For example, some seek simply to treat online service providers as publishers (as opposed to distributors), but this ignores the point of Section 230’s elimination of that distinction under Section 230(c)(1) and its necessary interoperation with the moderation protections afforded under Section 230(c)(2). Others have advocated tinkering with the criteria for moderation of content, such as a longer laundry list of content that can be moderated, or replacing certain definitions in Section 230(c)(2) dealing with the removal of content; these recommendations also miss the mark. First, enacting a larger statutory laundry list applicable to all online platforms is a tall order (let alone one acceptable to the currently hyperpartisan Congress). Further, Section 230 protections have always excluded federal criminal law; limiting moderation immunity to the current list plus illegal content would be helpful, but it does not go far enough.

Personally, I remain an advocate of simple approaches where possible, and Justice Thomas has the right idea. His plain reading of the statute is on point: “the statute suggests that if a company unknowingly leaves up illegal third-party content, it is protected from publisher liability by §230(c)(1); and if it takes down certain third-party content in good faith, it is protected by §230(c)(2)(A).” Amen. This reading leads to three significant and necessary conclusions that are the crux of rethinking Section 230 from my perspective:

  1. Content creators remain liable. It’s easy to miss the forest for the trees here, but we need to remember that Section 230 protects online service providers from being held liable for third-party content. Section 230 does nothing to shield the creators of that content from being held accountable for their own actions online.
  2. If you insert your platform into the conversation, don’t expect immunity for it. The premise here is simple and flows from the above point: if you engage in the conversation, then you should expect to be held accountable for your actions. Section 230 immunity is a privilege; online service providers (including social media platforms like Twitter, Facebook, and YouTube) enjoy this legally conferred benefit only if they meet the criteria for it. By taking steps to limit or otherwise prevent political content from circulating on their platforms, these providers are acting as publishers, and they cannot expect to avoid being treated as one when they act like one.
  3. If you moderate content in bad faith, you lose your immunity as well. Online service providers have the right to run their platforms as they see fit, but that doesn’t mean they can do so in a manner inconsistent with their own terms and policies. From my perspective, an online service provider that chooses to limit certain speech under its terms must enumerate the objectionable content in its policies, moderate such content fairly, and do so consistently in good faith. For example, if Twitter doesn’t like certain content because it feels the information is uncorroborated, then it needs to take the same position on other uncorroborated content, period. That said, we need to be careful here: content moderation will never be perfect, and we don’t want to second-guess such actions (or worse, have the services do so themselves to such a degree that it encourages them to overly moderate content).

From my perspective, online service providers should avoid the issue entirely and stick to their policies rather than play politics with their content. Quite frankly, I find the recent actions not only catastrophically ill-conceived, but just plain stupid. The legislative history of Section 230 shows that Congress expressly sought to encourage online platforms to offer a “forum for a true diversity of political discourse.” Bad faith content moderation flies in the face of this intent. Social media platforms need to act more like a referee in the game, not one of the players; inserting themselves into the conversation is not their place. Of course, they are free (and absolutely have the right) to do so; however, they should not expect to be shielded from liability in the process.


Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at tom.kulik@solidcounsel.com.