Properly Interpreting the Scope of Section 230

By Daniel Lyons

Note: This post and all affiliated content are part of AEI’s Online Speech Project.

Two years ago, Justice Clarence Thomas kicked a hornet’s nest in internet law. In a statement concerning the denial of certiorari in a pending case, he suggested that courts have almost unanimously misinterpreted Section 230, the landmark statute on which much of the current internet ecosystem rests. Although the statement carried no legal force, it lent judicial credence to efforts on the right to rein in the statute’s key protections.

I have mostly criticized the Republican fight against Big Tech. But I agree in part with Justice Thomas that courts’ focus on the “policy and purpose” of Section 230 has led to interpretations at odds with the statutory text. Given that Supreme Court review of the statute seems inevitable, it’s helpful to examine the validity of these arguments.

Supreme Court Justice Clarence Thomas at the Supreme Court building in Washington, DC, June 1, 2017, via Reuters

Publisher Versus Distributor Liability

Section 230(c)(1) provides that no interactive computer service “shall be treated as the publisher or speaker of any information provided by another information content provider.” Congress adopted the statute to correct a New York decision, Stratton Oakmont v. Prodigy Services, which held Prodigy liable for a defamatory comment a user posted on the company’s site. In the offline context, publishers that choose to print a speaker’s words can be sued for defamation just as the speaker can. But given the sheer volume of content online, that rule would place an insurmountable burden on companies to monitor all user speech. So Section 230 provided intermediary immunity for online providers, allowing Twitter, for example, to grant millions of users wide freedom to speak, safe in the knowledge that the company will not be held liable for user misconduct.

Justice Thomas criticizes Zeran v. America Online, an early case testing the limits of publisher immunity. The plaintiff, Kenneth Zeran, sued America Online (AOL) for failing to remove defamatory content even after he informed the company of its existence. Zeran argued he was suing AOL not as a publisher but as a distributor. At common law, distributors are liable for a speaker’s defamation, but only if they actually know the speech is defamatory. Citing Congress’s intent to protect “freedom of speech in the new and burgeoning Internet medium,” the court interpreted “publisher” broadly to encompass both publisher and distributor liability, reasoning that distributor liability is merely a subset of publisher liability. Thomas argues this is improper and that the statute left common law distributor liability intact. But Zeran offered a reasonable explanation as a matter of statutory interpretation. Moreover, while legislative history is suspect, the bill’s author has explained that in his view, the statute was meant to provide comprehensive relief from liability for hosting user content.

Zeran’s interpretation is also better policy. Allowing social media companies to remain liable under a distribution theory would leave them vulnerable anytime a plaintiff can show actual knowledge. This would likely encourage “heckler’s vetoes”: a social media company that receives a complaint about a particular post would simply delete it rather than investigate, avoiding both the cost of investigation and the litigation risk of an erroneous decision to leave the post up. Ironically, narrowing publisher immunity is likely to lead to more censorship, not less.

Hosting Versus Removing Content

But in another context, Thomas is quite right that some courts have improperly expanded Section (c)(1)’s protections. In Barnes v. Yahoo, for example, the Ninth Circuit suggested that (c)(1) “shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties.” This is a potentially important flaw in the judiciary’s expansive reading of Section 230. Section (c)(1) deals with the consequences of hosting user content; on its face it does not mention removal of user content. Section (c)(2) does provide a defense for removal of user content, but only in specific circumstances: to wit, when the provider has determined “in good faith” that the material in question is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

Section (c)(2) raises a host of interesting statutory interpretation questions. For example, does “otherwise objectionable” allow the interactive computer service to remove anything it finds subjectively problematic, or is “objectionable” limited to the type of material enumerated in the preceding list? What does it mean to act “in good faith,” and does that mean a user can challenge a Section 230(c)(2) defense by arguing ulterior motives? These are important questions, but they have few answers, partly because companies can claim blanket immunity for removal as an “editorial decision” protected under (c)(1) without having to reach the limiting language of (c)(2).

I have little doubt that the Supreme Court will eventually accept Thomas’s invitation, “in an appropriate case, [to] consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.” With some limited exceptions, I hope the Court finds that it does—and that it preserves the integrity of this vital component of our modern internet experience.
