Section 230 Goes to the Supreme Court

Note: This post and all affiliated content are part of AEI’s Online Speech Project. To learn more about the project, click here.

Section 230 is the backbone of internet law. It is also the subject of increasing criticism from politicians on both sides of the political aisle. But perhaps surprisingly, during its quarter-century history, this landmark statute has never faced Supreme Court scrutiny.

Until now.

Earlier this month, the Court agreed to hear Gonzalez v. Google. The case, which sits at the intersection of internet law and antiterrorism law, raises important questions about the scope of Section 230’s immunity for companies that host user content online. The Court’s decision is likely to have a significant impact on the internet ecosystem.

At issue is a suit against Google by the family of Nohemi Gonzalez, a US citizen killed in a 2015 ISIS terrorist attack in Paris. Plaintiffs allege that ISIS posted videos on YouTube seeking to incite violence and recruit potential supporters and that Google aided these efforts by algorithmically recommending the videos to users worldwide, allowing ISIS to enlist recruits and carry out operations beyond its base in Iraq and Syria. The Ninth Circuit dismissed this claim, holding that Section 230 foreclosed this theory of intermediary liability.

The case presents a core question about the scope of Section 230’s immunity provision. Under the statute, a platform or its users shall not be treated as the speaker or publisher of information provided by another content provider. This means that while those posting ISIS videos can be sued for their content, YouTube cannot be sued merely for hosting ISIS’s message. Plaintiffs seek to skirt this immunity by focusing instead on YouTube’s decision to recommend ISIS content to specific users.

Google builds user profiles by recording users’ online behavior, then uses that data to recommend particular content (through an automated process) to users whose profiles suggest they would be receptive to it. Plaintiffs claim these targeted recommendations fall outside the scope of Section 230. They are suing Google not as ISIS’s publisher but as its promoter and recruiter.
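
To make that description concrete, here is a minimal, purely hypothetical sketch of profile-based recommendation in Python. The data, tags, and scoring function are illustrative assumptions, not a description of YouTube’s actual system; the point is only the general pattern the plaintiffs describe: record behavior, build a profile, and surface the content that best matches it.

```python
from collections import Counter

# Hypothetical illustration only: a toy profile-based recommender.
# Real systems are vastly more complex; this sketch shows only the
# basic pattern: record behavior, build a profile, match content to it.

def build_profile(watch_history):
    """Summarize a user's past viewing as a count of topic tags."""
    profile = Counter()
    for video in watch_history:
        profile.update(video["tags"])
    return profile

def recommend(profile, candidates, top_n=3):
    """Rank candidate videos by how well their tags match the profile."""
    def score(video):
        return sum(profile[tag] for tag in video["tags"])
    return sorted(candidates, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    history = [
        {"title": "Clip A", "tags": ["politics", "extremism"]},
        {"title": "Clip B", "tags": ["politics"]},
    ]
    candidates = [
        {"title": "Cooking show", "tags": ["food"]},
        {"title": "Rally footage", "tags": ["politics", "extremism"]},
    ]
    profile = build_profile(history)
    for video in recommend(profile, candidates, top_n=1):
        print(video["title"])  # prints the content most similar to past viewing
```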

While the plaintiffs’ argument has some initial appeal, my sense is that it should fail. Whether the conduct is described as “publishing” or “recommending,” the suit ultimately seeks to hold Google liable for sharing user content with other users, which is precisely what Section 230 forbids. Plaintiffs concede that Section 230 protects platforms’ performance of “traditional editorial functions,” such as whether to display or withdraw content. But to use a newspaper analogy, publishing involves more than the binary question of whether to include a particular article. Editorial functions also include decisions about which content to feature on the front page, which to bury on the back page, and how much space to give each story; in other words, how the publication is presented to consumers.

Plaintiffs are hard-pressed to explain where publication ends and recommendation begins. The choice not to block content is, in effect, a decision to recommend it to users. And because an “algorithm” is simply a set of instructions a computer follows, even a protocol that merely lists all platform content chronologically is an algorithmically determined recommendation to users. Notably, Section 230 protects not only platforms but also other users. If the statute does not insulate recommendations, one might ask whether individual Twitter users risk liability with every retweet.
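
As a toy illustration of why that line is hard to draw, the following sketch (a hypothetical, not any platform’s actual code) orders the same set of posts two ways: a “neutral” reverse-chronological feed and a profile-matched feed. Both are ordinary programs that decide which content a user sees first.

```python
from datetime import datetime

# Illustrative assumption: two ways a platform might order a feed.
# Both are "algorithms" in the ordinary sense: programs that decide
# which content a user sees first.

posts = [
    {"text": "Post A", "time": datetime(2022, 10, 1), "matches_profile": False},
    {"text": "Post B", "time": datetime(2022, 10, 3), "matches_profile": True},
    {"text": "Post C", "time": datetime(2022, 10, 2), "matches_profile": False},
]

# A "neutral" reverse-chronological feed is still an algorithmic choice.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# A personalized feed promotes content that matches the user's profile.
personalized = sorted(posts, key=lambda p: p["matches_profile"], reverse=True)

print([p["text"] for p in chronological])  # ['Post B', 'Post C', 'Post A']
print([p["text"] for p in personalized])   # ['Post B', 'Post A', 'Post C']
```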

The plaintiffs’ theory of liability also represents bad policy. I’ve discussed before that while the internet reduces information costs, the trade-off is increased filtering costs—the cost of sorting this abundance of information to find the content you desire. We should encourage, not discourage, companies like Google to experiment with new and better ways to reduce these filtering costs through algorithms, to get consumers what they want more efficiently.

But it is unclear whether these arguments will carry the day. Three conservative justices have suggested that lower courts may be interpreting Section 230 more broadly than its text will bear, a sentiment with which I agree, at least in part. And the Facebook whistleblower’s allegations have cast an unfavorable light on algorithmic recommendations, as evidenced by the numerous pending bills that would regulate the practice.

Two years ago, Justice Clarence Thomas wrote, in a statement accompanying the Court’s denial of review in another case, that “in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by internet platforms.” It appears that at least three of his colleagues agree that this is indeed that case.

Section 230 has generated a lot of heat and light over the past few years. But I fear the Court’s Gonzalez v. Google decision may spark a full-blown conflagration in internet law.
