Market Solutions for Social Media Content Moderation

Note: This post and all affiliated content are part of AEI’s Online Speech Project.

Content moderation on social media sparks controversy. Arguments for greater restrictions or, conversely, more user liberty are often framed as matters of law, censorship of disfavored ideas, politics, or management of a public square. The social media platforms themselves sometimes treat moderation as a matter of governance, fact-checking, or community response.

These approaches are largely top down; that is, a platform or the government designates someone to be in control. What’s missing are content moderation approaches that leverage economic incentives. Economic incentives have been the dominant force managing content for centuries and could be put to greater use in social media today.


How did we get here? Early online content moderation often led to lawsuits, which were costly and created uncertainty. These led former Rep. Chris Cox (R-CA) and then-Rep. (now Sen.) Ron Wyden (D-OR) to draft Section 230 of the Communications Decency Act of 1996, which provides legal protection for platforms.

But the law did not end the controversies. Lawsuits continue, focusing largely on the interpretation of Section 230. The Electronic Frontier Foundation identifies 25 key legal cases on Section 230—almost one per year—and explains that these are but a small sample of the numerous lawsuits that have ensued over the past 26 years.

Perceived content bias and questionable expertise drive some of the conflicts. Pew Research Center found in 2020 that 73 percent of US adults believe social media sites “intentionally censor political viewpoints that they find objectionable.” Only 14 percent of Republicans are confident that platforms can accurately label content as inaccurate, whereas 46 percent of Democrats are confident. Examples of potential bias include Meta’s platforms siding with President Joe Biden on the economic definition of recession, and my AEI colleague Bret Swanson points out that tech platforms have gone so far as to call balls and strikes on medical controversies.

Before the internet, content creation and distribution were costly, which limited the number of voices that would be heard while also regulating their substance. Readers’ preferences determined book publishers’ choices of authors and titles. Viewers’ willingness to give time and attention drove broadcasters’ picks for news stories, commentaries, and speakers. And readers’ willingness to buy affected newspaper editors’ decisions regarding stories, reporters, and layouts. As historian John Sommerville pointed out, the broadcasters and editors also styled their content to ensure daily news consumption.

In each legacy technology, the economic laws of supply and demand played leading roles in moderating content: The content that was distributed widely had to appeal to large audiences.

The internet as we know it today transformed the economics of content. Web 2.0 facilitates user-generated content by making content production cost little more than the creator’s time. And social media’s zero-monetary-price-for-users business model—borrowed from broadcast media and some newspapers—made time the primary cost for viewers. Of course, there are consequences of using social media that fall below users’ awareness thresholds: Some platforms and some content providers have followed legacy media’s lead and incorporated features that encourage more and more consumption.

The low monetary costs to supply and consume mean there are more speakers and greater varieties of content under Web 2.0 than when legacy media models ruled. But the massive jump in volume increased users’ costs of choosing, so platforms developed algorithms that in effect make choices for users. These algorithms are imperfect, for several reasons: their artificial intelligence has inherent limits, the data they rely on reflect imperfect human decision-making, the sheer volume of posts is hard to manage, and users’ content-moderation preferences are broadly diverse.

Platforms can profitably change these economics for Web 2.0 and the metaverse by changing platform design and pricing. Regarding design, rather than effectively placing billions of users in a single room, platforms can help like-minded people create their own spaces that form their own rules. Meta’s Horizon Worlds’ personal space is a step in this direction.

Platforms can create new pricing models that align creator and reader interests. Researchers from the University of Pennsylvania recently provided insights toward this end. They explain that the importance of platform content moderation depends on the composition of users, with moderation rising in importance if users are more interested in consuming than creating. With this insight, the researchers find that content moderation with subscription fees results in less extreme content than if the platform relies on only advertising revenue. Indeed, they conclude that subscription fees decrease the need for content moderation and incentivize platforms to invest in better technologies. On the other side of the market, charging creators can also lead to platforms with higher concentrations of valuable content.

In short, platforms will find content moderation less controversial and less burdensome if they diversify their platform design and pricing strategies. Incentives are powerful. They’ve created the America we enjoy today, and they can help shape that of tomorrow too—if we let them.
