Twitter Should Moderate Communities, Not Content

Note: This post and all affiliated content are part of AEI’s Online Speech Project.

With Elon Musk’s acquisition of Twitter now finalized, the issue of how he is going to run the company is fully joined. Musk announced that there would be no change to Twitter’s content policies until a “content moderation council” is convened. It’s a good time to consider the truly vast range of options available to Musk and Twitter. In my opinion, Twitter should de-emphasize content moderation in favor of community moderation.

In testimony to the Senate Judiciary Committee earlier this year, I discussed content moderation in the context of a proposal to force transparency on internet platforms. I noted how some users are relentless in their efforts to reverse engineer and then evade every content moderation policy and practice. Content moderation is similar to platform security; attackers are motivated, adaptive, and wily.

As I did in my testimony, my colleague Daniel Lyons recently discussed giving content moderation tools to users. That would be consistent with what makes the internet great: This “dumb” network pushes innovation and responsibility to the edges, which is smart. With tools in hand, users can array their motivations, adaptability, and wiles against social attackers. Importantly, this can weaken the bipartisan political demand for companies to “fix” content.

But users may not be able to overcome their fascination with conflict.

Twitter need not wait for users to mature on their own. Information that can tune the community’s zeitgeist is there to be harvested. Think in terms of reputation systems.

Credit reporting and Google’s PageRank algorithm are both reputation systems. They use information gathered from third parties to gauge a given resource’s reliability. In the case of credit reporting, records of reliable bill payments tell you how likely a person is to pay bills in the future.

In PageRank, the focus is on links. A hyperlink from one web page to another is a vote saying, “This resource is good.” Pages with more links pointing their way are probably better resources than pages with fewer. The process is recursive: If a web page that others recognize as a good resource points to another web page, that can be treated as a stronger upvote than a vote from a lower-ranked page.
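To make that recursion concrete, here is a minimal Python sketch of the PageRank idea. The toy link graph is invented for illustration, and the 0.85 damping factor is simply the value from the original PageRank paper; none of this is Google’s production code.

```python
# Minimal PageRank sketch: rank flows along links, so a vote from a
# high-ranked page counts for more than a vote from a low-ranked one.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the pages it links to (its 'votes')."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start everyone out equal

    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:
                continue  # dangling pages cast no votes
            share = damping * rank[page] / len(targets)  # split this page's vote
            for target in targets:
                new_rank[target] += share  # recursive: rank begets rank
        rank = new_rank
    return rank

# Toy graph: both "a" and "b" link to "hub", so "hub" ranks highest.
toy_links = {"a": ["hub"], "b": ["hub"], "hub": ["a", "b"]}
print(pagerank(toy_links))
```

Swap “pages” for accounts and “links” for follows, and the same recursion ranks people instead of web pages.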

The same type of process can animate Twitter’s curation of community. In general: Follows are votes. Retweets are votes. Replies are votes. And so on.

Those indicia are subject to gaming. But many others are harder to game, or they can’t be gamed at all. All of them can be fed into a reputation algorithm that rewards productive contribution and disincentivizes antisocial behavior.

How long has an account existed? How much does an account parrot others or itself? Does the account have the daily, weekly, and monthly participation cadence of a well-adjusted human? Has the account holder submitted (optional) proof of identity? Paid a fee? Does the account connect from a normal distribution of IP addresses? Does the account have a variegated (non-bubble) network of followers and followees? Does the account follow links? How long does the account take to write tweets or replies? Does the account abandon and delete tweets? Does the tenor or language of an account’s interactions follow healthy patterns? Does the account respond to randomized CAPTCHAs?

Some of these data are probably used now to help root out “fake” accounts. All these things and many more could be tested as dials that tune the community toward normalcy by increasing the public visibility of “healthy” accounts.
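As a rough illustration of how such dials might combine (the signal names, weights, and formula below are invented for this sketch, not anything Twitter actually computes), consider:

```python
from dataclasses import dataclass

# Hypothetical signals drawn from the questions above. Field names,
# weights, and the scoring formula are illustrative assumptions only.

@dataclass
class AccountSignals:
    account_age_days: int
    repetition_ratio: float    # 0.0 (all original) to 1.0 (pure parroting)
    humanlike_cadence: float   # 0.0 to 1.0, from posting-time patterns
    verified_identity: bool    # optional proof of identity submitted
    follower_diversity: float  # 0.0 (bubble) to 1.0 (variegated network)
    captcha_pass_rate: float   # 0.0 to 1.0 on randomized challenges

def reputation_score(s: AccountSignals) -> float:
    """Fold many signals into one dial: higher means more visibility."""
    score = 0.0
    score += min(s.account_age_days / 365, 3.0)  # longevity, capped at 3 years
    score -= 2.0 * s.repetition_ratio            # penalize parroting
    score += 2.0 * s.humanlike_cadence
    score += 1.0 if s.verified_identity else 0.0
    score += 1.5 * s.follower_diversity
    score *= s.captcha_pass_rate                 # failing CAPTCHAs gates everything
    return score

# A long-lived, varied, human-paced account outranks a repetitive new one.
healthy = AccountSignals(1200, 0.1, 0.9, True, 0.8, 1.0)
botlike = AccountSignals(30, 0.9, 0.2, False, 0.1, 0.6)
print(reputation_score(healthy), reputation_score(botlike))
```

The point is not these particular weights but the architecture: many cheap, hard-to-game signals folded into one score that governs visibility rather than permission to speak.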

There is no need for this approach to be a black box eliciting paranoid claims of shadow banning. Publish at least summary guidelines of what the algorithm prefers so that people can adjust their behavior accordingly. If an obsessive person must post judiciously to be heard, so be it. If a “fake” account must act like a healthy human to survive, what’s the difference?

If you think this style of moderation is impossible, consider an arena in which people actually earn money by defeating a reputation system: Google search. Google has done this kind of moderation at scale; ask Matt Cutts, who long led its webspam team. Google keeps “search engine optimization” in check well enough to continue thriving as a business.

In opining here, I’ve said nothing about the substantive content of tweets. Other than universally disapproved gore, scatology, and so on, it’s a fool’s errand to try to moderate content based on content. The alternative is moderating communities—not in the censorious mode that addresses what people can say, but in a civilizing mode that addresses how people can act. Regardless of content, Twitter can give more voice to people whose behavior is moderate.
