Section 230’s half-bad economics

The future of Section 230 of the Communications Decency
Act hangs in the balance. Some believe it is essential for interactive computer
services. (I’ll just call them platforms.) Others believe its time has passed.
My concern is that Section 230 has some bad economics.


The law generally protects platforms such as Twitter from being
held liable for things they largely do not control, such as what users tweet.
This protection has been important for making it financially viable for
platforms to permit something resembling public forums.

But Section 230 also protects platforms from liability for things they do control, such as how they moderate content and manage users. Some argue this simply provides statutory cover for a platform’s First Amendment right to freedom of speech. I’ll let the lawyers and courts fight that out. My concern is with the economic incentives. Protecting a business from liability for damage it does to others — without them being informed of and accepting the risks — distorts incentives.

What does Section 230 actually say?

The statutory language for Section 230(c)(1) says, “No provider or user of an
interactive computer service shall be treated as the publisher or speaker of
any information provided by another information content provider.” Courts
have said this
means a platform can be held liable for content only if it is at least
partially responsible for creating the offending content.

Section 230(c)(2) says,

No provider or user of an interactive computer service shall be held liable on account of:

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

Courts have concluded that this means a platform will not be held
liable for filtering content or blocking users as long as it is done in good
faith and based on the platform’s community standards.

Recently, Supreme Court Justice Clarence Thomas weighed in on
how courts interpret Section 230. Among other things, he questioned courts'
holdings that Section 230 protects platforms from liability when they select,
edit, feature, or exclude certain submissions and when they have
defective rules that facilitate things such as human trafficking, terrorism,
harassment, and impersonation.

Where does Section 230 distort incentives?

Section 230(c)(2) appears to be problematic because it limits the
consequences platforms face for some of their own actions. If it is
true that Section 230(c)(2) is little more than a procedurally efficient restatement of
platforms' First Amendment rights, then my argument here is moot. Credit to my AEI
colleague Jim Harper for that insight, but I believe there is more to Section 230(c)(2).

One reason markets are effective in improving people’s lives is
that markets align consequences with actions. Using one of Adam Smith’s
examples, a baker serves the interest of a
buyer because the baker benefits if he or she does a good job supplying that
customer. Conversely, the baker receives no benefit if the customer deems the
good unworthy of purchase.

Section 230(c)(1) is consistent with this market principle because
it assigns liability for content to the producer of the content, not the
platform. But Section 230(c)(2), as currently understood by courts, conflicts
with this principle by protecting a platform from some — but not all — of the
consequences of selecting, editing, promoting, or excluding content or users.

Certainly, there are normal business incentives to perform content
moderation well. For example, Facebook has an incentive to provide content
moderation that attracts user attention away from Twitter. But there are
conditions under which non-platform businesses can be held liable for actions
that damage someone else’s reputation, ability to engage in civic matters, or
ability to conduct business. These potential liabilities incentivize these
non-platform businesses to internalize the potential damages and choose how
much they will spend to avoid the harms.

But Section 230(c)(2) softens these normal business incentives for
decisions that fall under Section 230. As a result, platforms are motivated by
other considerations, which might include yielding to political pressures or
favoring political allies more than other types of businesses would. These
other considerations lower platforms' economic effectiveness. That lost
efficiency cascades through the rest of the economy because platforms otherwise
provide fertile ground for small business development and effective advertising.
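
To make the incentive point concrete, here is a stylized sketch of my own, with hypothetical numbers that are not drawn from the article or from any real platform's costs. It shows how a firm that bears liability for harms will choose to spend more to prevent them than a firm shielded from most of that liability.

```python
# Stylized, hypothetical illustration: a firm chooses moderation spending m
# to minimize its expected total cost
#   cost(m) = m + p(m) * L
# where p(m) is the probability a harmful incident occurs (falling as m rises)
# and L is the liability the firm actually bears if harm occurs.
# All parameter values below are invented for illustration only.

def expected_cost(m, liability, base_risk=0.5, effectiveness=0.1):
    """Expected cost of choosing moderation spend m (hypothetical units)."""
    incident_probability = base_risk / (1 + effectiveness * m)
    return m + incident_probability * liability

def optimal_spend(liability, candidates=range(0, 201)):
    """Pick the spend level that minimizes expected cost over a coarse grid."""
    return min(candidates, key=lambda m: expected_cost(m, liability))

# Full liability (harms internalized) versus largely shielded liability:
full = optimal_spend(liability=1000)     # firm bears the full damage
shielded = optimal_spend(liability=100)  # most of the liability is shifted elsewhere

print(f"Optimal moderation spend with full liability:    {full}")
print(f"Optimal moderation spend with shielded liability: {shielded}")
```

Under these made-up numbers, the shielded firm spends far less on avoiding harm, which is the distortion the argument above describes: the damage still occurs, but it falls on someone who never agreed to bear it.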

What should be done?

Thomas and Harper appear to support a
common law approach, which would modify the application of Section 230 without
changing its text. This may be a viable option.

Another option would be to change the statutory language to better
align liability incentives with deliberate actions. This is challenging for two
reasons. First, there may not be sufficient political alignment because
Democrats and Republicans generally disagree on what content moderation is
appropriate. Second, it would be difficult to balance platforms' First
Amendment rights with users' rights to speak, hear, and engage in
platform-based commerce.
