Proposals to regulate algorithms overreach significantly

By Daniel Lyons

The Facebook whistleblower’s allegations have highlighted the role that certain
algorithmic functions play in promoting socially undesirable content online.
House Democrats have responded with several bills that would strip social media platforms of the
protection afforded by Section 230 of the Communications Decency Act for content
promoted via algorithm. But “algorithm” is just a fancy word for a computer
program — and punishing the use of algorithms is likely to do more harm than
good for consumers.


Perils of Section 230 reform

As we’ve discussed before, Section 230 is the backbone of American internet
law. Because of Section 230, companies such as Facebook and Twitter could
establish themselves as platforms where users could easily share content with
one another without worrying about whether users’ speech would get them into
trouble. Section 230 created the modern internet by making room for these
companies to operate — not just social media platforms, but any company that
connects users with each other and facilitates the exchange of ideas online.

We tinker with this regime at our peril. Section 230 is
woven into the fabric of online society, making it difficult to predict how a
change to the statute would ripple throughout the internet ecosystem. Congress’
earlier foray into Section 230 reform highlighted this risk of unintended
consequences. In 2017, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act
consequences. In 2018, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act
(FOSTA), which stripped Section 230 protection for certain claims related to
sex trafficking. But the
first comprehensive FOSTA study
concludes that “the threat of an expansive
reading of these amendments has had a chilling effect on free speech, has
created dangerous working conditions for sex-workers, and has made it more
difficult for police to find trafficked individuals.”

Difficulty of regulating algorithms

The current attempt to regulate algorithms risks similar
unintended consequences. This is because algorithms have mixed effects;
they can promote socially undesirable content online, but they also promote
millions of socially beneficial connections every day. Speakers and listeners
alike benefit significantly from companies’ use of personalized algorithms to
organize and curate user-generated content. It would be a mistake to eliminate
those benefits because of the risk of abuse.

For most of human history, information costs were a
significant barrier to education and enlightenment. Knowledge existed in
particular locations, and it cost significant time and money to find, acquire,
and consume that information. The genius of the internet is the reduction of
these information costs to nearly zero: A few clicks grant access to a treasure
trove of information, which can be transported across the planet
instantaneously at a small cost. The challenge of the internet age is therefore
not information costs, but filtering costs — namely, how to sort through this
abundance of information to find the content you desire.

Algorithms are the tools by which platforms provide this
service to their users. In an increasingly diverse society, personalizing
those services grows ever more important, since different users have
different preferences about the type of content they wish to find. As platforms
learn more about their users, they can serve individual users more of the
content that they desire and less of the content that they do not.
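
To make this concrete, here is a minimal, hypothetical sketch in Python of what such personalization amounts to. The rank_feed function and its data shapes are illustrative assumptions, not any platform's actual system: it infers a crude preference profile from the user's past engagement and reorders the feed accordingly.

```python
from collections import Counter

# A minimal, hypothetical sketch of personalized ranking, not any
# platform's actual system. The feed is reordered using a crude
# preference profile inferred from the user's engagement history.

def rank_feed(posts, engagement_history):
    """Order posts by the user's inferred topic preferences."""
    # Topics the user engaged with more often score higher.
    affinity = Counter(engagement_history)
    # Stable sort: posts on unseen topics score zero and sink to the bottom.
    return sorted(posts, key=lambda post: affinity[post["topic"]], reverse=True)

feed = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "gardening"},
    {"id": 3, "topic": "sports"},
]
history = ["gardening", "gardening", "sports"]

# The gardening post rises to the top for this user: prints [2, 3, 1].
print([post["id"] for post in rank_feed(feed, history)])
```

Real recommendation systems are vastly more sophisticated, but the principle is the same: the more the ranking reflects an individual user's history, the better the feed matches what that user wants to see.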

It is difficult to regulate algorithms in a way that
captures negative effects while preserving these positive effects. One bill,
for example, would strip Section 230 protection for information promoted via
personalized algorithm, defined as “any computational process, model, or
automated means of processing to rank, order, promote, recommend, amplify, or
similarly alter the delivery or display of information” by using “information
specific to an individual.” These are incredibly broad terms. Facing the risk
of losing the all-important Section 230 shield, platforms are likely to reduce
— quite significantly — their filtering and sorting services, even when those
services provide significant benefits to users. Whatever social gains we
achieve by reducing algorithmic promotion of undesirable content would be
dwarfed by the loss of the ability to personalize one’s feed and easily find
the content one desires. In effect, we would remove the “social” from “social
media.”
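
To see just how broadly that statutory language sweeps, consider a deliberately trivial, hypothetical example (the function and data below are illustrative assumptions, not drawn from the bill or any platform): a plain reverse-chronological feed that merely hides posts from accounts the user has muted.

```python
# A bare-bones feed: newest first, minus muted accounts. Even this would
# arguably qualify as a "personalized algorithm" under the bill's text,
# since the mute list is "information specific to an individual" and the
# filtering and sorting "alter the delivery or display of information."

def chronological_feed(posts, muted_accounts):
    """Show the newest posts first, skipping anyone the user has muted."""
    visible = [p for p in posts if p["author"] not in muted_accounts]
    return sorted(visible, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"author": "alice", "timestamp": 3, "text": "hello"},
    {"author": "bob", "timestamp": 2, "text": "spam"},
    {"author": "carol", "timestamp": 1, "text": "news"},
]

# bob's post is hidden; alice's newer post appears before carol's.
for post in chronological_feed(posts, muted_accounts={"bob"}):
    print(post["author"], post["text"])
```

If even a mute button can arguably trigger the carve-out, platforms facing liability have little choice but to pare back personalization across the board.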

Congress seeks to punish so-called “malicious algorithms.” But these algorithms are
not intentionally promoting harmful content. Rather, they respond to demand by
promoting the content that people want to see. The real culprits are the
producers of socially undesirable content and the social factors that make it
so appealing to large numbers of end users. We learned from FOSTA that
tinkering with intermediary liability does not address the underlying drivers
of problematic behavior. It trains the legal system on deep pockets rather than
bad actors and distorts human behavior in unpredictable ways.

For more discussion of this issue, please read my recent testimony before the House Energy
and Commerce Committee’s Subcommittee on Communications and Technology.
