5 questions for Jeff Kosseff on Section 230 and online content moderation

Section 230 of the Communications Decency Act has become a
focal point of controversy among those concerned that social media
companies are biased in their content moderation practices. So I recently
spoke with Jeff Kosseff to clear up what Section 230 actually says and
how it should be regarded 25 years later.

Jeff is an assistant professor of cybersecurity law in the US Naval Academy’s cyber science department. He is also the author of a 2019 book on Section 230, The Twenty-Six Words That Created the Internet.

Below is an abbreviated transcript of our conversation.
You can read our full discussion here. You can also subscribe to my podcast on Apple Podcasts or Stitcher, or download the podcast on Ricochet.

Do internet
companies need to be politically neutral to receive Section 230 protection?

No. Section 230 was intended to clarify that the
government would not impose liability on internet companies even if they
moderated their content, and this exemption is not based on any requirements
for neutrality, whether political or otherwise. In fact, this lack of any
neutrality requirement actually gives the platforms the flexibility that they
need to do what they think best serves their users. Section 230 is very much a
market-based law: if a platform really messes up its moderation, at least in
theory, users will move to competitors. And if a platform does a good job of
moderating what its users don’t like, they’ll be satisfied and stay with the
platform. That’s basically the theory that Section 230 operates under.

So there’s no requirement of neutrality. And in fact, even
if there were a requirement of neutrality, there would be significant First
Amendment issues associated with it.

There’s a
phrase in the law about “good faith.” If these companies are
politically biased in their moderation efforts, are they failing to engage in a
“good faith effort” as outlined, and therefore in violation of
Section 230?

We don’t have much case law determining what “good faith”
means, but it’s often irrelevant to content moderation concerns. Here’s why: Section
230 has two main provisions. The first portion says interactive computer
service providers are not treated as the publishers of content that’s provided
by someone else, and this does not
have a good faith requirement. It’s the second provision that says if you make good
faith efforts to block access to objectionable material, you’re also not liable for
that.

In cases involving content moderation, when it has come
down to a Section 230 issue, courts often will resolve it under the first
provision of the law, which does not have a good faith requirement. And for
many other moderation-related cases, Section 230 doesn’t even come into play.
Often, it’s the First Amendment that matters instead, and the courts have
repeatedly said that we do not have First Amendment claims against private
companies when they restrict our speech — it’s a longstanding legal concept
known as the “state action doctrine.” In both cases, the “good faith”
provision is not relevant to court decisions about content moderation.

I don’t want to minimize people’s concerns. We have a few
large companies controlling significant avenues for speech, and their power over
people’s livelihoods is greater now than it was in 1996. It’s a very real
problem. But I don’t think changing Section 230 is going to address the
concerns that people are raising. If anything, changes to Section 230 will
actually make it more difficult for people to get their viewpoints across.


What would
happen if we did reform or repeal Section 230?

I don’t think the changes that I’ve seen proposed would
make social media companies less likely to moderate content. Particularly if we
repealed Section 230, the platforms would suddenly face significantly increased
liability. And I don’t think the reaction of platforms to this would be,
“Well, let’s start allowing more
controversial speech.” Instead, you’d get a locked-down, boring internet,
because social media sites would be very worried about liability.

Also, if you’re concerned about Twitter and Facebook
having power, this would likely be worse without Section 230. I’m pretty
certain that Facebook and Twitter are going to survive whatever new legal
standard emerges, because they’re big, they have very large Washington DC
staffs that can influence how things go, and they have enough money to be able
to implement whatever moderation practices or technologies are necessary to
meet the new post-230 standards. Meanwhile, the smaller companies competing
with them might not be able to afford those standards. So I’m concerned that changing
or getting rid of Section 230 could lead to even more consolidation of venues
for opinion.

Some critics
have said that these companies need to be more transparent about their content
moderation practices. Is this a fair criticism?

Yes, I think it is. Before 2016 or 2017, these companies operated
so secretly that you really had very little insight into how and why they made
their decisions. Now, they’ve recently become far more transparent in terms of
explaining their processes and having more detailed policies, and that’s a
great thing, because you’re never going to satisfy everyone with content
moderation. When you get to difficult decisions regarding heated political
discussions or potential disinformation, it’s very useful for the platforms to clearly
explain what their standards are. You might not agree with the standards, and
that’s fine, but it’s good to at least have an explanation for why they took
the action that they did.

Now, that’s hard to do. Take Twitter, which handles thousands
of tweets per second. Even if you’re only taking action on a fraction of those,
there are going to be a lot of different contexts in which you’re making your
decisions, and it might be difficult to explain all of them satisfactorily. But
at least giving an idea of the framework behind those decisions is a big
improvement.

Are there any
substantial changes you would like to see which would make Section 230 a better
law?

There are a few things, but here’s the most important:
Let’s say that you posted something defamatory about me on Facebook. Section
230 would prohibit me from successfully suing Facebook for what you posted even
if I complained to them and they didn’t take it down. I could still sue you,
but even if I got a court order that said it was defamatory, I couldn’t use that
order to force Facebook to take the material down. There’s just no rational
explanation for this.

This is important for the individual plaintiffs, including people who have had horrific things written about them that are ruining their lives. They can’t get platforms to take that stuff down, and my concern is giving them a mechanism to do so.
