In defense of Section 230: My long-read Q&A with Jeff Kosseff

Section 230 of the Communications Decency Act has come
under fire from people on both the left and the right. Many Democrats fear that
the provision enables too much hate speech and disinformation, and a lot of
Republicans believe that it sanctions anti-conservative bias from social media
companies. President Trump in particular has been demanding a repeal of Section
230 for the last month, even going as far as to veto the 2021 National Defense
Authorization Act over this issue. But a lot of the conversation surrounding
Section 230 seems to operate under a misunderstanding about what the law
actually says and does. So earlier this week, I spoke with Jeff Kosseff to
clarify the original intent and current effects of Section 230.

Jeff is an assistant professor of cybersecurity law in the US Naval Academy’s cyber science department. He is also the author of a 2019 book on Section 230, The Twenty-Six Words That Created the Internet.

What follows is a lightly edited transcript of our conversation, including brief portions that were cut from the original podcast. You can download the episode here, and don’t forget to subscribe to my podcast on Apple Podcasts or Stitcher. Tell your friends, leave a review.

What problem
was Section 230 trying to solve?

So, Section 230 was actually trying to solve two problems
at the same time. First, there was a concern that the existing First Amendment
and common law protections for distributors of the content that other people
produce — the way the courts had interpreted them — basically meant that
platforms would have a disincentive for moderating, because the way that the
rules worked, at least according to some courts, was that if you engaged in any
moderation, you increased your liability for all of the content on your
services. This was 1996, when people were very concerned about children being
able to access pornography, and Congress did not want to discourage the
platforms from moderating. So, one thing that Section 230 was intended to do
was to clarify that we’re not going to impose liability because you’re moderating.
In fact, we wanted to encourage you to moderate.

As for the second goal of Section 230, remember that this
era was really the dawn of the commercial internet. And Congress wanted to let
the internet flourish with a minimal amount of regulation and litigation. So,
those were really the twin goals of Section 230.

We hear this
claim all the time: Do internet companies need to be neutral in some fashion —
i.e. politically neutral — to receive that Section 230 protection?

No, Section 230 does not have any requirements for
neutrality, whether political or otherwise. When you really look at how Section
230 operates, it’s actually intended to not impose a neutrality requirement to
give the platforms the flexibility that they need to do what they think best
serves their users. Section 230 is very much a market-based law, in that if a
platform really messes up its moderation — at least in theory — the users will
seek out competitors. And if a platform does a really good job at moderating
what its users don’t like, that will satisfy them and they’ll want to continue
with the platform. That’s basically the theory that Section 230 operates under.

So there’s no requirement of neutrality. And in fact, even
if there was a requirement of neutrality, there would be some significant First
Amendment issues associated with that.

There’s a
phrase in the law about “good faith.” And I’ve heard that if these
companies are politically biased in their moderation efforts, then they’re not
engaging in a “good faith effort” as outlined in Section 230, and
therefore they’re in violation of Section 230. So again don’t receive any
protection if they’re not operating in good faith, and good faith would be
violated by political bias. Is that valid?

So Section 230 has two main provisions. One is the portion
of the law that the vast majority of Section 230 cases have been decided under,
and that’s what I say are the “26 words that created the internet” —
the portion of the law that says if you’re an interactive computer service
provider, you are not treated as the publisher of content that’s provided by
someone else. That does not have a good faith requirement.

Facebook CEO Mark Zuckerberg testifies before the House Judiciary Committee’s Subcommittee on Antitrust, Commercial, and Administrative Law, July 29, 2020, via Reuters

There’s a second provision that says if you take “good faith” efforts to block access to a long laundry list (including “otherwise objectionable” material), you’re also not liable for that. Now, even in cases involving content moderation, when it has come down to a Section 230 issue, courts often will resolve it under the 26 words, because that does not have a good faith requirement. So we don’t have very much case law determining what good faith means, but it’s often irrelevant.

Also, for moderation-related cases, there have been some
cases where platforms have been sued for doing things to block users or block
content, and it has not even come down to Section 230. It’s a First Amendment
issue, because the platforms are sued for violating the First Amendment rights
of their users. And what the courts have repeatedly said is: A private company,
like a social media company — while it does have a tremendous impact on speech,
we can’t dismiss that at all — is not subject to the First Amendment. If the
government restricts speech, you could have a First Amendment claim, but if a
social media site restricts speech, you don’t have a First Amendment claim
against them. It’s a longstanding legal concept known as the “state action
doctrine.” So Section 230 often doesn’t even come into play, because
there’s frankly just not a cause of action to sue under.

When you were
describing what problem Section 230 was originally meant to solve, you
mentioned a lot of concerns about what kids are seeing on the internet,
primarily pornography. So I think a lot of people think that’s what content
moderation should be about. And indeed, in the law, it does list a laundry list
of things — it lists “obscene, lewd, lascivious, excessively violent,
harassing, or otherwise objectionable” content. Some people interpret that
list to mean that political viewpoints should not qualify. They say, “If
you want to restrict violent or lewd material, fine, but not political
material. That is not in that list, and therefore it does not fall into the
category of permissible content moderation.”

There are a few things associated with that argument. The
first is that when you actually look at the claims where platforms are alleged
to be politically biased, it’s often a little tougher than just saying they
don’t like one political viewpoint or another. A lot of times, it comes down to
things like hate speech, and a platform might say, “We think that this
statement violates our hate speech policies.” But then the person who said
it and many of their defenders would say, “No, this is just a heated
political debate and you’re blocking a legitimate political viewpoint.”
These are tough questions, and I’m not weighing in on which side is correct,
but I am asking, “Could hate speech qualify as ‘otherwise objectionable?’”
That’s one issue.

The second issue gets back to the “what is the cause of action?” question: How could you successfully sue a platform for saying, “We don’t want this kind of content on our private property”? That’s a really tough question. Perhaps if the platform had a terms-of-service policy that it violated — i.e. it had previously said, “We allow all viewpoints” — but they could solve that by just changing their terms of service. That’s why it’s a very real issue.

I don’t want to minimize people’s concerns. We have a few
large companies controlling really significant avenues for speech, and their
power is very different than it was in 1996. And I think that too many times,
people who are just defending the tech sector at all costs don’t see that. Twitter,
Facebook, and YouTube have huge control over people’s livelihoods — there are a
lot of people whose livelihoods depend on their social media accounts. You
didn’t have that in 1996. While CompuServe, Prodigy, and AOL had some power,
you probably weren’t going to lose your livelihood if you got kicked off one of
them, but you could now.

So I think it’s a very real problem. But the issue is: Is
there a legal solution with or without Section 230? And you can do all you want
to Section 230, but I don’t think it’s going to address the concerns that
people are raising. If anything, changes to Section 230 will actually make it
more difficult for people to get their viewpoints across.

To follow up a bit on the political viewpoint issue, what you’re saying is that oftentimes, when people point out that a political viewpoint is being banned, what’s typically being banned is not someone’s views about tariffs, marginal tax rates, or entitlements. What is being banned is something that the company might perceive as hate speech, so it’s easier to see how that might fall under the “otherwise objectionable” part of the law, right?

That goes for hate speech or misinformation. That’s a tough one, because you ultimately have a private company making decisions such as: “Is this claim about coronavirus correct or not? What do we do about it? Do we leave it up with a label? Do we take it down altogether?”

And content moderation is really tough. When I talk about
Section 230, I’ll often get someone saying, “We have really smart
artificial intelligence (AI). Why don’t you just throw AI at the problem?”
And that’s not going to solve these issues, because oftentimes you could have
two really well-reasoned arguments as to why it should either stay up or come
down and you ultimately have to use some discretion. That’s one of the reasons
for Section 230: recognizing that we need to give a little breathing room for
this discretion.

Yes, I think
when people say, “Let AI solve it,” it’s often from people who don’t
know exactly what AI is, and they’re treating it almost like it’s a magic
spell.

Are there
kinds of political viewpoints, then, that you think probably would not fall
under “otherwise objectionable”? To use my example from earlier, if a
company wanted to ban a certain viewpoint about budget deficits — they don’t
want any content about balancing the budget, they like budget deficits — could
they do that?

So I think we’re going down a little bit of a rabbit hole
with this “otherwise objectionable” thing, because it’s a separate
part of Section 230. Section 230 (c)(1) is the “26 words.” Section
230 (c)(2) has that “otherwise objectionable” list. Section 230
(c)(1) does not depend on (c)(2). So the vast majority of litigation benefits
of Section 230 have been protecting companies from liability for content that
has been on their site, not content they’ve taken down. That is why the
platforms rely on Section 230. Those protections don’t rely in any way on the
“otherwise objectionable” and good faith provisions that are in
another part of Section 230. The courts just have not linked those two.

Beyond that, even if there was a moderation case, I have
not yet heard a case for what claim someone would be able to make against the
platform. We could keep debating what’s in the “otherwise
objectionable” provision, but that really misses the mark — even if you
conclude something wasn’t in good faith or wasn’t “otherwise objectionable,”
that’s not going to get you to a point where you’re increasing the liability of
companies who are moderating content.

I remember
the example of an online knitting company that wanted to provide a
Trump-free knitting forum. Were there any Section 230 issues there? Could there
potentially be a Section 230 issue with someone who decides they want to run a
forum that doesn’t include one particular viewpoint, as that company apparently
did?

So a quick disclaimer: I’m only speaking on my own behalf
and not the Defense Department. I’m in my personal capacity.

With that said, the First Amendment gives a private
company the ability to make these decisions. So again, this is not a Section
230 issue. This is a First Amendment issue, and so a platform can do that.
Section 230 might be invoked to be able to get the claim dismissed a bit
earlier, but it is not going to give grounds for a claim. That’s because you
don’t have a First Amendment claim against a private company, whether it’s a
knitting forum or a gigantic social media site.

There’s a limited exception for companies that basically
carry on a public function, but it’s really limited — for something like a
company town. And based on the recent case law, including a case from 2018 from
Justice Kavanaugh, that’s a very limited exception. And I think there’s just
not going to be a cause of action against them, because the First Amendment
gives private companies really significant flexibility to make those decisions.

Now, one thing we might be getting at — and I think this
is really perhaps what a lot of the Section 230 critics and tech company
critics are getting at — is the belief that these Big Tech companies are biased
and that we shouldn’t have a law that gives them special protection, that it’s
an issue of fairness. While there are real equity issues here — again, I don’t
think they should be minimized — one thing I’d say to that is Section 230
doesn’t just apply to the Big Tech companies.

I’ve been
told that they’re getting a special exemption — that this is a special deal
just for Big Tech companies.

No, it applies to anyone with a website that hosts user
content or anyone with any app or any interactive computer service. It applies
if a local newspaper has a user comment section on its website, and it has been
applied to them.

It’s for
Twitter, Facebook, the newyorktimes.com, nationalreview.com, the Federalist,
all these sites?

Yeah, anyone who has user content.

Now there are some sites that have made the choice not to
have user content, and everything is produced by the companies. Of course,
that’s not going to be covered by Section 230, because the company is the one
that’s actually producing it. But yeah, I think that the question that people
should be asking is: “Does changing Section 230 fix our problems? Will it
actually make these platforms less likely to moderate?” At least based on
the changes that I’ve seen proposed, I don’t think it would — especially when
you get to outright repealing 230 — because suddenly the platforms have
significantly increased liability. We don’t know exactly how much, because the
case law is not really well-developed outside of Section 230, but there will at
least be a fair amount more liability than we’ve had before. And so I don’t
think that the reaction of platforms is going to be, “Well, let’s start
allowing more controversial speech.”

Sen. Ted Cruz (R-TX) questions Twitter CEO Jack Dorsey remotely during a Senate Commerce, Science, and Transportation Committee on Capitol Hill in Washington, DC, U.S., October 28, 2020, via REUTERS

Right. I
think there has been very little serious thought by people who want to reform
or repeal Section 230 about what the internet space looks like in a post-230
world. I think people imagine letting a million flowers bloom, expecting a
great diversity of viewpoints.

But the other
scenario, which you just outlined, is: You get a very boring internet and very
boring social media sites. Everything will be very locked-down, and you’re
going to get a bunch of people posting cat pictures and things like that, and
that’s about it because they’d be very worried about liability.

Exactly. Also, if you’re concerned about Twitter and
Facebook having power now, this would likely be worse without Section 230. I’m
pretty certain that Facebook and Twitter are going to survive whatever new legal
standard emerges without Section 230, because they’re big and they have very
large Washington DC staffs. Many of the people on the staffs have worked in
Congress and the executive branch, and I’ve been in Washington DC long enough
to know that big companies with big DC offices tend to be able to influence how
things go better than smaller companies that don’t have a large DC presence.
And they also have more money to be able to have whatever moderation is
necessary and whatever technology is necessary to meet whatever new standards
there are without 230.

Meanwhile, there are companies that want to offer alternative platforms that
might have a lot of users but not a substantial amount of money yet, because
they’re in the startup or mid-size growth phase, so they might not be able to
afford to meet those standards. That’s why I think it could lead to even more
consolidation of where you can have your views.

Now, I don’t think it’s a terribly healthy environment
that we have right now, where we have a few platforms that are the dominant
venues for user content — I would very much prefer to see much more diversity
than we have right now and not so much domination by a few platforms. But I
don’t think that getting rid of Section 230 is going to be what solves it. That’s
my big concern: I don’t want to see even further consolidation of venues for
opinion.

Does Europe
give us any clue about what a non-Section 230 world would look like? Because I
don’t think they have their own version, do they?

No, they don’t. It varies a bit by which country you’re in. But generally, there are actually so many more restrictive court cases, and if a platform is made aware of user content that is alleged to be defamatory or illegal — and they have hate speech laws and other things that we don’t have in the United States — their choice is either to take it down or defend it in court. A rational platform is going to say, “Okay, I’m going to take it down.” That is very well likely the scenario that you’d have in the United States without Section 230 — and in fact, it’s probably the best-case scenario for the platforms.

There’s an even more dangerous scenario that comes from
one court case that I don’t think was very well decided right before Section
230 which said, “Okay, if you are a platform and you do any moderation,
then you’re responsible for all of your content regardless of whether you knew
about it or not.” And I don’t know if that would actually be what carries
today, but that would be a really dangerous scenario in the United States. I
think there would be a lot of platforms that would just say, “Okay, we’re
not going to take the risk of having user content in that circumstance.”

Obviously,
there’s a lot of frustration about content moderation policies. People say,
“Oh, if only they were more transparent.” Is that a sufficient
criticism? It seems like it’s one thing to be transparent about a set of
policies, but it seems like the policies are fluid and ever-evolving — that
we’re still in some sort of trial-and-error process to figure out how to
moderate content. So, what should these companies be doing differently?

I think it is a valid criticism. Especially until maybe 2016
and 2017, this was really a failure of the companies, in that they operated so
secretly that you really had very little insight into how they made their
decisions and why they were making their decisions. They’ve become far more
transparent than they ever were before in terms of explaining their processes
and having more detailed policies — I think partly because you’ve had much more
public focus on both the platforms and Section 230. That’s a great thing,
because you’re never going to satisfy everyone with content moderation. There
are different types of decisions. There are some things like child sex abuse
imagery where it’s a pretty clear decision as to what you need to do to
moderate that.

But when you get down to things that are more at the
margins — like heated political discussions or things that may or may not be
disinformation — it’s very useful for the platforms to very clearly explain,
“These are what our standards are.” You might not agree with the
standards, and that’s fine, but it’s good to at least have an explanation of,
“Okay, this is why we took the action that we did.” Now, that’s hard.
Take Twitter — you have thousands of tweets per second. Even if you’re only
taking action on a fraction of those, there’s going to be a lot of different
scenarios, a lot of different contexts in which you’re making your decisions,
and it might be difficult to satisfactorily explain that. But to at least give
an idea of what your framework is in making those decisions could be a big
improvement over making these decisions without a full explanation.

Are there any
substantial changes you would like to see which would make this a better law?

There are a few things. I’ll caution that I’ve been in DC
for too long, because I think my first idea is to have a congressionally
chartered commission. It’s a very DC answer to say that, but I feel like one
thing that has come out during this discussion is that there are a lot of
misunderstandings, both about Section 230 and about content moderation. And I think
part of the issue is, because we lacked transparency for so long, that we
really don’t have a tremendous amount of insight as to what’s possible and what
the different reactions would be to some of these legal changes. So we’re kind
of throwing out all of these Section 230 proposals without having sufficient
insight as to what we’re actually changing and what the effects would be.

Section 230
is not that long of a law — this isn’t Obamacare or Dodd-Frank. It’s a fairly
brief law that’s been around for a long time, and yet those who are confused
about it seem like they are impervious to explanation at times. Is it that
complicated, really?

No, it’s not as long as the Affordable Care Act but I would
wager that it’s — depending on how you look at it —probably as misunderstood,
if not more so. I think that’s because even though it’s a short law, it’s not
necessarily intuitive how it works, and there are a lot of moving pieces. You
have to get into common law theory about what the liability of bookstore owners
is, and that’s not in the law, but you need to understand that to understand
what the impact is of changing 230. So I’d like a commission similar to the
Cyberspace Solarium Commission that we just had.

Just to
establish: Here are the facts, here’s what the law says, here’s what previous
court cases have said. This is so we can operate from a common base of
information?

Yeah, and also to gather facts about what the platforms
do. I think that would be highly useful.

There are also some changes to 230 that, pending any
fact-gathering, I think would be good to consider. They’re technical changes,
but here’s the most important one: Let’s say that you posted something
defamatory about me on Facebook. Section 230 would prohibit me from
successfully suing Facebook for what you posted even if I complained to them
and they didn’t take it down. But I could still sue you. And if I sued you and
I got a judgment — including a court order that it was defamatory — at least
under the way that the California Supreme Court has interpreted Section 230,
its protection extends even to those collateral orders to take down the material.

The Twitter and Facebook logo along with binary cyber codes are seen in this illustration taken November 26, 2019. REUTERS/Dado Ruvic/Illustration

So at least right now, the way the California Supreme
Court reads it is that I couldn’t use an order in my case against you to force
Facebook to take it down. Now, many platforms do take down the material if they
do get a court order, but they don’t have to, and not all do. As long as we can
have a way of validating that it’s actually an order issued by a court, I think
that Section 230 should not in any way prohibit the takedown of material that
has been adjudicated to be defamatory or otherwise illegal. There’s just no
rational explanation for why we should protect that.

This is important for the individual plaintiffs: people
who have had horrific things written about them that are ruining their lives
and they can’t get the platforms to take it down. My biggest concern is giving
them a mechanism to have it taken down. Yes, a lot of Section 230 cases involve
companies that are upset that a consumer was angry and wrote something bad
about the service they received — I’m not so sympathetic about that; I think
consumers can and should have outlets for that — but I am sympathetic when it’s
a person who has someone angry at them who has basically ruined their life. If
this is haunting them online, they should have a mechanism to get it taken down.

Last
question: It seems that folks on the right worry that there’s too much — and
too biased — moderation. I think people on the left worry there’s too much hate
speech and disinformation, and that these companies should be more aggressive
in their moderation. So both sides seem to want something very different from
their internet, both are talking about Section 230, and I don’t know how much
crossover there is among what they want to do. Do you expect any substantial
changes to this law over the next few years?

I think it’s likely that there will be changes to Section
230. But I don’t know what those would be because I think you’re right that
there are very different criticisms of the tech companies, and I think most of
it is focused on the Big Tech companies. And I think the changes that are being
proposed by both sides don’t really mesh well together. A possible outcome is
that the compromise becomes “repeal Section 230 altogether,” because
you do have people on both sides saying, “Section 230 is the problem, get
rid of it.” You have people who say there’s too much moderation so we need
to get rid of 230, and people saying there’s too little moderation and we have
to get rid of Section 230.

I don’t think that we’re repealing Section 230 would
necessarily address many of the concerns on either side, but I do see that as
possibly what would be the easiest compromise.

That would be
a fascinating experiment.

It would. What I think platforms should do is have a Section-230-free day and operate as though Section 230 is not on the books anymore just to show folks what their experiences would be like without Section 230. I think it would be a pretty interesting experiment.

My guest
today has been Jeff Kosseff. Jeff, thanks for coming on the podcast.

Thank you.
