Mandated Platform Transparency: Speech Regulation is in the Air Everywhere

By Jim Harper

Note: This post and all affiliated content are part of AEI’s Online Speech Project.

I testified this week before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law on the subject of internet platform transparency. The hearing centered on draft legislation called the Platform Accountability and Transparency Act (PATA). We’re all for transparency, right?

I went there to emphasize the privacy consequences of broad disclosure. PATA would give the National Science Foundation and Federal Trade Commission (FTC) essentially limitless power to require platforms to share data with government-approved researchers. The FTC would create privacy and cybersecurity requirements for the data disgorged to researchers. The platforms disclosing the information and the researchers getting hold of it would then be insulated from liability if they complied with the FTC’s regulations.

Jim Harper testifying before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law on May 4, 2022.

Privacy is a real issue, though, because part of privacy’s essence is having control of information about oneself. When the government takes information someone has deposited with a platform pledged to protect privacy and gives it to someone else acting under a different set of rules, that divestiture of control itself arguably violates privacy.

But never mind privacy. I came away impressed by how comprehensively the advocates of platform regulation want to regulate speech.

First, a digression on whether platforms engage in speech. As I put it in my testimony:

The internet and social media are strange but real descendants of the printing press, disembodied and given to everyone to use as much as they want. Social media companies aggregate and augment this mass exercise of expression.

The PATA legislation seeks disclosure of “algorithms,” which it describes as computational processes for

determining the order or manner that a set of information is provided, recommended to, or withheld from a user of a platform, including the provision of commercial content, the display of social media posts, recommendations of user or group accounts to follow or associate with, or any other method of automated decision making, content selection, or content amplification.

I would summarize that definition as “automated editorial choices.”

Stanford Law School professor Nate Persily is a leading proponent of the PATA legislation—so much so that he was quoted alongside its senatorial sponsors in their release announcing the bill. Persily said in his opening statement that the first and second aims of the legislation are to control the editorial choices of platforms:

The first set of purposes and goals, I think, of transparency, which is sometimes undersold, is that it will actually change the behavior of the firm. To some extent, I get criticism a little bit. When you emphasize transparency that it’s seen as sort of weak legislation because it’s not, you know, it’s not breaking up the companies, or it’s not going right after content moderation. But once the platforms know that they are being watched, it will change their behavior, alright? They will not be able to do certain things in secret that they’ve been able to do up [to] now.

Second, as Sen. Coons mentioned, I think it will lead them to change their products, right? Because once we have a greater appreciation for what’s actually going on in these firms, those on the outside can do research that a lot of the insiders are not doing on their products.

There are plenty of problems that our social media platforms create or exacerbate. (The error in saying it that way is assuming that the platforms are responsible and not their users. Responsibility ranges widely depending on which problem you’re trying to address: disinformation, extremism, children’s and teens’ mental health, or something else.)

I’m dubious of the premise—as I put it in my testimony—that “private, competitive communications platforms are a public resource through which researchers and government can tune society.” My sense—again expressed in my testimony—is that

personal, family, and community responsibility is what gets us through these challenges. I welcome orthodox university research as one part of finding solutions, but it is as likely that we collectively absorb what the new media environment means and develop strategies en masse for getting what is good from social media while mitigating their ill effects.

I thought going in that platform transparency was chiefly aimed at finding solutions to internet pathologies. But a principal aim of PATA and the transparency effort is to change both the behavior of platforms and their products. This is not direct regulation of platforms’ editorial choices, but it is as close as you can get indirectly. There is a rich body of First Amendment doctrine on “chilling effects.” The text of the amendment bars “abridging” free speech. Abridgment is what “chilling effects” doctrine fleshes out.

The effort to force transparency on platforms is thus a privacy issue and a free speech issue. My testimony included suggestions for improving transparency through market pressure and by decreasing the regulatory pressures that probably make platforms risk-averse.
