AI Regulation, Preemption, and Democracy

Sometimes a hearing about transparency ends up being about free speech. For me, a recent meeting of the House’s bipartisan task force on artificial intelligence ended up being about democracy. In tech and in general, I believe our country needs to slow or stop the aggrandizement of federal power that is eating away at our wonderful system of government.

Earlier this year, the Speaker and the Democratic leader of the House set up a bipartisan task force to look into artificial intelligence. The task force is chaired by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA). Kudos to all involved for a project at odds with Congress’s increasing reputation for unproductive and vitriolic partisanship.


But on the legislative merits, warning lights and sirens should be going off, as nobody seems to have a handle on what AI is. People are discussing this congeries of databases, processing techniques, and computing tools as though it were a discrete thing.

I am dubious of AI as a concept. It is essentially a marketing term. Regulation aimed broadly at this topic will almost certainly be distended, ineffectual, and costly if there are no sharp delineations of what any regulation covers and what precisely it prohibits or requires.

An issue of particular interest to Rep. Obernolte, I take it, is what states’ involvement should be in regulating AI. I was pleased to speak at the task force session on that topic. Federalism was the issue that brought me to Washington almost 30 years ago. 

I see the tension this way: On the one hand, we should preserve state power, because the more local level of government is superior in democratic terms. On the other hand, states’ AI regulation is likely to degrade the efficiencies of a large national market. Which do you want? A better democracy or a more efficient business environment? 

I have openly doubted that Congress should take power over the privacy issue from states through a national privacy law. This is because I don’t think people would get better privacy but also because ever-increasing federal power tears up the agreement in our Constitution to have relatively decentralized government in these United States. The idea of distributed power is right there in the name “United States.” As I wrote in a recent blog post, “The ‘rejectionist’ strain in American politics epitomized by the Trump insurgency may be borne at least partly of a remote Congress and even more remote bureaucrats officiating highly personal interests and values like privacy.” 

That post introduced a paper that I recently cowrote with Geoff Manne of the International Center for Law and Economics arguing for an alternative to federal preemption of state law. Contractual choice-of-law provisions could allow businesses to avoid an inefficient regulatory patchwork while fostering a double competition: among businesses to please consumers and among states to supply the best law.

That thinking doesn’t apply directly to AI, which is an array of tools, not a set of values, as privacy is. I’m more inclined toward preemption of state AI regulation. Indeed, states might rightly be excluded from regulating AI as such, because it’s a cardinal regulatory sin to address oneself to technologies rather than outcomes. 

That idea—technology-neutral regulation—solves the problem highlighted above. You don’t have to figure out what AI is if you avoid tech-specific regulation and just work to reduce bad outcomes produced by any technology. Congress should limit itself to technology-neutral regulation as well. 

Leave it to me to get all corny and principled when people are trying to write federal statutes on a hot tech subject. I love making a perfectly exciting issue boring through the application of constitutional and legal theory.

Speaking of which, we have perfectly good, technology-neutral law already, in the form of state common law. Common law is what requires people to keep their promises, keep their hands to themselves, refrain from stealing others’ stuff, and pay damages to victims they harm, whether they do it with cool tech like AI or with an old lawnmower. Common law offers local control, but it is largely uniform nationwide.

If stemming the flow of power to the federal government is boring, I’ll take it. The alternative, wrought by eroding democracy, I’ve witnessed in my professional lifetime: In late 1995, sitting in a congressional office, I took calls from people arguing that the government shutdown authored by President Bill Clinton and the new Republican House should last forever. In early 2021, people of a like sentiment marched to the Capitol, and some went inside.

But the direct question was, should Congress preempt state AI regulation? As I said to the task force from a rare position of uncertainty, “Hello from the mushy middle.”
