Can AI Regulation Really Make Us Safe(r)?

In the late nineteenth century, the latest new-fangled invention was the motor car. In both Europe and the United States, regulations required a man waving a red flag to walk ahead of the car to warn road users and bystanders of its approach. This also ensured that the car could travel no faster than walking pace, even though the primary known benefit of the “horseless carriage” powered by an internal combustion engine was that it could travel faster than the horses it was set to replace (that it didn’t foul the streets as the horse was well known to do, creating a health and safety hazard, came a very close second in the benefit stakes).

This is a classic example of the Precautionary Principle in practice: when implementing a new scientific or technological innovation, the effects of which are to some extent unknown, we should proceed with caution, at least until we know more about the innovation (the scientific endeavor, with its perpetual quest for new knowledge, is geared to ensure we do). This does not mean the technology should be banned – especially if there are substantial societal benefits to be gained. Rather, it should be implemented with some guard rails in place, to protect individuals and society from the known harms and uncertainties inherent in its implementation. So long as the societal benefits are large, and the costs of the potential harms are minimized by the precautions taken, implementation can be deemed acceptable to society.

However, the “man with the flag” illustrates that the nature of the precautions taken stems from what we already think we know about the technology’s harms and benefits, not from what we can’t (or don’t yet) know about it. And even then, sometimes obvious benefits are eschewed because society is not prepared to tolerate the costs of risks already presumed known. But humans are boundedly rational; they don’t always get the balance right. Often this occurs because the risks regulators seek to manage are those they already know well, rather than the uncertainties the innovation poses – for managing for uncertainties would mean the innovation was never permitted at all.

A well-known bias in decision-making under uncertainty is for a decision-maker faced with an opaque, complex situation to substitute for it a familiar situation for which remedies are already available. Again, the “man with the flag” illustrates. Contemporary regulators were well aware of the dangers posed to the public by runaway horses and carriages, and the likelihood of a runaway increased with the speed of travel. Restraining car speed protected the public from the risk of harm caused by a runaway car. Warning bystanders of the advancing vehicle also prepared them to proceed with caution themselves. It used current knowledge to address a known current risk (of horse-powered transportation).

But ironically, the motor vehicle driver (once trained) exerted significantly more control over the vehicle than the horse handlers did. And the public had more to fear from the sometimes-unpredictable behavior of self-willed horses than from an internal combustion engine with no will of its own. The regulations addressing fears related to horses both delayed the gains from faster travel and engendered a false sense of security in the public: as long as the man with the flag warned them, they had no need to learn for themselves how to manage their own behavior in the presence of motor vehicles traveling faster than horses – a very necessary skill for when the regulatory rules were relaxed.

These lessons are instructive for twenty-first-century regulators of AI technologies and the public arguably “protected” by them. Both the European Union and United States regulatory regimes take a precautionary risk management approach. But are the risks being managed those relating specifically to the features of the new technologies, or are regulators using proxies derived from experiences with other technologies, because these are “known” to “work” in their original context? Risk management tools used in AI regulation, by their very nature, presume known consequences and quantifiable risks of new technologies. But is this really the case? We may not be making the environment any safer from the real underlying risks the new technology poses, while at the same time delaying the accrual of its benefits, giving the public a false sense of security, and discouraging their adaptation to a new environment.

By way of warning: in 2024, we now think we know that the existential risk of motor vehicles was carbon emissions from their fuel (albeit, unexpectedly, the replacement of horses created a shortage of agricultural fertilizer, necessitating the innovative use of fossil-fuel by-products to plug the gap) …
