Regulating AI, Hypothetically and in Reality

Let’s consider two hypothetical scenarios.

First, an artist creates a new work. Copyright law protects the artist’s intellectual property, allowing the artist to release the work for public enjoyment. Copyright law also permits others to make derivative works based on the original piece. Society is enriched by the existence of both the original and the derivative works.

But what if another law existed, holding the artist legally liable for any offence or harm caused by any of the derivatives created from the original? This would surely have a chilling effect on the number of new works created. Few would be released for public enjoyment. The creator would retain tight contractual control over who could view or use the work, and the creation of derivatives would become the exception rather than the norm. Society would be culturally much poorer as a result. Consequently, such derivative liability laws are extremely uncommon.

Second, a manufacturer makes a good intended for one specific purpose, say, a motor vehicle for personal transportation. The manufacturer is required to take due care to ensure that the good is fit and safe for its intended purpose.

But what if a law existed requiring the manufacturer also to anticipate and prevent the use of the good for any number of purposes other than the original one? The manufacturer could, for example, be held legally liable for failing to anticipate or prevent the use of one of its vehicles to transport a suicide bomber and a bomb into a crowded marketplace. If such a law existed, the manufacturer would make items available for use only where it could maintain strict contractual control over them once they left the production line. Creative beneficial uses, as well as harmful ones, would be foreclosed; once again, society is the loser. So such laws don’t usually exist.

Yet precisely such provisions characterize both actual and proposed regulations constraining the development and deployment of artificial intelligence (AI) applications. If they remain in place, they will have exactly the chilling effects described above and will deprive society of significant benefits, an outcome already anticipated by opponents of the California AI bill SB 1047.

Most AI regulatory regimes (e.g., the EU AI Act) and voluntary standards (such as the NIST AI Risk Management Framework) require AI developers and deployers to anticipate the potential risks arising from their applications, to monitor their use actively, and, in the California case, to be able to intervene and shut down an application if a sufficiently adverse incident occurs. To meet this requirement satisfactorily, an application can never leave the direct control of the original developer. The developer cannot make the application publicly available in an open-source market for further innovation, for fear of being held liable for the consequences of another’s subsequent actions. Neither can the producer confer a “free ownership stake” (as per Demsetz’s property rights), under which the owner is free to exercise control choices as occurs in the sale of a manufactured vehicle, for fear of the application being used for a purpose not already considered in the risk management framework. Such regulations can succeed only if these applications are confined to tightly closed communities, with limited scope for entry or threats to control. Hardly a vibrant environment conducive to continual innovation. Might this be why some AI developers are not averse to increased regulation of their sector?

However, the AI application ecosystem has already moved far beyond a simple structural model of producing and selling manufactured goods that can be controlled using risk management-based safety regulation. Many AI developers both use open-source elements as inputs to their applications and supply their outputs back to open-source software communities. They do so in full anticipation that their models will be adapted and improved to generate new variants that benefit society, in just the same manner as the creators of copyrighted works anticipate that their works can be adapted to create new and different benefits. These developers do not control all elements of their input supply chain. Neither can they control all downstream uses and users, so it is futile to endeavour to hold them to account for either.

Regulators and legislators must recognize that if AI is to deliver on its innovative and wealth-enhancing promise, developers alone cannot be held responsible for all consequences. If they are, then the chilling effects will not simply be hypothetical; they will be real.
