Competition in AI Regulation: Essential for an Emerging Industry

Competition is beneficial in many areas of life. It can lead to lower prices, higher quality goods and services, more innovation, and greater variety. When companies compete for workers, they may also improve working conditions and increase compensation. Healthy competition can also lead to good relationships between businesses, which can result in mutual benefits like networking and sharing content. On a personal level, competition can drive people to perform at a higher level and learn faster. It can also teach people to manage their nerves and bring their best effort. 


In a competitive market, innovation leads to the development of different products and services. Those offering combinations of features that consumers prefer survive and go on to be the basis for further development and innovation – including adaptation by others as they imitate successful features. Those not making the grade, or unwilling or unable to adapt, cease to be produced. Darwin noted the same process in the evolution of species. And unsurprisingly, competition likewise drives development and innovation in societal norms and institutions.

Such institutional competition and evolution are evident in sector governance and regulatory arrangements. When an industry is new, the optimal sector governance arrangements are unknown to all concerned. Firms, markets and governments must experiment to see what arrangements will work best. Just as with the actual products and services, they don’t always get it right first time. Competition helps sort out which arrangements work best for the firms, for consumers and for society. When multiple different variants compete, they reveal information about which will serve society best going forward. Nowhere is this more evident than in the competition between the EU and the US in the rules governing broadband markets. The stringent EU open access and net neutrality rules led to less and later investment in next-generation fixed line technologies than observed in the US where less stringent rules have applied.

A real risk at the outset of any new industry is that powerful governing entities will use their coercive powers to shut down competition in governance arrangements just as powerful commercial entities may crowd out their product and service rivals. Less capable variants can survive and come to dominate on the back of these powers, with society unable to benefit as much as if the best arrangements could prevail.

These very concerns should be top of mind as governments, markets and firms grapple with the new emerging artificial intelligence (AI) products and services and look for the best ways of organising the firms, supply chains and (via regulations) markets in which these will be created and exchanged. This is particularly true of the new generative pre-trained transformers (GPTs), including large language models (LLMs) such as ChatGPT and image generators such as DALL-E.

The worst possible outcome would be for a rapid convergence to a single global set of standards and regulations governing all forms of the new technologies. Such arrangements put all the AI regulatory eggs into a single basket, long before it is clear exactly where the trajectory of the industry is headed. The risk is that if the arrangements are not optimal, the emergent industry will be cut off (or its development tightly constrained in a way that reduces incentives to innovate and bring new products and services to market) for the foreseeable future.  

Yet this is precisely what may come from the European Union’s AI Act. By “going early” and developing its regulations in advance of other jurisdictions, some hope that via the “Brussels Effect” these will become the global standard for AI’s future. That is, while the EU struggles to develop and market its own AIs, it will, by market or de jure actions, see its rules incorporated into the market behaviors and laws of other jurisdictions. Already, Meta has announced that it will not launch its AI chatbot assistant in Europe because European data protection laws preclude training on European data, potentially leading to a second-class product. Apple has responded similarly.

Fortunately, the US has taken a less prescriptive approach. The National Institute of Standards and Technology (NIST) guidelines are not mandatory for private sector firms. They compete with voluntary guidelines developed by multi-stakeholder groups such as the Frontier Model Forum and the AI Alliance, not to mention international initiatives such as the G7’s Hiroshima AI Process.

Such industry-specific self-regulation, where firms can opt to comply with one or more of these sets of standards, fosters exactly the sort of competition in rulemaking that will lead to the best sets of arrangements prevailing. As the industry matures, it may become clear that one set of rules outperforms the others. This can then become the basis for formalization in state and federal laws, if necessary.

The post Competition in AI Regulation: Essential for an Emerging Industry appeared first on American Enterprise Institute – AEI.