Can Computers Catalyze Centralization?

Reflecting on the USSR, Paul Mason asks in Postcapitalism: A Guide to Our Future (Macmillan, 2017):

What if the problem with the Soviet Union was that it was too early? What if our computer processing power and behavioral data are developed enough now that central planning could outperform the market when it comes to the distribution of goods and services?

It is a fascinating counterfactual exploration. The question is especially relevant given the apparent disenchantment with capitalism building since the Great Recession and a trend toward increasingly centralized firms (think Google, Amazon, etc.) and institutions (e.g., global governance of taxation and climate change management). Is this sentiment not just reactionary politics but the natural consequence of a revolution in the processing and use of information?


F. A. Hayek observed in The Road to Serfdom (George Routledge & Sons, 1944) that socialism (and hence Communism) requires central economic and social planning, that central planning leads to totalitarianism, and that the information relevant to planning is dispersed throughout society. It is costly (if not impossible) for centralized authorities to gather all the information they would need to organize economic and social interactions as efficiently as imperfect, decentralized (i.e., market-based) alternatives do. Without a doubt, however, advances in computers and information-processing technologies have markedly altered that cost-benefit trade-off.

Specifically, armed with artificial intelligence algorithms and terabytes of data amassed from end users' engagement with an army of online apps, central entities arguably might now know enough to tip the balance back toward the levels of centralization observed not just in Communist states but also in many Western economies before the neoliberal reforms of the 1980s.

Equally, a greater appetite for centralized control has emerged post-2000, as politicians and populations appear engulfed in growing uncertainty on many fronts: economic, environmental, and geopolitical, to name a few. As psychologists Daniel Kahneman and Amos Tversky, legal scholar Cass R. Sunstein, and economists John Kay and Mervyn King have demonstrated, when faced with great uncertainty (or chaotic circumstances), humans tend to cede responsibility for making decisions to a "higher" or more centralized power (e.g., political leaders). Moreover, those charged with making decisions tend to be biased toward taking action, any action, over waiting to gain more information, even when waiting would be the rational option. Being seen to be doing something is an important signal, to those on whose behalf the decision is being made, of being in control and exercising strong leadership. (Of course, these same scholars also warn about the tendency to overassess the magnitude of low-probability, high-cost outcomes and underassess higher-probability, lower-cost outcomes, often with deleterious consequences.)

So, are these two forces—lower-cost computerization and uncertainty—leading inevitably to centralization, reminiscent of that observed in the 1930s? And will it succeed this time, when it did not previously, because the cost-benefit equation is different?

The recent natural experiment in centralization provided by COVID-19 offers some insights. To be sure, when a crisis emerges, there is usually a short-term need for someone to assume coordination of the immediate response, if only to preclude the potential for chaos. However, the extent to which a centralized entity can continue to manage all economic and social activity efficiently or effectively from then on must be questioned.

The reason is that all the information gathered and algorithms developed in the past are of little value when confronting a truly unexpected and unprecedented situation. When there simply is no legacy information on the situation faced, centralized authorities are exposed. Centralized systems operate on rules in which one size fits all. They are sustainable when they are cheap to enforce. But when the population is diverse and the rules are a poor fit, the risk of breaches rises and enforcement costs increase. The more exceptions that are created, the higher the enforcement costs. Eventually, enforcement costs overwhelm any benefits from centralized operation, and either some form of planned decentralization occurs or revolution leads to the same result.

The primary long-run weakness of centralized systems thus lies in their one-size-fits-all approach. When uncertainty and diversity prevail, "one size" is unlikely to be optimal. Variety, which is fundamental to decentralized systems, is costly, so centralized systems rarely tolerate it. Portfolio theory suggests that multiple small, low-cost options offer a better chance of finding the best response in an uncertain environment; the "best" can then be promoted once identified, as the sketch below illustrates.
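To make that portfolio intuition concrete, here is a minimal, purely illustrative simulation sketch (not drawn from the address itself): a single central rule is calibrated to legacy data, while a decentralized system scatters many small, cheap options around the same experience and promotes whichever performs best once an unprecedented shock arrives. The quadratic loss, the shock distributions, and the parameters n_options and spread are arbitrary assumptions chosen only for illustration.

```python
import random

random.seed(42)

def loss(response, shock):
    """Loss grows with the distance between the chosen response and the shock actually faced."""
    return (response - shock) ** 2

def centralized(shock, history):
    # One rule for everyone, calibrated to past experience (the historical mean).
    rule = sum(history) / len(history)
    return loss(rule, shock)

def decentralized(shock, history, n_options=20, spread=3.0):
    # Many small, cheap experiments scattered around past experience;
    # the best-performing option is identified and promoted after the fact.
    center = sum(history) / len(history)
    options = [random.gauss(center, spread) for _ in range(n_options)]
    return min(loss(o, shock) for o in options)

history = [random.gauss(0.0, 1.0) for _ in range(100)]   # "legacy" data from the old regime
shocks = [random.gauss(4.0, 2.0) for _ in range(1000)]   # an unprecedented new regime

central_loss = sum(centralized(s, history) for s in shocks) / len(shocks)
decentral_loss = sum(decentralized(s, history) for s in shocks) / len(shocks)

print(f"average loss, single central rule:  {central_loss:.2f}")
print(f"average loss, portfolio of options: {decentral_loss:.2f}")
```

Under these assumptions, the portfolio of options typically incurs a much smaller average loss than the single rule once the environment drifts away from the legacy data; the gap narrows as future shocks come to resemble past experience.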

Finding the sweet spot on the centralization-decentralization continuum is never easy and remains a moving target. Allowing it to evolve seems better guidance for governance design than trying to prescribe it in advance, as the future is always uncertain. Joseph Stalin would be sorely disappointed.

This post is based on an invited address to the Law and Economics Association of New Zealand, of which the author is a Fellow, on December 6, 2022.
