Who Is Responsible When AI Bulls Escape into Digital China Shops?

In Germany, Kristallnacht memorial day calls for deliberate reflection on the events of November 9–10, 1938—“the night of shattered glass”—when anti-Jewish sentiment fed by the Nazi Party escalated into coordinated riots, resulting in widespread arrests of German Jews and the destruction of their businesses, synagogues, and homes. The date marks the point where underlying anti-Semitic prejudice transformed into overt policy, and its anniversary justifiably weighs heavily on the minds of Jewish individuals, families, and communities worldwide.   

In modern Western societies, it is becoming widely accepted that the state and society as a whole have a responsibility not only to proactively prevent abhorrent acts from occurring in the first place but also to avoid any actions that may cause offense or psychological trauma to vulnerable populations or reopen old wounds. One consequence is a demand for explicit policies to prevent the dissemination of potentially harmful content on social media (e.g., the Christchurch Call) and for decisions holding platform operators accountable when content they host causes such offense (e.g., Australian court decisions on defamation liability).

It is interesting in this light to consider who should be held responsible when, despite apparent best efforts, content likely to cause harm or offense escaped onto a digital platform on Kristallnacht memorial day 2022. The incident arose when Kentucky Fried Chicken (KFC) Germany sent users of its app a push notification generated by an algorithm that scans the calendar for celebrations and matches them with product promotions. The message exhorted recipients to “Commemorate Kristallnacht—treat yourself to more soft cheese and crispy chicken. Now at KFCheese!”
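KFC has not disclosed how its system works, but the reported behavior suggests a pipeline that treats every notable calendar entry as a promotional occasion. The following Python sketch is a hypothetical reconstruction (all names and data are illustrative, not KFC's actual code) showing how one missing distinction produces the offending message:

```python
# Hypothetical reconstruction of a naive calendar-to-promotion pipeline.
# All names and data are illustrative; KFC's actual system is not public.

from dataclasses import dataclass

@dataclass
class CalendarEntry:
    name: str
    # The naive model: a single flag for "noteworthy day," with no distinction
    # between a celebration (party) and a commemoration (solemn reflection).
    is_notable: bool

PROMOTIONS = ["Treat yourself to cheese and crispy chicken at KFCheese!"]

def build_push_notifications(entries: list[CalendarEntry]) -> list[str]:
    """Pair every notable calendar day with a promotion.

    This is exactly where the failure occurs: "notable" conflates
    celebrations with commemorations, so Kristallnacht is matched
    to a party-style message like any other holiday.
    """
    return [
        f"{entry.name}: {promo}"
        for entry in entries if entry.is_notable
        for promo in PROMOTIONS
    ]

# A single missing attribute, say a tone field of "celebrate" vs. "commemorate",
# is all that separates this output from an acceptable one.
print(build_push_notifications([
    CalendarEntry("Oktoberfest", is_notable=True),
    CalendarEntry("Kristallnacht memorial day", is_notable=True),  # should never match
]))
```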


The unfortunate incident highlights two things. First, it exposes the limitations of artificial intelligence (AI) that has apparently not been trained to distinguish between events humans perceive as celebrations (warranting a party) and commemorations (warranting deep, considered personal reflection). Second, it illustrates the responsiveness of both the offended recipients, who immediately whipped up a storm on social media, and KFC management, who, as soon as the message was drawn to their attention, issued a second push notification apologizing for the first and promising to do all that was possible to prevent another such occurrence.

However, it also raises the question of who is ultimately responsible. How far should content creators and distributors go to avoid causing offense, and is an acceptable balance between caution and communication even possible? Clearly, in this case, the algorithm used was not sufficiently sophisticated or well trained to recognize the key differences among important calendar days. No amount of algorithmic transparency (as called for by some) can prevent unexpected outcomes from occurring. It is ultimately a human responsibility to ensure that unacceptable content created by AI is not distributed.
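If the backstop is human, the natural engineering response is a review gate between generation and distribution. Below is a minimal sketch of such a gate, with an assumed (and necessarily incomplete) list of sensitive terms; nothing here reflects KFC's actual pipeline:

```python
# Hypothetical human-in-the-loop gate between AI generation and distribution.
# The sensitive-terms list and all names are illustrative assumptions.

SENSITIVE_TERMS = {"kristallnacht", "holocaust"}  # necessarily incomplete

def requires_human_review(message: str) -> bool:
    """Flag messages touching known-sensitive topics for a human screener.

    The list can never be exhaustive, which is the article's point:
    a gate like this reduces, but cannot eliminate, the chance of offense.
    """
    lowered = message.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)

def dispatch(message: str, send, hold_for_review) -> None:
    # Route flagged messages to a human queue instead of sending automatically.
    (hold_for_review if requires_human_review(message) else send)(message)

dispatch(
    "Commemorate Kristallnacht—treat yourself to crispy chicken!",
    send=print,
    hold_for_review=lambda m: print("HELD FOR REVIEW:", m),
)
```

Even this simple sketch makes the limits of the approach visible: the gate is only as good as the term list and the screener behind it.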

However, humans are fallible too. And where potential offense is concerned, we need to consider that content may offend some people while being perfectly acceptable to others, who may not even detect the offensive nuances. While the idea of partying on Kristallnacht may be beyond the pale for most Germans and Jewish communities, the mixing of milk and meat products in KFC's message, a violation of kosher food practices, will offend observant Jews worldwide. That significance is likely lost on secular individuals unschooled in such traditions.

Consider, for example, a human screener at KFC (if indeed there was one) who may have let the content through. Since those doing the screening cannot realistically be aware of every possible offense to every individual or group, the occasional isolated embarrassment must be expected. Likewise, it is impossible to ensure that an AI has been sufficiently prepared when the training data cannot be guaranteed to include all possible examples of offense in the first place.

Of course, in an AI world, firms such as KFC may strive to collect enough personal information about their app's users to customize messaging in a way that minimizes the likelihood of offending specific individuals. But this personalized approach stands in stark contrast to that of advocates such as the promoters of the Christchurch Call, who would use algorithms to preclude all material of a specific type (e.g., terrorist content) from being distributed to anyone, so as to prevent harm to vulnerable individuals. Which should be preferred?
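The two approaches amount to different suppression rules applied at send time. A schematic contrast, assuming (purely for illustration) a platform that tags messages with topics and, in the personalized case, holds per-user sensitivity profiles:

```python
# Illustrative contrast between the two filtering philosophies discussed above.
# Both predicates are schematic; neither reflects any real platform's design.

GLOBALLY_BLOCKED_TOPICS = {"terrorist content"}  # Christchurch Call-style blanket rule

def blocked_for_everyone(message_topics: set[str]) -> bool:
    """Blanket approach: suppress the message for all users if it
    touches any globally prohibited topic."""
    return bool(message_topics & GLOBALLY_BLOCKED_TOPICS)

def blocked_for_user(message_topics: set[str], user_sensitivities: set[str]) -> bool:
    """Personalized approach: suppress only for users whose collected
    profile marks these topics as sensitive."""
    return bool(message_topics & user_sensitivities)

topics = {"kosher dietary law", "kristallnacht"}
print(blocked_for_everyone(topics))                 # False: not on the global list
print(blocked_for_user(topics, {"kristallnacht"}))  # True: blocked for this user
```

The blanket rule needs no personal data but catches nothing outside its fixed list; the personalized rule catches more for profiled users, at the price of collecting the profile in the first place.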

In sum, this is a complex situation in which simplistic one-size-fits-all solutions simply don't exist. Perhaps the lesson of this unfortunate case is that, since harm can and does occur, processes for prompt responses that limit the damage are not bad second-best policies, lest the bull continue to run amok.
