The challenge of encoding values in AI

By John P. Bailey

The White House’s Office of Science and Technology Policy
recently launched a process intended to establish a “bill of rights” that would
provide consumer protections around artificial intelligence (AI) technologies.
The Biden administration deserves credit for jumpstarting this conversation,
given that many of these technologies are outpacing policymakers’ understanding of how they work, much less their ability to
devise regulatory frameworks around them. However, there is a risk that the
administration’s approach could reflect one of the very problems it is attempting
to address: bias.

At the center of the administration’s effort is a plan to
develop a set of protective measures against disparate outcomes. Training
machines on past examples can embed the biases in that data into models, which
then produce biased outcomes. For example, Google Cloud Vision’s AI system
labeled an image of a dark-skinned individual holding a thermometer as a “gun,”
while a similar image of a Caucasian individual was labeled “monocular.” One
large study found differences in mortgage approval rates among underserved
groups because minority and low-income borrowers tend to have thinner credit
histories, which gives the algorithms used by lenders less data to work with.

But the administration’s effort misses a more difficult
challenge: how to determine the specific values and ethical norms that
machines should be trained on or encoded with.

This tension can be seen in the experiment conducted through
the Massachusetts Institute of Technology’s Moral Machine project. There,
individuals are presented with moral dilemmas in which an autonomous vehicle
must choose the lesser of two evils, such as killing several (or all) of
its passengers or five pedestrians. It is the classic trolley problem thought
experiment, but with several nuances. Should the car be programmed to always
save its passengers, or are there times when a passenger should be sacrificed
to protect a group of pedestrians? Does it matter whether the individual is a
baby or an elderly person? A businesswoman or a homeless person?

Source: Massachusetts Institute of Technology’s Moral Machine

The website recorded 30 million decisions made by more than 3
million individuals. In a published study, researchers
found strong moral preferences for sparing more lives and for saving younger
individuals. But the experiment also surfaced important differences along
political, gender, and regional lines. For example, progressives were
more utilitarian in their decisions, meaning they were less inclined to save
passengers. Conservatives were more in favor of saving individuals of perceived
high social value and of saving humans over pets.

Interestingly, there were also differences among respondents in
different states. Residents of Mississippi showed stronger utilitarian preferences
than those of California. Florida residents tended to favor saving the passenger
in most cases, while Oregon residents showed the opposite preference.

Source: Nature.com

While this framing focused on the extreme, rare edge
cases of autonomous vehicles, it offers a glimpse into one of the
greatest challenges confronting AI: how to reconcile ethical
preferences shaped by deep cultural differences that vary across regions.

This is the blind spot in the administration’s approach.
Traditionally, the institution of the White House could lead important
discussions to seek consensus and build bridges across differences. But that
has become less true in recent years as extreme polarization has enveloped our
political institutions. The risk is that the only people now willing to
engage with the administration’s comment process are those who are already
philosophically aligned with it. Instead of building consensus, the process would
ratify a distinct, narrow subset of views, values, and ethical preferences.

There are four ways to address this. First, the
administration should quickly establish this as a bipartisan effort that
proactively seeks out and reflects perspectives from across the ideological
spectrum. Second, we can draw on the lessons of bioethics councils, which have
debated complicated questions about medical research and procedures in which
competing values came into conflict. Third, the effort needs to
get outside the Beltway and Silicon Valley and engage communities throughout
the country. Finally, the effort should rely not only on technology experts but
also on philosophers and faith leaders who wrestle with ethical questions.

Our governing institutions need modernization not only in
their technical capabilities to regulate emerging technologies but also in
their ability to facilitate proactive discussions around the ethical norms we
want these technologies to live by.
