5 questions for Melanie Mitchell on the challenges of artificial intelligence

By James Pethokoukis and Melanie Mitchell

Artificial intelligence has come a long way in the past half century. But what do we really mean by words like “intelligence” and “thinking” when we apply them to machines? And how should we think about the promises (and worries) of game-changing AI technology in light of past predictions that have yet to come to fruition? To answer those questions, I spoke with Melanie Mitchell on a recent episode of “Political Economy.”

Melanie is the Davis Professor at the Santa Fe Institute, a non-profit research center for complex systems science. She is the author of six books, her latest being “Artificial Intelligence: A Guide for Thinking Humans,” released in 2019. In 2021, Melanie authored “Why AI is Harder Than We Think,” which describes the fallacies that underlie overly optimistic AI predictions.

Below is an abbreviated transcript of our conversation. You can read our full discussion here. You can also subscribe to my podcast on Apple Podcasts or Stitcher, or download the podcast on Ricochet.

Pethokoukis: When we talk about AI today, what are we talking about compared to what we were talking about 20 years ago?

Mitchell: The term AI has changed its meaning throughout its history. It started out very much trying to use logic and logic-like deductive inference as the way to model intelligence. Symbolic AI was really the first big push in AI, trying to capture intelligence by explicitly programming logical abilities into machines. And that kind of failed. So that’s when people started building these so-called expert systems, where they would go out and interview experts in a field, try to get all the rules that the human expert used to perform the task of diagnosis or whatever the task was, and then program those rules into a computer. Those also failed to a large degree, because it turns out that a lot of the actual rules that experts use, or the knowledge that they use, is not conscious. And then neural networks, which simulate in a very rough sense the way the brain works, with simulated neurons and simulated connections between the neurons, became popular. And these were systems that you didn’t program. They learned from data, from being exposed to data.

Chinese Go superstar Ke Jie plays a Go game with a robot arm operated by an AI program in Fuzhou city, southeast China’s Fujian province, 27 April 2018. Via REUTERS

You hear a lot about these ups and downs in AI optimism, and people call them “springs” and “winters.” What is the season we’re in right now?

We’re now in an AI spring. The idea with that is, it measures how optimistic people are, how much funding there is, the predictions people are making about near-term artificially intelligent cars and robots and so on. Often these AI springs are followed by AI winters, where the promises that people are making, like “You don’t have to get a driver’s license anymore because you’ll be driving around in a self-driving car,” don’t happen. The promises are not fulfilled, the funding dries up, and people become disappointed and think, “Okay, AI doesn’t work.” So we get these cycles where there’s some new technology that has a lot of promise, people often overpromise its applications, and there’s a lot of optimism until suddenly people become disappointed. And then an AI winter happens. So there’s a lot of debate now over whether we’re going to have an AI winter after this very exuberant AI spring.

In “Why AI is Harder Than We Think” you address four fallacies that can make AI seem easier than it really is. One of those fallacies is “easy things are easy, and hard things are hard.” Could you explain what you mean by that?

There are certain tasks that we humans think of as very hard, as taking a lot of intelligence. One example might be playing chess at a grandmaster level. We deify chess grandmasters and think of what they do as requiring a huge amount of intelligence. And yet it turns out that the game of chess is much easier for computers than a game like tag that you might play on a playground, because robots have trouble navigating, they often have trouble tracking where people are, they have trouble predicting their movements, and so on. The easiest game for a four-year-old child turns out to be much harder for a computer than the hardest game for a human. So this is the idea that things that are easy for us are often hard for computers. And if a computer does something that’s really hard for us, we assume it’s going to be able to do all the things that are easy for us, but that’s actually not the case at all.

Another fallacy you address in the paper is the allure of wishful mnemonics. What does that mean?

We say “machine learning,” and we anthropomorphize that term and say it’s similar to human learning. And yet it’s really different from human learning, because, for one thing, if a child learns something, you assume they’ll be able to apply that knowledge in contexts other than the one where they learned it. If they’ve only seen dogs outside and they learn what a dog is, they can still recognize a dog when it’s inside. This is not necessarily the case for machine learning. This is one example of a wishful mnemonic, where it’s just a term that we use to describe something in machine intelligence that also applies to human intelligence, and we assume that the meaning carries over from one to the other. Another example is neural networks. We talk about neural networks as being like the brain, but they’re actually quite different from the brain. That term “neural” sometimes gives people the impression that they’re more like the brain than they are.

Does super optimism about AI go hand in hand with concerns? Elon Musk is optimistic, but he also has a lot of scary stories to tell.

I would say that there are some people who are very optimistic, in the sense that they believe we’re on the brink of creating true AI. And a lot of people who believe that are also worried about AI systems not having the same values as we do. They talk about aligning AI’s values with ours, so there’s a group called the AI alignment movement. There are also people who are like me, not as sanguine that we’re going to get to full artificial intelligence anytime soon, and yet who still fear some of the current issues in AI, like bias in these machine learning systems and the fact that some of the systems that are being granted autonomy really aren’t smart enough to have that kind of autonomy. So it’s sort of the opposite of the alignment people’s worry: It’s not that the systems are too smart, it’s that they’re not smart enough.