The End of Moore’s Law and the Future of Computers: My Long-Read Q&A with Neil Thompson

By James Pethokoukis and Neil Thompson

Moore’s Law, which states that the number of transistors on a microchip doubles every two years, has fueled rapid computing gains since the mid-20th century. But will this law last forever? Today’s guest, Neil Thompson, thinks its end is near. I’ve invited Neil on the podcast to explain why Moore’s Law may be coming to an end and what that means for productivity growth and continued innovation.

Neil is an innovation scholar in MIT’s Computer Science and Artificial Intelligence Laboratory, a research scientist at the MIT Initiative on the Digital Economy, and an associate member of the Broad Institute.

What follows is a lightly edited transcript of our conversation, including brief portions that were cut from the original podcast. You can download the episode here, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.

Pethokoukis: Let’s start with a basic question, and then maybe a somewhat harder question: What is Moore’s Law, and what would the world look like today if it didn’t exist?

Thompson: So Moore’s Law is an incredibly important trend, which is sort of used to talk broadly about all of the improvement in computing we’ve had in the last five or six decades. The actual origins of it, though, come from basically the miniaturization of computer chips and the elements that are on computer chips. And it was kind of a neat moment where Richard Feynman, the Nobel Prize-winning physicist, gave this speech back in 1959. By the way, this was an after-dinner speech, if you can believe it. This was like, you’re at a conference and someone just said, “Hey, by the way, why don’t you say a few remarks?”

That’s great.

And in this talk, he says, “Okay, let me tell you about nanotechnology. Because we’re doing all these things in this sort of regular world, and I think we could just keep miniaturizing stuff all the way down to the point where it’s just a few atoms.” And he said that about computers, right?

He said, “We can make them and these parts will be really small, like 10 or 100 atoms big.” And this was a pretty remarkable thing that he said. It’s a little hard, I think, to get a sense of how big the difference is between our everyday lives and that many atoms. But it’s roughly proportional to, if you were building things the size of the Earth, and you’re like, “You know what? I think we could build that the size of a tennis ball.” And that really laid out a roadmap for us as we improved our technology to say, “We could keep miniaturizing.” And it turned out that as we did that, we were able to put more transistors on our chips, which meant we could do more and we could run our chips faster.

And a huge amount of the revolution we’ve had in IT really comes from this. If we were still using the computers of those early days, we could do almost none of the things that we’re doing today. So it has had an enormous effect on society. And yeah, it has really been very transformational.

I’m old enough that I’m pretty sure I may have watched some sort of film strip or something in school about computers, and they were still pretty big. So this is probably like the ‘70s. It may have been in black and white, so maybe it wasn’t state of the art. But it was classic: giant, monolithic computers with punch cards and things like that. So maybe we’d still be in that world. I don’t know.

I remember those moments too. My mother used to do some of her research with a computer and she’d come back with extra punch cards when they were getting rid of the computer, and we had these things as our notepads.

That’s great. If I did a news search on Moore’s Law and I searched with a phrase like “coming to an end,” I would find many stories over many years about the end of Moore’s Law. Have all those reports of its death been greatly exaggerated, or are we finally there?

Yes, you’re absolutely right. You can go back decades and find people saying, “Well, this is going to be a problem,” or “That is going to be a problem.” And actually, it’s a real credit to the technologists and engineers that they were able to push through that and get past it. But since 2004, what is clear is that we’ve lost many of the benefits of Moore’s Law.

Let me make the distinction here, because when Gordon Moore actually made up this law, it was really about the number of transistors you could fit on a chip, which is a very specific, technical thing about what you can do. But what it translated to, generally, was this incredible speed-up in the capacity of our chips and how fast we could run them. And so we sort of have taken to calling all of those things Moore’s Law. But in practice, actually in 2004–2005, we lost, I think, one of the most important parts of that, which is the speed-up on our chips.

U.S. President George W. Bush presents Gordon Moore with the Presidential Medal of Freedom at the White House, July 9, 2002. REUTERS/Hyungwon Kang HK

So chips at that point were about 3 gigahertz. The chips in your computer today are about 3 gigahertz. And so we’ve really plateaued, whereas before that we were improving them exponentially. So we really have lost a lot of it already. It’s also very clear that we are pretty close to the end of even being able to get more transistors on. And that’s not just me saying that. The people who designed the roadmaps to figure out, “What are the technologies that we need to put together in order to make the next generation of Moore’s Law happen?”—those folks have already said, “Whoa, whoa, whoa, that can’t be the path anymore.” So it’s no longer just lone voices in the cold saying it’s going to end. Now, it’s a lot of the community.

Speed and the number of transistors on a chip: I think those are two different things. How are they different?

I think a useful way to think about this is to think about that miniaturization that Feynman had proposed. Every time you shrink those transistors, you can fit more on a chip. And that’s just geometry, right? We know that if each one takes up less space, you can fit more on. So that’s really good. And that allows you to have things like more memory, for example, on your chip. And so if you think about cache—that may be something people see when they’re buying their computers—the size of cache has been going up a lot over time. So that’s really good.

But the speed of your computer is also based on how many operations you can do in one second. So you do one thing, and then you do the next thing, and you do the next thing. And one of the things that modern computers are really good at is doing things way, way faster than we can. So if you think about our clock being one second, they can do three billion things in that one second. Three billion sets of operations. It’s enormously fast, but it turns out that also came from the miniaturization that was happening with Moore’s Law.

The limit to running chips faster is always that as you run them faster, they produce more heat. And the problem is if you produce a lot of heat, you’ll eventually melt the chip. So there’s a limit to how fast you can sort of toggle it up to be able to run it faster. And you hit that limit and then you stop. And the nice thing is, as we miniaturized, it turns out we were able to run them faster, right? They produced less heat because they were smaller. And so you could turn up the speed a little bit and you just kept being able to do that over and over again until we hit this limit in 2004 and 2005.
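
The physics behind that plateau can be sketched with the standard dynamic-power relation for chips, P ≈ C · V² · f. The snippet below is only a rough illustration of that textbook relation; the scaling factors are generic assumptions, not figures from the conversation.

```python
# Back-of-the-envelope sketch: dynamic power in a chip is roughly P ~ C * V^2 * f,
# where C is switched capacitance, V is supply voltage, and f is clock frequency.
# Under classic scaling, shrinking transistors lowered C and V enough that f could
# rise while power stayed manageable; once voltage stopped scaling, raising f
# mostly just added heat.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Relative dynamic power, P ~ C * V^2 * f (arbitrary units)."""
    return capacitance * voltage**2 * frequency

baseline = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)

# Idealized shrink by s = 0.7: C and V scale by ~0.7, f by ~1/0.7.
s = 0.7
scaled = dynamic_power(capacitance=s, voltage=s, frequency=1 / s)
print(f"power per transistor after one shrink: {scaled / baseline:.2f}x")  # ~0.49x

# Without voltage scaling, the same shrink plus a faster clock saves nothing.
hotter = dynamic_power(capacitance=s, voltage=1.0, frequency=1 / s)
print(f"same shrink but without voltage scaling: {hotter / baseline:.2f}x")  # ~1.00x
```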

I write a lot about the downshift in productivity growth in the United States and in other countries starting in the ‘70s. And then we had this kind of blip up in the late ‘90s and early 2000s. I think this downshift—and people can debate if productivity is being properly measured—happened at the same time as this amazing improvement in chip performance and the capabilities of computers. So first, do we know to what extent Moore’s Law contributed to productivity growth over the past half century? Do you happen to know if there’s a rough estimate? I’m kind of asking you to do this off the top of your head, so I apologize.

I think that there are sort of two ways to answer that question. There’s the overall IT effect—what macroeconomists say IT is doing for productivity—and by one estimate, since the 1970s, about a third of all labor productivity improvements have come from improvements in IT. That gives you one version of this.

Right.

But it’s very hard in a lot of ways to measure it, because Moore’s Law has been a pretty stable thing over time, and that tends to be a hard thing for economists to measure. But in some of the work that I did as part of my dissertation, I can actually show, for example, that in this 2004–2005 era where we lose the speed-up, it turns out a whole bunch of firms are hurt by that. Their productivity does not rise as fast as a result. And so we definitely can see these effects coming in. And a lot of what my group and I work on is trying to get a much better estimate of this, because we think this is actually a pretty crucial question. Because we may have actually been systematically underestimating the effect that IT and Moore’s Law have had on the economy.

Given the fact that we may have underestimated it, how concerned should I be going forward? If we’ve struggled with productivity growth, at least as it’s currently measured, even with Moore’s Law, how concerned should I be about productivity growth—which is pretty important for raising living standards—now that Moore’s Law is at an end?

I am worried about this. I am absolutely worried about this. And the way I think of this is: If you look across society and you think of general productivity improvements, you’re talking about 1 or 2 percent per year as kind of the rate at which we make things better over time. And at its peak, the improvement that we were making in our chips was 52 percent per year. So it was vastly faster. And that really had spillover effects on everybody else. Everyone else could say, “Let me use computers to do this additional thing and make my part of the economy more productive as well.” And Dale Jorgenson at Harvard has done some nice work splitting that out and seeing how important it is.
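
To put those two rates side by side, here is a small compounding calculation; only the 52 percent and roughly 2 percent annual figures come from the conversation, and the 10-year horizon is just for illustration.

```python
# Illustrative compounding: how much something improves after a decade at
# different annual rates. The rates echo the interview; the horizon is arbitrary.
def cumulative_gain(annual_rate: float, years: int) -> float:
    """Total improvement factor from compounding annual_rate for the given years."""
    return (1 + annual_rate) ** years

for label, rate in [("economy-wide productivity (~2%/yr)", 0.02),
                    ("peak chip improvement (~52%/yr)", 0.52)]:
    print(f"{label}: {cumulative_gain(rate, 10):.1f}x after 10 years")
# Roughly 1.2x at 2 percent per year versus roughly 66x at 52 percent per year.
```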

Quantum computer startup QuEra Computing’s 256-qubit machine is pictured in Boston, Massachusetts, U.S., on February 18, 2021. Alexei Bylinskii – QuEra/Handout via REUTERS

And then as we get to the end, the question is, where do we go from here if this engine has been slowing down? And of course, there are some candidates. People talk about artificial intelligence: Are we going to be able to use that? Quantum computing: Are we going to be able to use that? And some of these other things. And the question is, do these things have legs in the same way that Moore’s Law has had? And I think it’s not at all clear that any of them will be able to take up that mantle in the way that Moore’s Law has done, particularly over so many decades.

Computers started out at the very beginning as a very specific technology developed for use in war—I’m sure you know a lot more about this than I do—whether it was calculating artillery ranges or something. Then computing became what economists call a general-purpose technology. You’ve written that it’s now transitioning back into a more specific technology, where computers and chips become more special purpose. Do I have that right?

Yeah, that’s right. That’s right. So maybe a way to think about this is a Swiss Army knife versus having a whole toolkit full of hammers and screwdrivers and all those kinds of things. So you could say, “I’m trying to make this decision, should I invest in a Swiss Army knife or should I buy a screwdriver and all those kinds of things?”

I think in my toolbox, and I’m sure in yours, we have the full set of things, right? We don’t just have this Swiss Army knife. And the reason that’s true is because, obviously, if you specialize, your tool for one particular thing does that thing better. But the remarkable thing about these general-purpose chips, the CPUs, was that over time they got better so fast that the choice was more, “Well, do I want to buy a screwdriver today, or do I want to buy a new Swiss Army knife four years from now that’s going to be vastly, vastly better?” And so we sort of kept on that path.

But it only works, that trade-off is only worth doing, as long as the Swiss Army knife is getting better fast enough: as long as the CPU is getting better fast enough. And what we’ve seen is a real breakdown in that. That is, as this slows down, we’ve gone from that 52 percent rate of growth I told you about before. By one measure, we’re now down to a 3.5 percent per year improvement. And so at that level, you’re much better off saying, “I want to get the screwdriver myself.” And we already see lots and lots of firms doing this. So Google is building their own chips. Amazon is building their own chips. Tesla is building their own chips. Lots of people are going down this road to build a specialized chip that is right for exactly what they want to do, not the one that we’re all on the same platform for.
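
One way to see why the trade-off flips is to compare the doubling times implied by those two rates. The calculation below is only an illustration; the 52 percent and 3.5 percent figures are the ones cited in the conversation.

```python
# Doubling times implied by the two improvement rates mentioned in the interview.
import math

def years_to_double(annual_rate: float) -> float:
    """Years for compounding growth at annual_rate to reach a 2x improvement."""
    return math.log(2) / math.log(1 + annual_rate)

print(f"at 52% per year:  ~{years_to_double(0.52):.1f} years to double performance")
print(f"at 3.5% per year: ~{years_to_double(0.035):.1f} years to double performance")
# Roughly 1.7 years versus roughly 20 years: waiting for a better Swiss Army knife
# stops making sense, so firms buy (or build) the specialized screwdriver instead.
```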

You mentioned AI earlier. When you talk about AI, what are we talking about? Machine learning, deep learning? And is this world of more specialized chips just fine? If your specialty is deep learning, you have a deep learning chip. I’m probably wildly oversimplifying this. So does that have a negative impact? What do you mean by AI? And then what is the impact of a more specialized chip?

I think you’re so right to point that out. Let’s talk about what we mean by AI, because people mean everything from artificial general intelligence—which obviously has a whole bunch of particular implications—to, I would say, sort of a catch-all phrase where almost everything from data science these days gets wrapped in this blanket of artificial intelligence. Certainly within the last 10 years, the thing that has really changed remarkably is deep learning in particular.

And for those in your audience who haven’t seen this, this used to be called neural networks. These have existed since the ‘50s, and it was just that when we didn’t have much computing power, they were kind of small and they didn’t have very many layers, which is sort of one of the ways that people think about them. And as we got more computing power, they got deeper, which is why we call them deep neural networks now. So that’s sort of how we get there. That’s really where the revolution has been in the last 10 years. And at the cutting edge, the models that are beating records and so on are almost all deep learning models.

And does the slowing of Moore’s Law and the move to more specialized chips have any impact on the progress of deep learning AI?

Yeah. So I mean, what you can see is that when you have something like deep learning, where it’s clear that there’s potential here, people have invested in building these specialized chips. And so people may have heard of GPUs, graphics processing units. But people have built even more specialized [chips]. So Google has their own tensor processing unit, TPU.

So these sort of ever-more-specialized things are sort of taking us down this road of becoming more and more efficient at that particular thing. And so the good news about that is that, indeed, you get a big performance gain. So one of the results that Nvidia pointed out not too long ago was that they could get about a 100x improvement from the specialization that they were doing. So that’s, on one hand, pretty great.

NVIDIA computer graphic cards are shown for sale at a retail store in San Marcos, California, U.S. August 14, 2018. REUTERS/Mike Blake

On the other hand, that’s not at all big compared to the sweep of Moore’s Law, right? And so you were asking, should I not be worried? And I think the answer is you should be worried, because specialization gives you that one-time gain—and maybe incremental ones. Maybe you can build the screwdriver a little bit better each time, but you run out of steam pretty quickly. Whereas Moore’s Law had much, much longer legs and many more decades of improvement that it could offer.
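
For a sense of scale, here is a rough comparison using the roughly 100x figure Nvidia claimed and the 52 percent annual rate cited earlier; the comparison itself is only an illustration.

```python
# How many years of Moore's-Law-era improvement (~52% per year) it would take to
# match a one-time ~100x specialization gain.
import math

years = math.log(100) / math.log(1.52)
print(f"~{years:.0f} years of 52%/yr growth matches a single 100x gain")  # ~11 years
# A one-time 100x jump is large, but the old exponential matched it in about a
# decade, and then kept going.
```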

Do people think there’s a lot more progress to be made in deep learning? Or do they think it’s being exhausted already, and they’re trying to think of the next thing? Where are we in that revolution?

I think there are people in both camps on that. I think that there are lots of folks that are still excited by all the progress we’re making here, and I think they have some real things they can point to. The one that I’m particularly excited about recently is AlphaFold, which is this ability to model protein folding with deep learning. And that’s a remarkable achievement, right? It’s a problem that we’ve struggled with for a long time. And it has a lot of benefits for how we do medicine in the future. So I think there’s real promise there. And so that camp can say, “That’s great, lots to be done there.”

At the same time, we do see us starting to run into limits on how we implement these things. We run into cases where the inherent inefficiency of deep learning—which we can talk about if you’re interested—comes with a very high computational price. And so you see people starting to say, “Well, do I really want to pay that price?” So one example that we saw very closely: We did an MBA case on this where a supermarket had said, “I’m really interested in using deep learning to predict the demand for my products.” And of course, that matters a lot for a supermarket, because they want to know how much to stock on the shelves and the like.

And it turns out that they did it; they got a real improvement in performance. But for many of the products that they put on their shelves, it was not economically worth it to run that model. The computational cost of just running it overshadowed the gain. So I think we’re seeing both camps, and we’re seeing the second group of folks saying, “Well, maybe I’ll use deep learning in some places, but in other places I won’t, or maybe in some places I’ll try to adapt it so that it can become more efficient.”

If we’re at the end of Moore’s Law as we’ve thought about it and now we’re moving to specialized chips, is there a new sort of chip technology that would return us to these massive gains and return us to a general-purpose technology?

None that we know yet. There are candidates.

I’m sorry to hear that.

Yeah, me too. Me too. There are candidates, right? People have proposed architectural changes in the way we do our switches that might make things more efficient. I think there’s some possibility there. Although, again, probably not the decades and decades of Moore’s Law. People have talked about things like optical computing. So right now we have wires and electrons doing our calculations. Maybe we could use light and photons to do it. That seems like it might be interesting, particularly for some kinds of calculations. Then there are other things like quantum computing, which I think many people have the sense is just the next generation of computing.

I actually don’t think that’s right. I think it’s more like a different kind of specializer. So it’s like, we’re going to have our main computers and we’re still going to have them in the future. But then on the side, we’ll say, “Well, for a certain subset of problems, quantum computers can do really well, so we use the quantum computers for those.” So I think that’s what the landscape looks like right now. But as I say, I don’t think any of these really look like they’re going to be the next general-purpose technology, for the moment.

I can’t remember a time when computer chips have been as much in the news and on the national news pages. There are concerns that not enough chips are made in the United States, or not the best kind; they’re made in Taiwan, but what if China invades Taiwan? How hard is it to say, “Oh, now we want to make these things in the United States”?

Workers manufacture LED chips at the plant of Tsinghua Tongfang in Nantong city, east China’s Jiangsu province, 26 December 2011. China is propping up its local chip manufacturing industry with new policies and financial support intended to turn the country into a semiconductor-making powerhouse by 2030. Via REUTERS

I think some people think you take apart a factory in one place, you move it to the United States. Would that be a very significant change, if companies tried to make more of their chips here? And does that take a number of years? It sounds like it would be hard, harder than I think many politicians think.

It really depends on how cutting edge you want your chips to be. So I was talking before about that miniaturization that’s going on. And so the smaller you get, the harder it gets, the closer you’re getting to moving around individual atoms and things like that. And so if you want to be away from the cutting edge, actually, there are lots of places that can build that. And so that is more broadly available technology.

Neil, I want America to be on the cutting edge. I want us to be on the cutting edge.

Well, yes. I don’t blame you.

I don’t want a chip that runs a toy. I want the best.

The problem with that is building one of those factories these days costs $20 billion or $22 billion. So it’s a big deal. It’s really hard. You need very cutting-edge equipment to do it. The challenge there is that we probably did not worry about this as much in the past because there used to be 25 different companies, all of whom were on the frontier of building chips. And as these factories have become more and more expensive over time, what we’re down to is now basically three different companies that produce these cutting-edge chips. So it is very hard. There are not very many folks that do it at the cutting edge, but it certainly is important that we have good production facilities and that we know that they can be secure. Absolutely.

Neil, thanks for coming on the podcast.

My pleasure. Thanks for having me.

James Pethokoukis is the Dewitt Wallace Fellow at the American Enterprise Institute, where he writes and edits the AEIdeas blog and hosts a weekly podcast, “Political Economy with James Pethokoukis.” Neil Thompson is an innovation scholar in MIT’s Computer Science and Artificial Intelligence Laboratory, a research scientist at the MIT Initiative on the Digital Economy, and an associate member of the Broad Institute.
