Evaluating World War II-era crisis innovation: My long-read Q&A with Daniel P. Gross

By James Pethokoukis & Daniel P. Gross

Crises
like World War II can give researchers the funding and public support necessary
to make great breakthroughs in R&D, many of which have applicability far beyond
the original crises. But are there trade-offs to this method of innovation? Do
we need to prioritize basic research before we can hope to see results from
crisis innovation efforts? Recently, Daniel Gross and I discussed what we can
learn from the World War II era and how it may not be as widely applicable a
success story as many regard it to be.

Daniel is an assistant professor at Duke’s Fuqua School of Business and a faculty research fellow at the National Bureau of Economic Research. He’s the author of several papers examining innovation policy in the World War II era, the most recent of which is “Organizing Crisis Innovation: Lessons from World War II,” which he co-authored along with Bhaven Sampat.

What follows is a lightly edited transcript of our conversation, including portions that were cut from the original podcast. You can download the episode here, and don’t forget to subscribe to my podcast on Apple Podcasts or Stitcher, or download the podcast on Ricochet. Tell your friends, leave a review.

To
start, how important was science research to the Allies in winning World War
II?

Great
question — this will take us back about 75 or 80 years. So World War II was really
one of the most acute emergencies in US history. It’s hard for us to imagine
now, but this was a time when a global hegemon was taking over continental
Europe and the US, in principle, was next. The US initially lagged behind the technological
frontier of modern warfare, so a handful of science administrators went to the
president and basically asked him to fund R&D in military technology.

What
came out of that effort in World War II turned out to be immensely important to
the war effort itself, and it also yielded multiple dual-use technologies that
found civilian uses in the post-war era. We recognize many of these discoveries
as being pretty standard today, things ranging from radar and mass-produced
penicillin to atomic fission.

Is
it generally assumed that these were crucial advances? Obviously, the one
advance we probably all know about is the atomic bomb, but the other advances
must have been important as well, right?

Yeah.
And what is also hard to overstate is the breadth of this effort. It’s not
just the scale. It’s also the number of fields that were invested in and where
substantial progress was made during the 1940s. I only listed a handful of
examples, but if I jumped more in the medical direction, for example, the
research in World War II advanced our basic understanding of nutrition and how various
human stresses affect homeostasis, like what happens to the body at high
altitudes or in frigid temperatures.

The
progress that was made in everything from applied technology and digital
communications, electronics and radar, even all the way over to medicine and
surgical techniques — these are all things that we continue to use today.

You
mentioned applied research. I would imagine that, at this time, there was
considerable pressure on scientists and researchers to focus their work on what
would help the US win the war. They didn’t need research to win the next war in
20 years, and they didn’t need some sort of basic research that might eventually
produce something. So I imagine that there was a lot of pressure to keep the
research super focused on solving very practical wartime problems.

It absolutely was, and this is what makes crisis innovation problems distinct from innovation problems in regular times. Crisis innovation problems arise when there is already a crisis and the object of the R&D is actually a crisis solution. In contrast, the goal of regular-time innovation is typically developing basic science and improving standards of living over a longer horizon.

There
were a number of things that the Office of Scientific Research and Development
(OSRD) did during World War II to try and support this mission — everything
from engaging top contractors, scientists, and industrial R&D-performing
firms to parallelizing R&D efforts where there was uncertainty. For
example, in the race to develop an atomic weapon, the OSRD undertook multiple
approaches to uranium enrichment, not knowing which one would actually pan out
first. They closely coordinated with the military, both in priority setting
and in understanding what problems were the most important to try and resolve for
soldiers in the field. They even had influence in production and diffusion, helping
figure out what it would take to get technology into the hands of the people
who needed it the most.

As
you’ve already mentioned, there’s one part of that science effort that most people
are super aware of: the Manhattan Project. It’s also something policymakers are
very aware of — if there’s some big problem facing the US, it’s common for someone
to say “We need a Manhattan Project to solve it,” or “We need an Apollo
program to solve it.”

How
relevant are those examples of innovation today, whether you want to call it
crisis innovation or innovation more broadly? What can we learn from the
government’s role in innovation by using those two examples as models?

One
of the many things that has motivated my work on the World War II era is just
how frequently, in the past year especially, the World War II metaphor has been
invoked — a metaphor that, in my view, can be a bit overused. I think it’s
useful to be more specific in what it was about the setting, problem, and approach
in World War II that made it successful. Sure, we can reference the Manhattan
Project, but what I really think about is the broader wartime research effort
because it all ties together.

The control room of the K-25 facility in Oak Ridge, Tennessee, during WW2. Oak Ridge, sometimes referred to as the “Secret City” or “Atomic City”, was established in 1942 to develop materials for the Manhattan Project, the operation that developed the atomic bomb. Via REUTERS/U.S. Department of Energy/Ed Westcott

What made World War II research distinctive was, first of all, the urgency and time horizon. For example, while climate change is an urgent crisis today in the sense that if we don’t take measures now, consequences will come to bear in 20, 50, or 80 years, the threat isn’t necessarily imminent in the sense of it being a year or two out, or even a week or two out. The World War II impending crisis scenario is more similar to the ways in which the COVID pandemic has presented a truly imminent threat to wellbeing.

Second
of all, what’s different about the World War II period is that there was
primarily a single customer for the R&D, that being the US military and its
branches. Yet that single customer is actually part of what made such great
developments possible. You had military advisors and liaisons sitting on
research committees to help identify and come up with specific research
proposals while also being involved in the translation efforts from bench to
battlefield.

Now,
in the case of the pandemic, that kind of model might also apply to coordination
between the scientists, the science-funding agencies, and, say, hospitals or
county public health departments, as they are actually on the front lines
fighting this battle. For something like climate change, this relationship is
harder to facilitate because the actual user base is a bit more diffuse, meaning
it’s a bit more difficult to apply this “command-and-control” funding approach.

For
sure. With climate change, you have a situation where, first of all, not
everyone agrees it’s a crisis. And even if you think it’s a crisis, it’s a
crisis where the worst effects may be seen in decades. But I certainly hear the
Manhattan Project example used all the time, even for people who are skeptical
about industrial policy or government funding of applied research. They’ll say,
“Well, maybe climate change is the one exception, because it really seems
like a crisis.”

So,
in what ways can we apply lessons from the Manhattan Project to climate change?
And in what way should we ignore those lessons?

Well,
one of the first lessons that I come away with is just how valuable investments
in scientific infrastructure in ordinary times can be when a crisis arrives. Already
having research underway in an area, whether it be progress in basic science or a fundamental
understanding of natural phenomena, is crucial in times of crisis.

During
the pandemic, for example, our prior understanding of messenger RNA was pivotal
in vaccine development and vaccine production. It was also crucial in our
pandemic response to have a highly trained technical and scientific workforce
already in place, not to mention adaptive scientific institutions that included
universities and R&D-performing firms.

What’s
difficult is deciding where to invest in scientific infrastructure in normal
times. We have to determine what fields are of strategic importance and to what
degree we want to try and select areas to focus on in preparation for the next
crisis. Because that question borders on industrial policy and picking “winners”
and “losers” in the technology space, it becomes a bit of a fraught question —
one I’m not in a position to advise on. Perhaps the best approach would be to
solicit feedback from the country’s technological experts and premier scientists
and engineers. Honestly, the question of what investments we should make in
ordinary times to prepare for the next crisis is not one I quite know how to
answer. All I can say is that, from what
I’ve learned from the past, advanced work eventually proves to be immensely
valuable.

Right. Surely in World War II, they were drawing upon a deep reservoir of basic research originally done when it would have been impossible to predict how it might play out decades later in a global war.

You
mentioned mRNA with these vaccines — that wasn’t something we could’ve made up
on the spot. There were techniques already out there that we drew upon, refined,
and turned into actual products that now we jab into people’s arms. So you need
to have that reservoir.

One
of my concerns is that we’ve gotten so confident in thinking that we know where
funding should go — whether it be to vaccines, AI, or clean energy technology —
that we’re going to forget about basic research. Basic research seems a bit off-point
right now when we’ve got so many “obvious” problems to deal with.

I agree, there are many risks in that thinking. For one, we could forget about basic research entirely. Or we might divert resources away from other fields that might yet hold promise, which would also be detrimental. One of the questions that I reflect on in the World War II context — and also think about today as pharmaceutical research is diverted from problems of long-standing importance to focus more on COVID — is what might’ve been left behind?

Let’s
just take the World War II context for a moment. You had the scientific
establishment — say, the country’s physicists — shifting from whatever work
they’re doing before to instead focus their energies on atomic fission and
radar. While we might celebrate that effort because it’s easy to see what we
got from it, it’s difficult to know, and easy to overlook, what we might have
also left behind.

That’s
one of the questions that I’m continuing to explore in my research: What
actually might’ve been crowded out as the research and technological
development we now celebrate was crowded in?

What do you feel
confident saying about how federal R&D funding crowds out private funding?
How does that interaction work?

That’s
a great question. There is certainly work on this, but it’s a difficult
question to answer because ultimately counterfactuals are hard to observe.
However, by using modern statistical methods we can try to approximate them.

My recollection of the literature here is that there’s mixed evidence, but there are certainly cases where public investment attracts and complements private investment. That’s not to mention the many other settings where public investment seeds basic research which can then be commercialized, whether through technology transfer, universities that rely on public funding, the private sector, or spin-outs. The complementarities vary a bit from setting to setting.

It’s
not really the kind of question I want to take a hard line on and say,
“There is a single-parameter answer to this.” There’s not a single
number we can point to and say, “Here’s the degree of crowd-in or crowd-out.”
Rather, we can say it varies across settings.

I
think what I ultimately want an audience to be mindful of is just that there
can be crowd-outs. It’s actually very easy to overlook that, especially when we
think about some of these crisis moments and how productive we were in actually
meeting the moment, like we have been with vaccines today.

Certainly.
I think vaccine success, combined with people’s belief that China’s fast
economic growth has come from its five- and 10-year plans funding key
sectors, has made some think, “Well, maybe policymakers can do it. They can
pick technologies, devote funding to them, and get great results.” But
again, innovation in a crisis, where you have a specific problem you’re trying
to address, is different from innovation in other areas. Also, in crisis innovation you
have funding freed up in ways you wouldn’t see otherwise.

There are a lot of ideas right now about spending more on basic research. We’ve had Jonathan Gruber on the podcast who has a very expensive plan — about a trillion dollars over 10 years — to try and create technology hubs around the country. What do you think of that idea? Would moving innovation away from the coasts — from tech hubs like Austin, Boston, and Silicon Valley — to create more top-down science hubs across America be successful?

I’ve
spoken with Jon and his coauthor Simon Johnson several times, and I very much
appreciate how they’re pushing these ideas. But being a younger scholar, and
also being a bit more steeped in the World War II era, my perspective isn’t
necessarily to push a particular position on this but rather to highlight some
of the trade-offs.

I
think this conversation again requires going back 75 or 80 years to World War
II. To meet the challenge of the moment, the OSRD really emphasized funding the
best scientists, institutions, and firms available. Its goal in that crisis was
to deliver the best results it could as quickly as possible. And as a result,
much of that era’s R&D funding ended up concentrating in specific locations,
with different technologies centered in different places.

That ultimately led to policy and political debates in the 1940s, when Harley Kilgore, a senator from West Virginia, raised concerns that the OSRD wasn’t actually funding as wide a range of R&D performers as was available, and that opportunity was being squandered as a result. This debate continued after the war, especially after Vannevar Bush, who directed the OSRD, published the seminal report “Science, the Endless Frontier,” which proposed a National Research Foundation to fund basic research in peacetime. So the debate was: Do we want to fund the best scientists, or do we want to ensure that scientific research is being widely supported?

Now,
those two choices inevitably present trade-offs. I can tell you from my own
work that the OSRD’s R&D investments ultimately seeded technology hubs in
the different places where it operated, which had long-run effects on both the
direction and the location of US invention — and downstream from that, effects
on entrepreneurship and employment in high-tech industries. It even appears as
though those effects have played out all the way through today. That’s all to say
that the questions raised by Kilgore and Bush in the 1940s were real, and
they’re still debated today.

Certainly,
in a crisis it might seem like a natural choice to fund the most productive
scientists and institutions in the most productive places, like Silicon Valley
or Route 128. But in normal times, objectives might be broader than even
typical place-based policy. For example, policy might not just be interested in
lifting more boats with the rising tide, but could also be aimed at a certain strategic
value. To reference what I said earlier, policy might have a goal of investing
in scientific infrastructure to grow the number of regions that are R&D
hubs, not only for national competitiveness, but also in preparation for the next
time that the scientific establishment needs to solve a crisis.

So
pulling that all together, I can see how this moves us a bit away from the
positive realm and more into the normative realm. Here, I can better make a
case for a Gruber-and-Johnson type of policy, where an objective is to invest
in more regions and for R&D funding to be more evenly distributed.

For
people who aren’t familiar with Gruber and Johnson’s plan, regions would
compete for funding that would be allocated based on a variety of criteria. One
reason for doing that is, if you want to spend a lot of money on R&D more
broadly, you have to justify it. And if it’s being spent in a lot of states or
in a lot of congressional districts, it’s much easier to gather the political
support to make that kind of thing sustainable.

It
may seem like that’s just politics, but at the end of the day you have to sell
a plan on a national level. I don’t think there’s anything wrong with trying to
consider how to do that. And if that means more money might be spent in Keokuk, Iowa, than there would be otherwise, so be it.

True.
We haven’t even touched on the political economy of science funding, which is a
great and intriguing question. What’s interesting about, say, the Cold War era
and the moonshot, is that Sputnik galvanized public support in some of the same
ways that World War II did. Because there was this perception that US
technological supremacy was threatened by the Soviet Union — whether that was
actually true is a different question — public support for funding science and
technology drastically increased. In turn, that really lubricated the political
support for increasing NSF budgets or just general government spending on
R&D.

So
the question then becomes, what does it take? Does it take a crisis to
galvanize that kind of public support?

Knowing
that there’s increasing interest in having the government spend more on innovation
research — whether it’s basic research, applied research, industrial policy, or
perhaps even approaches based on what we learned in World War II —
what would you have policymakers keep in mind
when they’re thinking about spending more on R&D?

I
would advise that they seek input from the country’s leading scientists and
engineers. There are also organizations that are channels for this kind of
feedback — for example, the National Academies. But given how difficult it
actually is to see the future and pick “winners” and “losers,” we want to be
careful with any kind of policy that trends in that direction. Essentially,
aggregating input from a number of sources is always a valuable endeavor.

My
guest has been Daniel Gross. Dan, thanks for coming on the podcast.

Thanks Jim, it’s been great to be here!

James Pethokoukis is the Dewitt Wallace Fellow at the American Enterprise Institute, where he writes and edits the AEIdeas blog and hosts a weekly podcast, “Political Economy with James Pethokoukis.” Daniel is an assistant professor at Duke’s Fuqua School of Business, as well as a faculty research fellow at the National Bureau of Economic Research.
