The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.
Paul Romer once considered himself the most optimistic economist. He rightly predicted that technology would take off as an economic driver coming out of the inflation of the 1970s, but he acknowledges that he did not foresee the inequality those technological advances would lead to.
On this episode of the Me, Myself, and AI podcast, Paul shares his views on AI advances and their implications for society. Rather than pave the way for full automation, he advocates keeping humans in the loop, and he believes that technology need not be slowed down so much as pointed toward more meaningful and beneficial uses, citing education as an area ripe to benefit from AI.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Shervin Khodabandeh: What does a Nobel Prize-winning economist think the future of AI holds? Find out on today’s episode.
Paul Romer: I’m economist Paul Romer, and you are listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Hi, everyone. Today, Shervin and I have a fun treat for you. We’re joined by Paul Romer, winner of the 2018 Nobel Prize in economics. He’s a professor at Boston College and directs its new Center for the Economics of Ideas. Paul has a fascinating background, including time as the chief economist at the World Bank, founder of multiple companies, and avid contributor to public policy discussion everywhere. He’s really at the forefront of thinking about how the economics of ideas differs from the traditional economics of scarce physical objects. Beyond that, he is an interesting and nice person, so we’re thrilled to have him with us today. Thanks for taking the time to talk with us.
Paul Romer: Well, I’m glad to be here. It’s always fun to talk.
Sam Ransbotham: All right, let’s start off with the elephant in the room. There is a whole lot of talk about artificial intelligence right now. If we think about the spectrum of technologies and how they affect society, we might have the wheel, fire, electricity, the steam engine, and maybe transistors on one end, and on the other end, something like Segways and Clippy. Where are we on this spectrum with artificial intelligence, do you think?
Paul Romer: Let me start by repeating something I said in class yesterday, because my students are very excited about AI. I’m a little more skeptical. I think there’s a lot that will be possible because of AI, but I think people are buying a little bit too much of the hype and they’re losing perspective. I told them that I don’t think AI is actually the big revolution that we’re living through right now. The real revolution, I think, is stuff we’re seeing coming out of the war in Ukraine. The nature of that revolution is that sensors and devices are giving people the ability to do things that they could never do before.
The problem with the way most people are framing AI is they’re thinking about autonomous vehicles where you’re taking the human out of the loop; the technology is the replacement for the human. That is not working that well. The autonomous vehicles were supposed to be the killer application of AI, and it’s turning out to be a bust. There’s a reason Apple just canceled its car project.
But if you go back to using technology to enhance what people can do — you have an interface that lets a human interact with data from a lot of sensors and software that then translates what the human does in the interface into things that are happening out in the world — incredible things are going to come out of that. But this kind of just turn it over to the machine and let go, I think that’s going to turn out to be very disappointing.
Part of the way I tried to explain to the students how big this revolution is: If you look at an organization like the Marines, the Marines recognized a while ago [that] the tank is just history. The weapons like the tank are not going to be important going forward. And they’re thinking about things like electric motorcycles and portable missiles. The Air Force just canceled a big project that they had where they were building helicopters. They said the same thing; the helicopter is not going to work in this new world of sensors and devices.
So very big things are happening. I still think that this idea of just train the machine, use unsupervised learning, and it just gets really smart and does things for us [is not] where the action is going to be.
Shervin Khodabandeh: I agree with you, Paul, that today you can’t hand it over to the machine and expect it to take over. And I do believe that we humans have this amazing ability to make everything completely black and white. It’s either AI or nothing, or human or nothing. Totally aligned with you there. You also made the point that there is a real revolution going on, which is the proliferation of information and data and just our ability in instrumentation … and measurements. Where I might disagree with you is when Sam was making a parallel to other technologies [like] the wheel and the electric motor. All those things were disruptions and changed the way of working. They changed how humans worked in a factory. The electric motor changed how people worked, whether [they were] all aligned with one main shaft or in departments or compartments. What I would say is AI has begun to do that. And the reason I believe that is if I look at things that were uniquely human, [our] cognitive abilities — whether it’s vision or it’s logic or creating images and things like that — we’re seeing evidence that AI is doing those.
We’re also seeing evidence that AI is doing them now much faster than it used to do before. So whereas it maybe took 10 years for computer vision to surpass human ability, it took [AI] maybe one year or two. Of course, it built on decades of research, but since its introduction, it maybe took a couple of years for AI to be able to summarize books and text or create images and content. So I guess where I would debate what you’re saying is, don’t you think that if you play forward this evolution, maybe another three or five or 10 years, there will be a world where AI will do much more of the things that humans had a monopoly on?
Paul Romer: Yeah. So to recap — and it’s good to disagree. For academics, this is how we make our living: You disagree with people, and it’s the conversation [that] converges toward some notion of truth. So I’m arguing that the way it’s always been and, in effect, the way it will always be, is that you keep the human in the loop and the tools make the human more powerful. Whereas you’re saying no, we’re starting to see domains where you don’t even need the human in the loop — the machines take over.
Shervin Khodabandeh: No, actually, I would agree with you that we always need a human in the loop. But I also would say that we need a lot less of the human in the loop, so that human could do a lot of other things, whatever those things might be. But I actually agree with you that you cannot have AI completely replace humans. On that, I agree with you.
Paul Romer: But I also mean this: Not for ethical reasons or to kind of supervise but just to get the job done, I think you’re going to need the human in the loop. And so I guess, let me try and restate the distinction: You’re thinking there are things that, say, people could just offload. We let the machine do it, and the human can build on that. There are some places where we’re close to that. One where I’ve actually used it myself: Whisper is an AI that will give you a text transcript of an audio recording. Maybe you produce text from this podcast. The AI, the machine, is pretty good at that. There are still mistakes. I’ve still had to fix up those transcripts, but they’re pretty close to being things you could just turn loose and not participate in.
But they still make mistakes. And so I still have to do a few fixes there. And I think that’s still going to be a lot of the experience. And I think that what we’ve seen repeatedly is that very rapid progress early on makes people think, “Oh, we’re going to very soon get to point X.”
We fail to anticipate how quickly the diminishing returns or the extra effort kicks in. Autonomous vehicles made rapid progress initially. Now they’re finding that the few rare cases that come up, that are very difficult to anticipate, are very hard to solve. So I think we’ve got to be careful about this extrapolation from rapid progress. And we’ve got to look and say, “Where are the cases where we’ve actually gotten all the way to the point where we’ve just taken the human out of the loop?” And if it’s hard to get there, in most cases, we’re going to make systematic mistakes if we’re betting based on the high rate of initial progress.
Shervin Khodabandeh: I totally agree with you. And I think that there might be even no-fly zones altogether in areas where this expectation that AI is 100% removing humans is a false expectation.
Paul Romer: But let me drill in on this. Do you think that autonomous vehicles are going to work? Is there money to be made investing in autonomous vehicles?
Shervin Khodabandeh: I don’t know the full economics of it, right? But from a capability perspective, I would say that there is a future, maybe it’s in 10 years, maybe it’s in five. It’s not in one or two, but not in 50. I think there’s a future where there will be autonomous vehicles that, in aggregate, would make far fewer errors and fatalities than human drivers do.
Sam Ransbotham: Shervin, that’s an important point. Our benchmark isn’t perfect humans. We had 43,000 deaths last year in the U.S. from human drivers. One way to think about this would be to separate out the problem. For example, think about the difference between long-haul trucking and driving around narrow Boston streets. It’s much easier for me to see that we’d make progress on long-haul trucking first. We talk about these problems in big clumps, but we just don’t have to.
Paul Romer: I think I’m still a lot more skeptical than you about the autonomous vehicles. I think we’ll actually create some new domains where semiautonomous or fully autonomous vehicles can operate, like tractors. You know, John Deere is pursuing this.
But I think it’s revealing that Apple has just decided there’s no money to be made in this line of investment after investing a bunch of money. I also think it’s important to remember that what matters here is the social acceptability of these technologies. It isn’t just whether somebody can argue in statistical terms that this is safer than what we usually see with people. If you look at airplanes, you could argue that we’re making airplane travel way too safe, because it’s much more dangerous to ride on a bus or take a train, but [it’s] very clear the public does not want aircraft travel to be more dangerous. So we have to just accept that even if, on average, the autonomous vehicles are, say, safer than the people, voters are not going to accept machines that kill people.
Shervin Khodabandeh: Or the same could be said in medicine, right? I mean, there are examples of AI in radiology where it is performing better than radiologists. I would still like a real radiologist and a real doctor.
Sam Ransbotham: But, Shervin, you want the doctor because you live in Los Angeles, where you’ve got access to great medical facilities, and so do we in Boston. I think if we had a conversation with some different people, we might have a different perspective.
Shervin Khodabandeh: Yeah, that’s a good point, too. When I think about most of what humans do — the rest of everything that people do in their regular jobs, which is sort through documents, summarize things, search knowledge — more and more of those things can be done with AI, arguably better.
Paul Romer: I think my answer is, let’s take it one step at a time. Let’s start with the things where I think we can make some adjustments — try manufacturing — but watching that alone may not be enough. Then I think there are tools we could use. You could, for example, use the tax system to subsidize employment of, say, manufacturing workers, so that the trade-off for any one firm might be, “OK, well, I can get subsidized workers, or I can just pay the full freight.” Or you could tax higher-income workers to subsidize the lower-income workers, or you could tax the machines or tax the transistors but use the revenues to then, say, subsidize the employment of the workers.
Shervin Khodabandeh: Do you think one of these tools could be or should be in [the] regulation of AI? And, look, I’m hardly the person to advocate something against innovation. Right? It’s like my own passion and my own career. And so I don’t mean it in a way of completely stifling innovation, but do you think that there is a time and place for governments to play a much heavier, a stronger sort of role in no-fly zones with AI, things that you can and cannot do? I mean, [this is] not that different [than] what we did with some of the genetic sciences, right? There are certain things that we don’t allow people to do, even though we can do it.
Paul Romer: Let me get at what I think is part of your question and even sharpen it. This idea of allowing the globalization of ideas and sharing of ideas, I was saying we’re still committed to the discovery of new ideas and to innovation. Other people are saying, “Well, we not only want to limit how you turn those ideas into physical products. We may even want to steer innovation in a particular direction, like steer it away from some things and steer it toward other things, but then we might even get to a point where we say, ‘No, we just want to slow the innovation down. We want to slow down progress because it’s just too hard to cope with the stresses that will cause.’”
I haven’t gotten to that point yet where I say, as a voter, I’m ready to slow things down. But I can see the possibility of coming to that conclusion. If you think the choice is we’re going to lose democracy and rule of law or we’re going to slow down technology, I wouldn’t have much trouble making that choice. I don’t think we’re there. At least that’s not the first thing we should try. But I think we should think carefully about what our priorities are. And I don’t think technology for its own sake is the be-all and end-all.
Sam Ransbotham: I like the idea of pausing. It’s always appealing. I have teenage kids, Shervin has teenage kids. I think the idea of pausing and enjoying that period longer, that seems great, but I don’t know how realistic that is. You know, my background was working with the United Nations and the weapons inspectors in the weapons of mass destruction phase. We’ve done a very good job of limiting that. We haven’t had any big explosions since 1945.
But that had physical goods to it. It had a physical thing that you could control. And the same thing is true with, for example, food safety. We have inspectors. We have regulations and supply chains. We have things we can regulate. Do we have the same tools that we had in prior iterations of technology, like biotech?
Paul Romer: I was thinking the same thing before Sam spoke, which is that even if I agreed [that] things have gotten so serious [that] I’m ready to try and slow down discovery, you have to ask, Is it even feasible for us to slow down? And that’s going to be a real constraint.
Sam Ransbotham: Some of your ideas here about jobs seem like they depend on the idea that there’s still going to be this Zeno’s paradox of approaching full automation. And as long as that exists, we’ll still have a human role. So that’s a pretty appealing argument as a human. I like that idea.
Paul Romer: Yeah. But if I go to this older idea of the race between education and technology, I think one of the things we have to keep in mind is that we could slow down the technology, but we could also speed up education. And we should think really hard about what we do to keep raising skill levels. Traditionally, we did that through things like extending the period of life that somebody spends in school. That’s getting harder and harder to do. We might get better at getting more productivity out of the years that people spend in school, but a lot of people have put work into that, and we haven’t made a lot of progress there either. The point that I got focused on at the World Bank was that a huge amount of skill acquisition actually comes on the job.
If you look at a society, a typical society, it may produce about as much skill through learning on the job as it produces in its school system. The amount of skill produced by one person spending a year in school is higher than the amount produced by one person spending a year at work — you don’t learn as much when you work — but there are a lot more people who are working. So you can end up producing as much human capital on the job. And I think we should be paying much more attention to the potential for jobs to actually enhance skill. This is actually one of my biggest concerns about some of the applications of AI right now.
Have you had Sal Khan from Khan Academy on your podcast?
Sam Ransbotham: No.
Paul Romer: I haven’t spoken to him ever, but I know he’s an optimist about how AI might actually be able to improve the rate of learning.
Sam Ransbotham: We did have [Zan Gilani from] Duolingo. I think Duolingo was a great example of exactly that.
Paul Romer: They’re another one. And they’ve kind of delivered, I think, on that. So those are the people who are optimistic about AI, and I frankly put some stock in what they’re saying. If we took that seriously and said that the only way this will be socially acceptable is [if] we do a better job of educating, we’d better put a lot of resources into figuring out how to use AI to improve education — and to measure, so we know that we’re really improving. And if we did that, we might come out OK.
Sam Ransbotham: We have to be careful because we’re university professors here, but the idea of this micro learning is a big deal. We’ve somehow gotten wrapped in this idea that the 15-week semester is the magic unit of time, and three hours per week is magic. And there’s nothing that says that’s the case.
Paul Romer: My wife is a big user of Duolingo, and she keeps learning new languages. It seems to work for her, but I’d love to have some more evidence about how Duolingo or Khan Academy can work. Right now, the big successes in those areas seem to be in enhancing opportunities for people who are already pretty good at learning languages or learning skills, so what would really be great is to see how we can use AI to help the bottom half of the class in school. But maybe that’s the optimistic message to come out of this. It’s a little bit like what I was saying — instead of trying to slow down technology, let’s point it in a direction.
I’m really impressed with what the military is doing in trying to understand what you can do with these technologies and how to use them to meet its mission. If we were as serious about using AI to improve education as the military is — part of what I like about the military is they know they can’t survive on hype. It’s really got to deliver, or they’re going to fail. But if we were that serious about using AI and developing it to help us in education, then we might actually end up with better technology and benefits that are more widely shared.
Shervin Khodabandeh: And now the question is, How can we use the capability and direct it in that way? And I think what I’m taking away from this conversation is that it’s not necessarily going to happen automatically. Because automatically, what would happen is probably what you said, which is more efficiency — think more widgets per hour — and fewer skills. That’s probably [taken] for granted.
Paul Romer: And more and more inequality.
Shervin Khodabandeh: Yep.
Paul Romer: It’s interesting, as I listened to you: I used to say that I was the most optimistic economist I knew. Back in the 1980s, as we were coming out of the inflation of the ’70s and limits to growth, and just this doomster kind of mindset, I was saying, “No, look, technology can go like gangbusters,” and I was kind of right about that. What I didn’t anticipate, though, is that it could also — if we didn’t take the right supporting measures — lead to a lot of inequality. I’m really disturbed by the degree to which U.S. society has both benefited from very rapid technological progress and opened up these big, growing inequality gaps. And it’s very hard to get at this with just the straight income inequality data, but look at life expectancy. People who are just high school educated are not increasing their life expectancy the way people who are college educated have; they’re even suffering real decreases in life expectancy. Life is really not turning out as well for them. I really feel like the failure was not a lack of technological development; it’s that we didn’t share the gains widely enough. And we’ve got to work hard at figuring out how to do a better job with that, going forward.
Shervin Khodabandeh: Yeah. Well said.
Sam Ransbotham: One of the [ways] we close this show is by asking five questions. Our first question is usually, What’s the biggest opportunity with AI? I think you’ve just mentioned learning, and I think we’ll take that as your first answer.
Paul Romer: OK.
Sam Ransbotham: But what’s the biggest misconception about AI people have?
Paul Romer: The idea that AI can write software. The idea that it can write code. I think that’s just claiming way too much.
Sam Ransbotham: What’s the first career you wanted?
Paul Romer: I wanted to be a physicist. I switched from physics as an undergrad to economics because I thought physics would be too hard. And it was also a time when the U.S. was closing down the space program, and there weren’t as many jobs in physics and so forth. But for most of my life, I wanted to be a software developer, and now I’ve had a chance to play at that for five years. Maybe I could try and be a physicist in another five years.
Sam Ransbotham: When are we using too much AI? When do we need less AI?
Paul Romer: You and I have talked about this, but I think we’ve got to make documentation easier for students to access, particularly the Python documentation, because if it were easier or a little bit more responsive, I think they’d spend more time checking out the documentation and they’d learn a lot from that. Just asking one-off queries, like, “How do I do this? How do I do that?” — I don’t think they’re seeing the big picture. We’re asking too many ChatGPT-type questions about Python, and not enough students are reading good documentation.
Sam Ransbotham: Just some backstory: Paul and I spend a lot of time arguing about Python and how to help people get sharper at it. What’s one thing you wish AI could do that it can’t?
Paul Romer: I wish I could manage my time better. I think a lot of what we learn is not so much facts we retain, but we learn to keep going even when things are going badly. [Say we hit] a wall and we’re stuck. How do you keep going on things? I wish AI could help me, almost like a coach. I wish I had an AI coach that helped me know when to keep going and when to stop, because it’s a very subtle decision. Sometimes you’ve got to just drop it. It’s not going to work. Give up on that path; go do something else.
Shervin Khodabandeh: I was going to ask, Would you listen to it?
Paul Romer: If I controlled it, if I could control it, yeah, maybe.
Sam Ransbotham: We didn’t even get into that — the whole thing about the domination by the tech giants here. Paul, it’s been great. I know we’re running short on time. It’s been great talking to you. Thanks for taking the time to talk with us today.
Paul Romer: Well, maybe we should make this an annual thing. Have me back again in a year, and we can say, “OK, let’s update: Which direction have things gone? Have they gone the way we thought? Or what was new?”
Shervin Khodabandeh: We’d love that. At the rate it’s going, maybe it should be semiannual.
Thanks for joining us today. Next time, Sam and I meet Mario Rodriguez at GitHub. Talk to you then.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.