Artificial Intelligence and Business Strategy
The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.
Jeremy Kahn’s investigation into the risks and effects of artificial intelligence is reflected in a new book, Mastering AI: A Survival Guide to Our Superpowered Future. But he has also written extensively about the technology in his role as Fortune magazine’s AI editor. On today’s episode of the Me, Myself, and AI podcast, he joins hosts Sam Ransbotham and Shervin Khodabandeh to share the insights on AI that he has gained through his work.
Their conversation explores a range of subjects, including people’s growing reliance on AI technology — specifically, generative AI, whose outputs are difficult, if not impossible, to trace back to a reliable source. They also discuss AI’s effect on critical thinking, how best to educate people about the technology’s risks and limitations, the value of cultivating employees’ adaptability, and how GenAI’s ability to simulate human interactions could be affecting people’s real-life interpersonal skills.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Shervin Khodabandeh: Stay tuned after today’s episode to hear Sam and me break down the key points made by our guest.
Sam Ransbotham: In what ways are we overreliant on AI technologies? On today’s episode, we chat with a journalist who shares some of his fears about AI.
Jeremy Kahn: I’m Jeremy Kahn from Fortune, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Hey, everyone. Thanks for joining us today. Shervin and I are excited to be talking with Jeremy Kahn. He’s the AI editor at Fortune, where he writes the weekly newsletter “Eye on AI.” He’s also the author of the book Mastering AI: A Survival Guide to Our Superpowered Future. Jeremy, we’re looking forward to talking today.
Jeremy Kahn: It’s great to be here.
Sam Ransbotham: So I’ll raise a complaint first. I was planning just to skim the book and just, you know, try to pull out a highlight or two, but it was interesting, and so it unfortunately cost me a lot more hours this weekend preparing than I planned. So that’s my angry start to this podcast episode.
Shervin Khodabandeh: That’s a good complaint. This is what Sam’s compliments always sound like.
Jeremy Kahn: Yeah, yeah. I was going to say, “Sorry, not sorry” on that one.
Sam Ransbotham: Well, but maybe let’s go negative here. We hear a ton about the benefits of AI, but let’s start with some of these risks that you foresee. I thought you did a nice job outlining those. For example, you caution against letting AI’s cognitive prowess diminish our own. Can you walk us through how such a powerful tool might diminish our human abilities?
Jeremy Kahn: Yeah, absolutely. One of the things I’m concerned about is that this technology is, in some ways, so easy to use and so seductive that we’ll sort of overuse it and, as a result, lose some of our important cognitive abilities. One I worry about is just critical thinking. I mean, anyone who’s used one of these chatbots from OpenAI or Anthropic or any of the others — it gives you a very fluid, plausible pat answer and a very capsule summary of whatever information you want. And I think it’s just really easy to kind of take that answer and not think too hard about it and go off and say, “Oh, I’ve got the answer.”
And the way these systems are currently designed, there’s no real prompting of the user to think about the provenance of the information that they’re receiving. And, in fact, in some cases, the way these things are designed, there’s no way — the system itself doesn’t really understand the provenance of the information it’s giving you, so it can’t actually tell you how it knows what it knows.
Sam Ransbotham: Explain provenance though, to start with.
Jeremy Kahn: Yeah, so provenance: I mean, what was the source of this information? Any particular fact it’s telling you — where did that come from? Or its analysis: How did it arrive at that?
You know, people said some of these things about Google when Google came along, and there was some criticism that Google was going to make us stupid. But one of the good things about Google is, at least you have these links. So you have to think about, where’s the information coming from? At least there’s something there in the UX [user experience] that kind of prompts you to think about the source of information. If you want to know more than just the few lines that the link displays, you have to click the link.
You don’t have to do any of that with a generative AI-based chatbot response, and I worry about that. I worry we’re just not going to think too hard about, where’d this information come from? So that’s one of the ones I’m concerned about.
The other is the tendency with these LLMs to think that writing and thinking are completely separate activities: “I’ll just give the system a few bullet points, and it will write the whole essay for me. It will do all the hard work of thinking through the arguments.” And I worry about that because, again, I think writing and thinking are not actually separable activities — that it’s really through writing that we refine a lot of our reasoning.
Very often, you find holes in your own argument when you actually go to write down something long form that you haven’t thought of when you just outlined a few bullet points. And, again, there are some tendencies that were already there with other technologies. PowerPoint, most famously, has this tendency to reduce things to bullet points, often with some negative repercussions. It’s why Jeff Bezos had famously kind of banned PowerPoint presentations at Amazon: because he was very worried about the kind of sloppy thinking that came in when you just had people delineate their thoughts in bullet points.
Shervin Khodabandeh: First of all, I agree with both of your points, and I think the first point you made about “Is it making us stupider?” — I think we’ve actually seen that documented.
BCG did one of the largest controlled studies, with MIT, Harvard, and Wharton, where we looked at a thousand-strong group of consultants who were given ChatGPT to work with. And in certain tasks that included broad creative thinking and ideation, they actually did better when they were told to use the tool versus those who weren’t, because it expands the realm of possibilities for many in terms of “How do you create a campaign?” or “How do you consider this from different angles?” But on things that consultants are supposed to excel at, like business strategy and problem-solving — “What is the best strategy for a company in Southeast Asia that’s trying to enter a market? Here are the revenues, here are the competitors, etc.” — GenAI didn’t really do as well. And in fact, what we saw in that group was a reinforcement of the wrong thing, because there was an overreliance, as you said, on something that doesn’t work. So for sure, we’ve seen it will take critical thinking away.
But I also feel like the paradigm change is that, before all these large language models, AI was a tool that would help us predict better or optimize or do things that humans couldn’t do. I think, as you say now, Jeremy, we’ve entered a phase where AI is doing that which humans could do too, and in many ways better. We think of it as more of a coworker than a tool. And so now you have a coworker, and just like with a coworker, you need to challenge them. And when you’re having a conversation, I find the best outputs I get from a GenAI chat come when I really engage with it the same way I would engage a young associate or a young apprentice. And you challenge them, and, often, it says, “Yep, I made a mistake. I’ll go deeper. I’ll go deeper.” And if you don’t take it, as you were saying, at face value, you get to the answer.
What sort of worries me in the long term is the first thing you said, which is, are we overusing it? What have humans invented that they haven’t overused?
Jeremy Kahn: Yeah, that’s a good question. I think there is that risk. That’s why, in the book, I talk a little bit about why I think there should be very specific design choices to try to prompt people not to overuse it. But, of course, the companies developing the product have no incentive, really, to get you not to use it, so it’s a bit of a problem. But it would be useful if, as part of the design, there was something that sort of prompted you to think about the source of this information. You just talked about using it as a kind of a junior colleague; it’s kind of interesting. I actually think one of the ways you can avoid some of these risks is if you don’t use it that way: So you do the initial drafting of the document or the proposal. Or, you know, your example of the strategy for the company: The human consultant would do that initially and then ask the system to act as almost like a senior colleague to critique it.
And I think actually, sometimes, you both get the benefits of this large knowledge base it can draw from. It does sometimes point out things that you’ve missed or that you didn’t think about, and you don’t have the de-skilling risk that you have when you do treat it as the kind of initial drafter of things, as the one who’s going to do the work of the junior colleague. That’s maybe a way that we should train people to use the systems: to think about it more as a coach or an editor or auditor of our work as opposed to the initial drafter.
Sam Ransbotham: You know, that’s interesting because I’ve noticed that my own use of a plain-old text editor has gone up … because I find that all the spelling correction and grammar correction is a distraction when I’m trying to think about what I’m thinking about. And so I’ve gone kind of old-school and bare-bones here. Maybe I’ll be using Vim before too long.
Jeremy Kahn: Yeah, yeah.
Sam Ransbotham: AI’s not going to go away, though. What can people practically do to avoid this? I guess … one suggestion is, draft it first. Are there other things we could suggest?
Jeremy Kahn: Yeah. I think draft it first; I think training people about what these systems are actually good at and what they’re not, so at least you have that awareness. That should be part of how employees are kind of trained to use these systems as copilots, as well as students, as this gets introduced into education — that they’re taught, what are the strengths and weaknesses of these systems, and what can’t they do? And show people some examples of how it hallucinates or how it can miss whole lines of argument, maybe, because they’re not represented as well in the data set.
Shervin Khodabandeh: I agree with everything you’re saying because I feel the same way.
Sam Ransbotham: Well, that’s no fun. Come on.
Shervin Khodabandeh: I mean, no, but … there was going to be a but because I have children in high school, and I see the benefits; I see the risks. Like my son was saying … I saw him on [ChatGPT]. I’m like, “What are you doing? You’re doing math?” But he says, “Look, I don’t understand how to solve this problem. It’s teaching me, and now I’m going to do it on my own, and then it’s going to critique me.” I’m like, “Great; that’s fine.”
But the point that it can take a significant amount of cognitive load away from humans, and sometimes that’s good and sometimes that’s dangerous — I think that point is not lost on people. My worry is that, in reality, it’s not going to happen because this is very early stages, right? We’re talking about something that’s two years old, that right out of the gate was great, is getting better; it’s improved by orders of magnitude, and it has some flaws.
But the other thing I see is … I mean, we’re talking about individual use of it, right? But when you think about [how] all of these investments are basically done for the enterprise, are being used by the enterprise, are being consumed by the enterprise, and that’s where billions and trillions [of dollars] are going to continue to go into it — I still think that’s in very early stages. And what I worry about there is that the gap between the haves and have-nots is going to increase significantly at the company level, right?
Like we had … you know, Sam, in our research we talked about the AI gap and the 10% who are getting significant value versus most who aren’t, and what do they do? I feel like that’s even going to narrow more because of the sort of exponential power that this new generation has. And I think that the challenge that many companies are going to have is also the challenge of rethinking their skill set and their talent.
And now that I could use AI to do things that a thousand people could do — maybe make it 50% more effective in a coworker mode — what kind of talent do I need? How do I hire them? I think the 10% that we talked about in our prior generation of AI, Sam …
Sam Ransbotham: You’re predicting that gets smaller?
Shervin Khodabandeh: Right. I think those 10% are going to be those who, in the long term, figure out the talent play and the skill play, and they become sort of new companies.
Jeremy Kahn: I think you’re probably right about that. One of the things I highlight in the book is the risk that thems that got shall get — that sort of thing. You know, those that already are sort of ahead and have the largest pool of data and have figured out how to use this technology well already are more likely to continue to pull ahead of others.
I think that’s true both within professions and across industries. And I do take your point about talent, but I think you need both. I think you both need a workforce that is going to be extremely flexible. I think that’s the lesson here. … I’m not a big believer in the idea that we’re going to see mass unemployment from AI; in fact, quite the opposite. But I do think most people’s jobs are going to change, and you have to be flexible.
And I think the companies that have the most resilient and flexible workforces in some ways will do the best because I’m talking about how these systems work right now, and they might work very differently in 20 years. Absolutely, they will. And if you’re just starting your career, I think you have to be prepared to be riding this wave of this evolution of this technology. Today, it would be helpful to you to have quite a lot of prompt engineering skills, the way the systems work right now.
Shervin Khodabandeh: That’s right.
Jeremy Kahn: I don’t think that’s going to necessarily be true in 10 years because I think we will have perfected away some of that — optimized away some of the need for that.
Shervin Khodabandeh: When I was listening to you, Jeremy, my mind went not just to workers being reskilled or retrained, or to new talent coming in, like companies hiring people they normally wouldn’t hire because they have spikes in things that AI doesn’t have, but to the younger generation. And I was going to ask you, Sam, as an educator, as a professor: What do you think we could do in schools now for these young people? You know, when we went to school, if we thought deeply, memorized a lot of things, read a lot, did a lot of math, and exhibited hard work, we pretty much did well, and I think …
Sam Ransbotham: Spoken like a true engineer.
Shervin Khodabandeh: OK, fine. For science and engineering fields, that was the case. And what do you think this is going to mean for high school and college education five to 10 years from now?
Sam Ransbotham: Yeah. Shervin and I have kids approximately the same age, and I deal with university kids all the time, and I worry about this advice to be flexible. I mean, not that I disagree with it, but I wonder if it has about the same amount of substance that “buy low and sell high” does, you know? I think no one’s going to take the counterargument that “Oh, yeah, you should be more rigid and less flexible,” but how do we get them to do that? I mean, I have some techniques that we use in class, but, Jeremy, how do we go beyond sort of a “Yes, be flexible”? Is there something that people should do? Someone’s listening right now: What should they do tomorrow to be more flexible?
Jeremy Kahn: That’s a very good question, and I’m not sure I have really great answers for that, except that I think people should have to change fairly frequently. I think the only way to become adaptable is to actually practice adapting. I think some jobs are naturally more like that than others. Some have kind of rote processes or ways of doing things that they don’t change very often, or a person’s not asked to change very often in what they do, and I think some others have to kind of learn on the fly. You have to constantly evolve. Maybe you’re facing different challenges or questions every day. And I think for students, you could actually force them to learn different ways at different points … you know, even year to year or term to term. But I think that sort of thing, like building that into the educational system, would be a way to teach people to change and to adapt. I don’t really know other ways of doing it.
I spent some time talking to folks at NASA about how they try to teach astronauts to work alongside automated systems. In the book, I was talking about this issue about vigilance. If you’re constantly just putting the person in this mode where all they have to do is kind of babysit an automated system, humans are really terrible at it … and NASA tried all sorts of different ways to get people to be better at it. And they ultimately couldn’t get people to be better at it, except by introducing faults at random intervals so that people were constantly on guard, which is an interesting insight. But I think maybe we have to do something similar in terms of training students or the workforce, where you kind of artificially introduce disruption and change in order to teach people to get used to disruption and change.
Sam Ransbotham: That’s an interesting idea. Actually, it reminds me … I think you had an example in the book about an experiment at Georgia Tech — which I, of course, noted because that’s my background — but people followed robots that were clearly demonstrating that they were making mistakes over and over again, and … they still followed the robot, even after the robot was exhibiting these mistakes. I thought that was pretty fascinating.
One of the things you’re saying is, it’s incumbent on educators and the educational system to introduce doubt in these systems and introduce the idea that they will fail. That makes a lot of sense, and I guess if NASA does it, then I believe it.
Jeremy Kahn: I think we could draw a lot of lessons from both the space industry and the aviation industry, which have worked alongside automated systems of various kinds for decades. I think one of the most undervalued aspects of this is what they call human factor engineering or human-computer interaction. I think those things are really going to come into prominence.
Sam Ransbotham: One of the things about interacting with a device versus interacting with people is this empathy question. And, in some ways, interacting with a device makes us less empathetic.
Jeremy Kahn: Yeah. I speak a lot about empathy in the book. It is, ultimately, I think, the thing that is uniquely human, in the fact that we can relate to one another based on our own lived experience and relate to others’ lived experience. And the problem with AI technology is, it has no lived experience. And chatbots can mimic empathy pretty well; they can sound sympathetic, empathetic, but it’s not authentic. It’s ultimately kind of a fraud. The empathy isn’t real. And I think we need to keep in mind this distinction between real empathy and its imitation.
And I worry, particularly with people using chatbots as kind of substitute friends, as kind of companions, that this is another kind of de-skilling of an essential human skill — and never [having] to talk to anybody else and never [having] to deal with real humans, who are much messier creatures to deal with.
Particularly, there’s some interesting studies about men who have been left alone with Alexa, the Amazon digital assistant, in an environment where they didn’t realize they were being recorded in any way, and they were really abusive to, particularly, the female-voiced AI Alexa assistant. Very quickly, things descended into kind of misogynistic language and talk. And then there were some studies of those men in their interactions later with real people, and they carried over, sometimes, some of those habits of dialogue to real relationships, and I think that’s dangerous.
And people were very worried about that with kids, actually. It’s one of the reasons there was actually some pressure put on Amazon to introduce a mode to Alexa that a parent can turn on, where Alexa will only respond if the child is polite and says “please” and “thank you.” Again, one of those things we might want to consider in other AI systems is, are we using these systems to kind of prompt us, as humans, to be polite in our interactions?
Sam Ransbotham: Let me transition here. We’ve got a series of rapid-fire questions. What’s the first thing that comes to your mind? What do you see as the biggest opportunity for AI right now?
Jeremy Kahn: I think drug discovery is maybe the biggest opportunity.
Sam Ransbotham: Yeah. We’ve certainly seen some examples of that with COVID. I hope we don’t have another, similar opportunity coming up. What’s the biggest misconception that people have about artificial intelligence?
Jeremy Kahn: That we’re all going to lose our jobs.
Sam Ransbotham: Yeah. The whole fearmongering of the job loss. What was the first career that you wanted?
Jeremy Kahn: The first career I wanted?
Sam Ransbotham: Yeah.
Jeremy Kahn: So I wanted to be an archeologist, at one point.
Sam Ransbotham: Yeah. So that actually really ties into your desire for data provenance, then. You can make a connection between the roots of data and the roots of … anyway. All right. When is there too much artificial intelligence?
Jeremy Kahn: I think there’s too much if it’s doing things where you want human emotion and connection. I think there’s a great example with that Google Olympics ad, where the dad in the ad had his kid use AI to write a fan letter to their hero, and there was such backlash against that, I think rightfully so. That’s something where you want the kid to do it themselves; that’s this connection with their hero.
And there’s a lot of learning that takes place in the writing of that letter. And that’s one of these fundamental human experiences that’s based on connection between two people. You don’t want AI doing that.
Sam Ransbotham: Yeah. I mean, this is why we have the machines: to do these other things; not to do the fun parts. Is there one thing you wish that artificial intelligence could do right now that it can’t?
Jeremy Kahn: Yeah. I would like it to actually do my shopping for me, which it can’t quite do. I think it’s going to very soon be able to do this. I’m really looking forward to it.
Shervin Khodabandeh: Jeremy, it’s been wonderful having you on the show. Thanks for coming on and for inspiring a lot of deep thoughts about the future.
Jeremy Kahn: Great. It’s been fantastic to be here.
Sam Ransbotham: Hey, Shervin, that was fun. It was a different conversation. We often focus on some of the positives from artificial intelligence, but I think Jeremy [shared] some good risks that didn’t [veer toward] crazy, existential, “robots are going to take over the world” types of risk but some very subtle “how AI is changing our behavior” types of risks. And I think that’s interesting to think about. What do you think people could learn from what Jeremy said?
Shervin Khodabandeh: I mean, just like any piece of technology, you cannot treat it like a black box. The users have to have a certain level of understanding of: What are the limitations? What are the risks? Where can I not worry about the output, and where are some areas where I need to sort of probe and push and go a bit deeper?
Sam Ransbotham: I think a certain level of understanding is the problem, though. What is that certain level? I mean, I don’t think we’re asking people to understand how, I don’t know, silicon transistors work. That doesn’t seem necessary for understanding the modern world of AI. On the other hand, we’re pushing back pretty hard against the idea of just accepting everything AI says, so I think finding that middle ground seems tough. Do you need to be able to make a model yourself? Do you need to be able to use a model? Where’s the line?
Shervin Khodabandeh: I don’t think the understanding that I was talking about is a technical understanding, but it is a tool that has limitations. And just like when you’re driving a car, you need to have some understanding that when you’re driving 30 miles an hour and you apply the brake, you’ll stop much sooner than if you’re driving a hundred miles an hour.
Sam Ransbotham: And if it’s wet, that’ll change that equation.
Shervin Khodabandeh: And it will change that equation, depending on the condition of the road and your tires and all these things. I mean, in reality, some people learn those the hard way. And that’s why they teach these things in driving schools, and they teach people what to do in case of a skid. I think that’s sort of the analog. So while I think … first of all, I don’t think the conversation was necessarily dark. I think there’s just so much overexuberance over AI that a little bit of “back to reality” will be good.
Sam Ransbotham: That’s a better way of phrasing it.
Shervin Khodabandeh: I think a better way to paint [this], maybe even — which is not something Jeremy did — but [imagine] the most extreme potential negative situation to create that dialectic [so] that you could then start thinking of it the other way.
Sam Ransbotham: Switching to your tire example, I think that’s a good one. To me — and, I think, probably you’re the same, with your engineering background — you’re thinking about friction, you’re thinking about coefficients of friction that change as the surface changes. I’m in the middle of teaching our 16-year-old how to drive, and there’s so much that you take for granted that I had forgotten about.
Shervin Khodabandeh: Yeah. I mean, honestly, so much of my driving, even now, I think about things that I was taught, like when the conditions change. Most of them, thank God, are things that I haven’t needed to do myself experientially, but I was told: “It’s raining. There’s a thin film of oil. You will not see it or feel it unless it’s too late, so slow down.”
But that’s my point. When you’re driving, you’re not thinking about those things. But you do know that “Hey, it started raining. I’m going to slow down.” That’s what most people on the freeway do as soon as it starts raining a little bit because they’ve been told, “You just have to slow down because there’s going to be a film of oil, and it’s going to be dangerous.” And so …
Sam Ransbotham: You drive different places than I do, I think.
Shervin Khodabandeh: Well, what I was going to say, though, is that I still think “be adaptable” is a goal and an outcome we drive to, but the question is, how do you become adaptable? And I think it starts with education. And I think that, just like with any tool, just like when they taught people how to use spreadsheets, they said, “Look, you can have the totals, but then you need to have some kind of a check, and if you don’t, you might have a great spreadsheet, but then it’s going to make a mistake.”
And if you’re linking spreadsheets … I do feel like that is the piece that’s going to unlock this level of adaptability and caution, etc.: demystifying it and teaching it, by having multiple situations of an answer gone wrong, of an overreliance creating a really unrealistic outcome from AI. I mean, there are many YouTube videos where there’s a conversation with ChatGPT, and it’s being convinced that two plus two equals five.
Sam Ransbotham: Going back to your education point, I think that was a good one, because to be able to do that, to be able to work in these ways, is going to require education. And I think most of our discussion has been about individual-level interaction, but what do we think about companies and how they should be educating? We’ve had guests — for example, Katia Walsh at Levi’s had a boot camp; Michelle McCrackin at Delta, they also had a boot camp. What kinds of things are people needing to do to promote that education, and what kind of education should it be? So boot camp is one answer, but even then, I’m not sure what the boot camp should have in it.
Shervin Khodabandeh: I don’t think the answer is a simple “Let’s everybody have a boot camp.” I think the answer is “Let us segment the types of use. Let us segment the context. Let us identify the various zones where GenAI is going to be leveraged in our company.” There are some no-fly zones where, strategically, it’s not going to happen. There are some places where it has, maybe not carte blanche, but areas where the risks are so low that it doesn’t really matter.
Sam Ransbotham: Fraud detection.
Shervin Khodabandeh: Yeah. You don’t want to use it for fraud detection, right? Exactly. And then there are areas where it is, as always it’s been, human in the loop, human and AI. And now, “What can go wrong?” and then, “What are the guardrails?” and “What is the proper education for the type of user and the type of situation?” I think that’s the kind of curricula that’s needed.
Sam Ransbotham: What I liked about what you said was this pushback against … we hear a lot about boot camps, and I don’t want to be negative about boot camps, but they strike me as lacking some of the personalization that the tool offers. We have a variety of cases, a variety of ways — you’ve listed a bunch of them — that we can use these tools, and the monolithic boot camp or the sort of classic corporate education doesn’t really feel right here.
I’m sure you have to do these things in your organization. I have to do them too. I don’t want to complain about them too much; I understand why many of them exist. But many of these corporate education tools are not particularly tailored to what you know.
Shervin Khodabandeh: Yeah, but look; I mean, this dates me as a person.
Sam Ransbotham: It’s OK; you’re in a safe space with me.
Shervin Khodabandeh: But I do think about spreadsheets, and I do think about the education I went through as a young consultant learning how to get the best out of spreadsheets because …
Sam Ransbotham: Right. You had to miss a total at some point to get that right.
Shervin Khodabandeh: And how to use a formula, or how do you do it better or faster? And as the tool evolved, then [there was] more education and more education. And I do actually think those things help.
Sam Ransbotham: Did you get to be a guru of spreadsheets by classes or by doing?
Shervin Khodabandeh: By both. But the classes really helped, and talking to people who were in those classes really helped. And then it depends on what you’re using it for. It wasn’t the same for everyone.
I mean, I think most consultants, when they start this, they will be in the world of numbers and spreadsheets, and they check and debug. And now the thing is so advanced that it’s debugging itself and it’s telling you, “Are you sure you want to do this?”
But I do think that, right now, this is a black box, and I think there is such a big variance within an organization in people’s understanding of the risks and benefits, and what it can do and what it can’t do, and is it going to replace me or is it not going to replace me? There’s just so much noise and misunderstanding that the only way to break through is through really thoughtful design of content, and education that’s companywide. And I agree with you that, for most corporations, the way they deal with this right now is by creating …
Sam Ransbotham: The monolithic package.
Shervin Khodabandeh: Right, exactly. Well, creating a monolithic package for education. But I don’t think we have yet developed the right curricula at scale for the learning and development of the enterprise. I mean, many companies have done it. We have done it at BCG because we use these tools all the time and we need to know. But for a typical company, for a non-digital-native company, I don’t think these curricula exist. And I think that this is a really big concern for the chief human resources officers of these companies — to put in place these kinds of curricula.
Sam Ransbotham: Yeah, but it seems like it’s time, because by analogy … I liked your spreadsheet analogy because spreadsheets are what opened up the world of computers to most people. They’re part of the killer app that opened up the use of personal computers for most people, and I expect that large language models are going to have that same sort of effect with artificial intelligence. That’s the gateway, the inroad, for most people to artificial intelligence. And you’re right; I think most companies don’t have good curricula for that.
We’ve got a bonus episode coming up in a couple of weeks talking about exactly this: talent in the world of generative AI, so everyone can pay attention to that.
Thanks for listening today. Next time, Shervin and I meet Rebecca Finlay, CEO of the Partnership on AI. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.