The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape, looking specifically at how AI is affecting the development and execution of strategy in organizations.
An associate professor at Harvard Business School and cofounder of the Customer Intelligence Lab at the school’s Digital Data Design Institute, Ayelet Israeli focuses on how data and technology can inform marketing strategy, as well as how generative AI can be a useful tool in eliminating algorithmic bias. One product of her recent work is a paper, coauthored with two economists at Microsoft, on how generative AI could be used to simulate focus groups and surveys to determine customer preferences.
Ayelet joins the Me, Myself, and AI podcast to discuss the opportunities and limitations of generative AI in market research. She details how the research was conducted and how artificial intelligence technology could help marketers reduce the time, cost, and complexity associated with traditional customer research methods.
Transcript
Sam Ransbotham: How can using generative AI help us understand consumer preferences? On today’s episode, hear from a professor about her market research study.
Ayelet Israeli: My name is Ayelet Israeli from Harvard Business School, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Hi, everyone. Today, Shervin and I are thrilled to be joined by Ayelet Israeli. She’s associate professor and cofounder of the Customer Intelligence Lab at the Digital Data Design Institute at Harvard Business School. Ayelet, thanks for taking the time to talk with us. Let’s get started.
Ayelet Israeli: Thank you so much for having me.
Sam Ransbotham: Often, we begin by asking guests their professions. But what’s nice about being a professor is that people kind of have an idea of what that means. But I still think it’d be nice to hear a little bit about your background and bio. So can you take a minute and introduce yourself and tell us what you’re interested in?
Ayelet Israeli: All right. I’m a marketing professor at Harvard Business School. I’m really interested in how we can better leverage data and AI for better outcomes — whether those are outcomes for firms, for customers, or for society at large. Some of the work I’m doing is around gen AI and how firms can use it to gain better access to consumer information and preferences. In other work, I think about how we can eliminate algorithmic bias in our decision-making.
Sam Ransbotham: I saw your talk a few months ago about using generative AI, and it really struck me as interesting because lots of people are talking about generative AI, but we don’t have a lot of evidence yet.
Ayelet Israeli: Mm-hmm.
Sam Ransbotham: The evidence … is not saying it’s not there, but it’s just forthcoming. But you’re starting to get some evidence through this research that you’re doing. What can we do with GPT and generative AI in market research?
Ayelet Israeli: Two of my colleagues at Microsoft, Donald Ngwe and James Brand, and I started thinking about whether we could actually use GPT for market research. The idea was, some people have shown that you can replicate very well-known experiments, including the famous Milgram experiment, using GPT by just asking it questions. And we were thinking, “We work so hard as researchers and as practitioners to better understand customer preferences; maybe we can use GPT to actually extract these kinds of preferences.”
For large language models, the idea is that they will give you the most likely next word. That’s how language is produced. And we were thinking, “Maybe if we ask GPT or induce it to make a choice between two things, maybe the response, which is kind of the most likely next word, will actually reflect the most likely responses in the population. And in that sense, we will essentially query GPT but get kind of the underlying distribution of preferences that we see in the population.” And we started playing around with that idea. We focused on consumer products — because we assumed that the data that GPT is aware of is mostly around consumer products, maybe from review websites or things like that — to see, can this idea actually work?
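The repeated-query idea Israeli describes can be sketched in a few lines of code. The example below is a minimal illustration, assuming the OpenAI Python client; the model name, prompt wording, and sample size are invented for this sketch and are not the prompts from the paper.

```python
# Hypothetical sketch: induce a forced choice many times and read off the
# distribution of answers as an approximation of population preferences.
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "A customer is choosing between two tubes of toothpaste: "
    "Option A costs $4 and contains fluoride; Option B costs $3 "
    "and does not. Which does the customer buy? Answer 'A' or 'B' only."
)

def sample_choices(n: int = 200) -> Counter:
    """Query the model n times and tally the simulated choices."""
    tally = Counter()
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,      # keep sampling noise so answers vary
        )
        tally[response.choices[0].message.content.strip()] += 1
    return tally

print(sample_choices())  # the A/B split approximates a preference share
```

Dividing each tally by the number of queries gives the kind of preference share that would otherwise come from a survey panel.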
Shervin Khodabandeh: And does it?
Ayelet Israeli: Kind of!
Shervin Khodabandeh: That’s wonderful. So tell us more.
Ayelet Israeli: Our first pass was, “OK, let’s see if it can generate very basic things we expect from economics. Like, when the price is higher, does it know to reject an offer? Does it know to make this trade-off between price and choice?” And we do see a downward-sloping demand curve, which is what you would expect, when we query GPT thousands of times to get answers. We also see things like, “Oh, we can tell it something about its income, and it reacts to that.” When it has higher income, it’s less price-sensitive, which makes sense — it’s what we expect from people as well.
We also see that it can react to information about itself: “Oh, last time you bought in this category, you bought this particular brand” makes it much more likely to pick this brand in the future. So those are our tests of “Does it actually react in a way that humans would in surveys?” Then we took it one step further and tried to get willingness to pay for products or for certain attributes. We compared the distribution of prices it produced to the distribution of prices we see in the marketplace, and they’re pretty consistent.
And a really interesting and exciting thing for us was the ability to look at willingness to pay for attributes, because it’s something that we all, as marketers, want to find. In our example, it’s toothpaste, and we’re trying to figure out how much people are willing to pay for fluoride, which is something that’s difficult to articulate. If someone asked you that, you’d say, “I don’t know. I do know that I prefer to buy this toothpaste, but I don’t know what the number is.” So it made us curious to see whether GPT could provide us this number in the same way that we ask consumers. And as researchers have shown over the years, the best way to ask these questions is through conjoint studies. Essentially, you provide people with 10 to 15 choices, and through their different choices, you are able to understand the trade-offs that they’re making and actually quantify the premium that they’re willing to pay.
We essentially did that. We did a conjoint-type analysis with GPT, and we compared the outcomes to human studies run for a forthcoming paper and got pretty similar results, so we were very excited about that. Of course, the results are not identical. We need to do a lot more to figure out where some of the issues are and how much this generalizes, but just the fact that we were able to get it is incredibly exciting.
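For readers curious about the mechanics, willingness to pay is typically recovered from conjoint-style choice data with a discrete-choice (logit) model: an attribute’s dollar value is its coefficient divided by the negative of the price coefficient. The sketch below runs on simulated data invented purely for illustration; it is not the study’s data or code.

```python
# Minimal conjoint-style WTP sketch on simulated choice data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
price = rng.uniform(2.0, 6.0, n)      # price of the option shown
fluoride = rng.integers(0, 2, n)      # 1 if the option contains fluoride

# Simulate respondents with a "true" utility: they value fluoride at
# 1.5 / 0.8 = $1.875 (attribute coefficient over price sensitivity).
utility = 1.5 * fluoride - 0.8 * price + rng.logistic(size=n)
chosen = (utility > 0).astype(int)    # 1 if the respondent took the option

X = sm.add_constant(np.column_stack([price, fluoride]))
fit = sm.Logit(chosen, X).fit(disp=False)
b_price, b_fluoride = fit.params[1], fit.params[2]

# WTP for an attribute = its coefficient / (-price coefficient).
print(f"Estimated WTP for fluoride: ${b_fluoride / -b_price:.2f}")
```

Running the same estimation on GPT-simulated choices and on human choices, then comparing the recovered dollar values, is the kind of comparison the paper describes.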
Sam Ransbotham: So it seems exciting for firms because I’m guessing that the cost of doing a market study on a lot of people is much more than doing it just through a bunch of API calls with ChatGPT. That has to be the appeal. Are there other appeals?
Ayelet Israeli: Basically, these types of studies are time-consuming, costly, and complex. Ideally, you would like to ask people to make a lot of trade-offs, but you’re limited by the human ability to do that. With GPT, you can query it a lot of times. But at this point, I’m not going to tell anyone, “Replace all your human studies with GPT or with another LLM,” because there is a lot more work to be done to figure out how to do that right.
One of the things around GPT is that it’s pretrained. It will give me preferences, but these preferences are relevant for the time period in which it was pretrained. And a firm wants to know, “What are the customers interested in right now?” So that’s kind of a limitation.
What we’re testing now is, maybe we still have to query people, but fewer people than you would normally need. Usually when you run these studies, you need thousands of users to get something robust and statistically significant from an academic or statistical standpoint. We’re trying to see whether we can collect information from far fewer humans, combine it with an LLM through fine-tuning, and generate something useful. But really, the big advantages would be cost savings and time savings.
Sam Ransbotham: The time was a big one.
Ayelet Israeli: Yeah. And we’re talking so far about consumer products, but you can think about business-to-business type surveys, which are way more expensive and harder to do. So perhaps there is potential there as well. We haven’t tested that yet.
Shervin Khodabandeh: I love the idea. I mean, when you think about most use cases for generative AI, there’s a lot about taking drudgery out of the work or creating images and content and summarizing text. And then there’s more-advanced ones around planning and inventory management. But the one you’re talking about is literally replacing humans with this, right? I mean, that’s basically what it is.
And it’s the beginning of something that could be quite interesting, because you’ve proven, at least, that it’s sort of rational, right? I mean, you’re asking it all these questions, and it is economically, I guess, rational. But then, as you know yourself as a marketer, not all marketing strategies are based on rationality. In fact, many of them are based on completely irrational desires.
Ayelet Israeli: Right.
Shervin Khodabandeh: What are your thoughts on the nonrational choices that many people make that create these big brands and $20,000 handbags and all kinds of stuff like that? How do you tap into that?
Ayelet Israeli: Before I answer your question, the first thing I was nervous about as an academic is when you used the word proven.
Sam Ransbotham: Prove — I heard it!
Ayelet Israeli: I see Sam is …
Shervin Khodabandeh: I smiled when I said it.
Ayelet Israeli: I would say we showed evidence consistent with that. And we also know that these models are still evolving, and maybe something we showed a month ago will not be relevant in a month from now, which is also a reason why you shouldn’t just go and implement it without testing. So I want to be careful about that.
Shervin Khodabandeh: Yes.
Ayelet Israeli: So you know there is the more rational view of what a product is, but brands have value that is created in ways that are hard for us to measure and quantify. But that’s almost like the example I gave with fluoride: We don’t know how to quantify fluoride. You might find it difficult if I asked you, “Oh, how much are you willing to pay for a brand name like Colgate versus a toothpaste that I just made up?”
Actually, the same kind of conjoint study is able to infer those differences. And we see preferences, for example, for a Mac over a different computer type. So it’s already embedded in there, in a way.
Now how accurate it is — it’s an empirical question.
Shervin Khodabandeh: Yeah, no, you’re so right, because as I heard you respond to this question, I also realized that what you showed some evidence for — vis-à-vis proven — isn’t necessarily rationality. It’s that it has an ability to sort of encapsulate what most people do — or what many people do — which is embedded in the stuff it was trained on. So then my second question is, how do you get this to be more segmented, more specific, more nuanced? Because when you do focus groups, you’re looking maybe for a particular flavor, a particular nuance mix.
Ayelet Israeli: Yes. A lot of the uses we saw when GPT and other LLMs were first introduced — a lot of the excitement — was, “I’m an engineer. I can just ask it a question. It gives me the most common thing. That’s exactly what I want.” What we are doing is actually the other side of that. We don’t want the most common thing. We want to understand the distribution.
That’s why, when we query GPT, we ask every question many, many times — because we want to get many, many different consumers. In our analysis, we only varied income and what you bought before. But we can, in the same way, vary gender, race, age, anything else that you want. And I’ve seen other researchers do that. …
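As a concrete sketch of that kind of persona conditioning — the personas, wording, and model name below are invented for illustration, again assuming the OpenAI Python client — you can hold the choice question fixed and vary who the model is told it is:

```python
# Hypothetical sketch: vary a simulated consumer's attributes via the
# system prompt, then re-run the same forced choice for each persona.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "You are a 65-year-old retiree on a fixed annual income of $30,000.",
    "You are a 30-year-old software engineer earning $150,000 a year.",
]
QUESTION = (
    "You last bought Colgate toothpaste. Today, Colgate costs $5 and a "
    "store brand costs $3. Which do you buy? Answer 'Colgate' or 'store brand'."
)

for persona in PERSONAS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
        temperature=1.0,
    )
    print(persona, "->", response.choices[0].message.content.strip())
```

Repeating each persona many times, as in the earlier sketch, turns single answers into per-segment preference distributions.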
There is a really interesting paper by colleagues at Columbia and Berkeley that used GPT to create perceptual maps — how close two brands are to each other. And they also showed differences by gender and age and things like that around cars, which is a market where we expect to see these differences. So you can definitely do that, too, in a similar way. It has also been shown in political science: I can give someone an ideology, and their voting behavior makes sense; their text generation on different topics makes sense. That’s also very exciting for a marketer who cares about heterogeneity and understanding the differences between consumers.
Shervin Khodabandeh: Yeah. If only we could use this for clinical trials.
Ayelet Israeli: I saw some paper on better bedside manner of LLMs relative to doctors, so maybe there’s still something there. [Laughs.]
Sam Ransbotham: That’s GPT-5, maybe.
Ayelet Israeli: Yeah.
Sam Ransbotham: As you’re saying that, though, I think about the way these work is a probabilistic estimate of the most likely next word, the most likely next … and you’ve segmented out “Given that you are low income, high income, given that you are this attribute, that attribute …” That’s interesting, but where do we come up with the weirdness, then? If everything is based off the “most probables,” particularly from predefined [parameters] — not that you’re not brilliant about coming up with a nice search space, but how are we going to find the things we don’t know, then? Isn’t that something that comes out of market research and focus groups?
Ayelet Israeli: Certainly, and that’s part of the challenge. Obviously, GPT learns some kind of distribution, but let’s say all it learns from is reviews. There might be a lot of very extreme consumers who don’t write reviews online or don’t have access to the internet but have these interesting extreme ideas. And even if I tell GPT, “I want [as much] randomness as possible, very high variation,” I will not get to those people. So that will definitely be a problem.
I already know of some startups that are trying to solve this issue and identify these extreme consumers, and then take it to the next level by using LLMs to predict what those consumers will do in another case. But at the same time, there has been some work on [the] creativity of GPT showing that it creates very creative ideas — which, you know, is not exactly what you’re asking for.
Sam Ransbotham: Some of those creative ideas are unconstrained by reality. I think we’ve all seen some of it, [like] the way that it plays chess and decides that that rule is a little bit too confining.
Ayelet Israeli: Right. So that’s also the problem of hallucinations, which should be tested in different contexts. But I think the way we induce it to make a choice is less prone to hallucination problems, because it’s picking from options we provide and you’re not asking it for facts or anything like that. I’m not trying to say that GPT will outperform any customer survey or anything like this. All I want to see is whether it is as good as humans.
And even with human customers, we have to work really hard to find people to do these surveys, and sometimes we miss them. Even without AI — with just human conversation — we might be able to get the distribution of most people but still have to work hard on the extremes.
Shervin Khodabandeh: What I find really interesting here is, you said something like, “It’s not as good as a consumer survey,” and now I want to challenge that. With other AI or gen AI use cases, there is a sort of burden of proof: “OK, I’m a human. I’m an engineer. I have a task. Let’s ask GPT” — or any generative AI system — whether it can do the task as well as a human does. Say it’s knowledge work: Can it code better than a human does? Can it create a video or a document that you would read and say, “Wow, this is nice. It could do it; I don’t need to do it”? Right? So that sort of burden of proof is very clear.
On this one, I’m not so sure that you even have to have a burden of proof, because in many ways we’re assuming that a focus group of 500 or a thousand people, or any survey — I mean, there’s no focus group that big that I know of — but a survey of that kind is somehow gospel or, like, that’s like what GPT or whoever, whatever —
Ayelet Israeli: Can you talk to the reviewers of our paper? [Laughs.]
Shervin Khodabandeh: Because the reality of it is, if you think about it … so go back. Your premise here is, “We are going to save so much money on all this market research by augmenting this with that,” which is a true premise, for sure. But I also find the burden is lower. Even if you don’t stop a single human-based market research study or survey, you’ve still added a ton of value by broadening the universe of responses and options.
Because I would argue, how do you know 1,000 or 2,000 people are representative at all, or that they capture all those nuances? This thing is actually bringing in signals that for a fact exist — because otherwise they wouldn’t be in there. And I find that actually quite inspiring to a marketer. I’m happy to talk to your reviewers.
Ayelet Israeli: I think, as academics, we are used to a certain level of rigor and robustness — the ability to actually prove things — and the fact that this tool can provide a simulation of something is nice, but “Can it actually replace humans?” is a higher burden, because of this question of, is it actually giving me meaningful, updated responses? Will it match something? And you’re saying, “Well, maybe humans aren’t that great in the first place, so why do we try to … ?”
Shervin Khodabandeh: No, I’m actually making a different point. I was trained as a scientist, and I get that the burden of proof is much higher in science and in academia. And I wasn’t trying to argue that you’ve proven that this replaces humans. I don’t think it’s replacing humans. What I was trying to say is that the value of this is that it dramatically augments the signals and insights and ideas available to a marketer, because every survey or focus group is by definition limited, and this isn’t limited in the same way, because it’s got everything that’s there. So my point is simply not that the burden of proof has been met but that I don’t even know if there should be that kind of burden of proof, because it is addressing a limitation of focus groups and traditional research. It doesn’t necessarily need to replace them. They’re not perfect to begin with. Nobody would argue with that.
Ayelet Israeli: Yeah. I think, at the very least, I feel comfortable saying that we showed that it could be very informative about preferences and what is going on, at least within the data it’s trained on. And that could already change a lot for a lot of firms, given the type of research and the problems with market research and access to humans and all of that. For sure.
Sam Ransbotham: So there’s multiple different signals coming in here, and I think we’ve addressed this first from the idea of, does this signal replace the other signal from a focus group? But the dependent variable here might be, do people actually buy a product? Do people buy the fluoride? Do they buy the [fake] product?
Ayelet Israeli: Right.
Sam Ransbotham: And if this signal adds some information to that prediction, then we’ve got a new information source. If it completely supplants it, then we have a different thing.
Ayelet Israeli: Right. And now we’re getting to the problem with these surveys: stated preferences versus revealed preferences, which are based on what people actually do. Now, I would argue that GPT might have less [of a] problem than humans because it’s not subject to things like experimenter bias or trying to appease me. So it’s probably giving me something closer — but still likely closer to stated preferences, if it draws its data from review sites or market research, and not necessarily [giving me] what people would actually buy. But that is also true of focus groups and surveys.
Sam Ransbotham: So we think about this as a new source of signal — that there are lots of different signals out there, and it has some overlap, perhaps, with one signal. And I think that itself is fascinating, but it may also have a new signal.
Ayelet Israeli: Yeah.
Shervin Khodabandeh: The other thing that I find fascinating here is that AI solutions have been trained on data, and then, when they’re put in production, they get feedback from data in production and get better. With generative AI, so much of that feedback also needs to be human-driven rather than data-driven, right? Like, this is what it tells you to do. Does it resonate with you? Yes, no, etc. So it also feels like this is the kind of technology where generative AI can be a user of another generative AI’s output.
So let’s go to the paradigm of, look, it’s replacing a human in the focus group — or it could also replace a human in the company, the marketer dealing with a response from generative AI on, like, “How do you design a campaign for this?”
Ayelet Israeli: Mm-hmm.
Shervin Khodabandeh: And so this idea of maybe multiple generative AI agents going at each other to improve the overall quality — what do you think about that?
Ayelet Israeli: I think it’s an interesting idea. But I also think that the evidence so far suggests that you still need, at some point, at least one human in the loop …
Shervin Khodabandeh: For sure.
Ayelet Israeli: … because of all of these hallucinations, unrealistic things that come out. But certainly, if these models are getting better and better — more efficient, higher quality — then why not? As we implement these types of things in our organizations, we also need to think about how we — I don’t know if the word is exactly validate — how we ensure that the process still makes sense and that we’re not just wasting everyone’s time with these agents talking to each other.
Shervin Khodabandeh: No, for sure. You’re 100% right. You need humans in the loop, probably for many decades at least. But you may not need so many of them. If you have some kind of output that is supposed to be helping, let’s say, a group of 20,000 customer service reps, and it’s going to get better based on feedback from their usage in a pilot of, let’s say, three months, maybe you don’t need to pilot it with 5,000 people. Maybe you could pilot it with a hundred people plus two or three different gen AI agents, and you dramatically accelerate the adoption time.
Ayelet Israeli: Yeah, that’s cool.
Sam Ransbotham: Although I have to say, when I heard you saying that, Shervin, it made me think of when people hold a microphone too close to a speaker and we get these feedback loops — amplifying feedback loops. I do worry that if the two sources of data are too co-aligned, we’ll get squelched.
Shervin Khodabandeh: That’s true.
Sam Ransbotham: We won’t get craziness. Skip to the back of the chapter here: Give us the answers. People are listening to this, and they’re working in companies, and they have these tools available right now — not 20 years from now, the way we academics tend to think. What should people be doing right now with these tools?
Ayelet Israeli: Play around with them. Figure out what you want to know about your customers. We provide in our paper a whole list of prompts showing exactly how to prompt for these types of things and start getting this information. And like Shervin said earlier, what is it exactly? We’re not sure, but it’s a signal. There is information there that we can start finding out, right?
Sam Ransbotham: And so by playing with it, that helps people discover what information is there?
Ayelet Israeli: I think testing and discovering. But starting with a concrete question is really helpful, because otherwise you’ll go down so many rabbit holes. You can have these conversations forever.
Shervin Khodabandeh: Ayelet Israeli, you’re the only guest we’ve had that has the “AI” initials, which nicely fits into Me, Myself, and Ayelet Israeli, which is Me, Myself, and Myself.
Ayelet Israeli: [Laughs.]
Shervin Khodabandeh: But tell us more about yourself and your background and how you ended up where you are and what got you interested in all this stuff.
Ayelet Israeli: Sure. As my last name might indicate, I’m originally from Israel. Israel is known as “Startup Nation.” And when I came to think about what I wanted to study in university, there was a special program geared toward improving Startup Nation by giving people managerial tools. It was a combined program: a bachelor’s in computer science and an MBA in five years.
I started doing that, and I liked computer science. I actually majored in finance and marketing, but I was especially interested in marketing — particularly in making sense of a lot of data in a context that is so fun and applied. And then I decided to get a Ph.D. in marketing.
Over the years, I figured out that consumer products — things around customers and transactions — are what interest me. It’s just a fascinating world, and there’s a lot of data around it: As we move more to online and digital, we can see more and more data. And then the question is, “How can we actually leverage that data more efficiently and also in a responsible manner?” — which is what part of my research is about as well.
Sam Ransbotham: So we have a segment where we’ll ask you a series of rapid-fire questions to put you on the spot. Just answer the first thing that comes to your mind.
Ayelet Israeli: OK.
Sam Ransbotham: What’s the biggest opportunity for artificial intelligence right now?
Ayelet Israeli: Biggest opportunity. This is not rapid.
Shervin Khodabandeh: Next question.
Ayelet Israeli: Yeah, next question.
Sam Ransbotham: Oh, OK.
Ayelet Israeli: I’ll think about it.
Shervin Khodabandeh: Pass.
Sam Ransbotham: Pass. What’s the biggest misconception that you think people have about artificial intelligence right now?
Ayelet Israeli: I tend to be around people who work in this field and understand that it is just a model, but a lot of people still don’t and still envision robots and this magical thing that happens. That’s why I like to explain very clearly, “Oh, it’s predicting the likelihood of the next word and choosing from that distribution, and that’s all that is happening.” So it’s maybe not as bad as it was 10 years ago, but for many people it’s still this magical, artificial thing that happens — and it’s not. It’s still magical, I guess.
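Her “predicting the likelihood of the next word and choosing from that distribution” description can be made concrete with a toy sketch; the vocabulary and scores below are invented for illustration and have nothing to do with any real model’s weights:

```python
# Toy sketch of next-word sampling: score candidates, convert scores to a
# probability distribution with softmax, then sample rather than argmax.
import numpy as np

vocab = ["toothpaste", "fluoride", "brand", "price"]
logits = np.array([2.0, 1.0, 0.5, 0.1])  # made-up model scores per word

probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
rng = np.random.default_rng()
next_word = rng.choice(vocab, p=probs)         # sampling adds the variation

print(dict(zip(vocab, probs.round(3))), "->", next_word)
```

The sampling step is why repeated queries at nonzero temperature return varied answers — the very variation the market research approach relies on.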
Sam Ransbotham: It’s pretty amazing — or can be. What was the first career that you wanted?
Ayelet Israeli: I don’t know. In Israel, you go into the military. I was in the military; I was a lieutenant in intelligence. I don’t think it’s a career I necessarily wanted. It’s something I did.
Sam Ransbotham: There’s a lot of discussion and excitement about artificial intelligence. Where are people overusing it? Where are people using it where it doesn’t apply?
Ayelet Israeli: I think one of the challenges I’ve seen is actually using it to ask it factual questions, because that’s not what it’s about. It’s not a truth-finding mechanism, and that’s just a wrong usage.
Sam Ransbotham: OK. Is there something that you wish that artificial intelligence could do right now that it can’t do? What’s the next exciting thing? What announcement tomorrow would make you happy?
Ayelet Israeli: I’ll take that question slightly differently. What excites me about AI, in terms of my research on responsible use of data and algorithmic bias, is that, yes, a lot of people have shown that AI can generate biased outcomes. But we have also known for many years that humans generate biased outcomes. And it’s much easier to fix biased outcomes from a machine — and to build processes that will eliminate bias — than it is with humans. That’s something that I’m really excited about.
Sam Ransbotham: I love that point, because all this bias and misogyny in our world wasn’t created by the machines. The machines are not what put us in this situation in the first place. And if they do reproduce a little bit of that at the beginning, before we’ve trained them, we shouldn’t just throw them out for starting down that path, because we can adjust the weights in models. We can give feedback to models to improve them in a way that we can’t with bazillions of people.
Ayelet Israeli: Right.
Sam Ransbotham: So I think that’s a huge point.
Ayelet Israeli: And we saw it with the first gen AI image models: If you said “doctor,” we only [saw] photos of men, things like that. Over time, this has improved a lot. So that’s really exciting, right? We can try to think about how we fix some societal problems using these tools because, yes, machines can be manipulated more easily than humans. Of course, that’s a risk, but that’s for some sci-fi podcast, not this one.
Sam Ransbotham: The example of the doctor image is spot-on, because I think so many people were fascinated by how accurate these models seemed — the results felt right. They confirmed our stereotypes. You ask for an image, and it gives you exactly what you think of as that image, but that’s just feeding into the problem again. And that’s going to perpetuate it if we don’t [stop it]. But, like you say, there has been improvement there.
Shervin Khodabandeh: Ayelet, thank you so much. This has been really insightful and quite interesting. Thank you for being on the show.
Ayelet Israeli: Thank you so much for having me. This was fun.
Sam Ransbotham: Thanks for joining us today. On our next episode, Shervin and I speak with Miqdad Jaffer, chief product officer at Shopify. Before you do your holiday shopping, please join us to learn how little bits of AI everywhere can add up to big value for all of us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.