Podcast: How can leaders invest the time that AI gives back?

[Music]

MOLLY WOOD: Tomas, thanks so much for being on the show.  

TOMAS CHAMORRO-PREMUZIC: It’s a great pleasure. Thank you for having me.  

MOLLY WOOD: So, you’re a psychologist, an educator, an executive, an author. I’d love to hear a little about your career path and how your interest in AI developed alongside of that. 

TOMAS CHAMORRO-PREMUZIC: So I started my career as an academic, but I was always very interested in the real-life or real-world applications of psychology. About a third of our lives or so is spent at work. And if you think about organizations, we know that most of their problems have to do with people, and psychology provides really interesting theories and tools to not just understand people at work, but also help organizations unlock human potential, and of course help people thrive in their careers, and that really is where my passion is. My expertise has always been in creating data-driven tools, starting from psychometric assessments all the way to analytics, and of course, more recently, AI, that help organizations be more data-driven when they’re trying to, for example, assess potential. So imagine having a hiring manager interview you in 10 minutes and decide intuitively and subjectively whether they like you or not, kind of like a swipe right or swipe left option in the analog world, and then unleash their biases and make random decisions that land you in the wrong job, to everybody’s peril. The extreme opposite of that is to actually look at an individual’s past behavior, past performance, their psychological assessment results, and of course even use AI, artificial intelligence, when it comes to decoding how they behave in a digital interview. We’ve been working on the applications of AI to talent identification and psychological assessment for about 15 years. 

MOLLY WOOD: I mean, on the one hand, it feels like these things are disparate—AI and psychology—but it sounds like you’re saying they’re really not. How has the work that you do affected your perspective on AI and what it can do better?  

TOMAS CHAMORRO-PREMUZIC: First of all, I think if you want to really understand artificial intelligence, it’s a good starting point to get better at understanding human intelligence. Secondly, I think the big promise of artificial intelligence is not so much to surpass human intelligence, but to complement it. So I think, you know, understanding human intelligence has been really important, because if you want to understand how we structure language, ideas, knowledge, et cetera, you know, most of what AI is, is profoundly inspired by the human brain and neuroscience. At the same time, we’re at this really, really interesting point in time where every organizational leader needs to wonder not just how they could leverage AI to be better at their job, to be more effective, but also how they can future-proof their organizations and prepare their talent and cultures so that they can actually thrive in the human-AI age. So I think the human-AI age is the most, I would say, significant period in the last 30 or 40 years when it comes to the potential for progress, and of course, also, some of the risks that need to be mitigated. 

MOLLY WOOD: So how should leaders think about seizing the potential of the technology, but also limiting the risks? 

TOMAS CHAMORRO-PREMUZIC: The goal for AI or any new technology or innovation isn’t immediate perfection but long-term progress, which is mostly incremental improvements over the status quo. So AI doesn’t need to be perfect. It needs to be better than the status quo. AI is a work in progress, and we have a lot of opportunities to improve. Now, the risks fall into two buckets. If we think about AI 1.0, a prediction machine or machine learning, we have seen its main application in social media platforms and the direct-to-consumer platforms and apps that we have. AI 2.0, if you like, is generative AI, or AI as a production machine, something that automates the passage from insights to actions. I think it’s a really, really impressive and valuable tool, but we have to understand that the whole point is that the time we can save from boring, low-value, and predictable activities, which might be 30 to 40 percent of a day’s work, frees us up to then reimagine how we add value. We have seen a lot of data showing that generative AI has incredible adoption, organic adoption levels, in organizations, but guess what? The typical employee who is saving 30 or 40 or maybe 50 percent of their day, achieving the same output with less input, isn’t running to their boss saying, Hey, boss, I have 45 to 50 percent of my time free now, can you give me more work? It’s a big challenge for managers and leaders. And that, again, speaks to the important connection between artificial intelligence and human leadership. 

MOLLY WOOD: How should leaders manage for that, figure out where the value and the benefits lie in adopting AI? 

TOMAS CHAMORRO-PREMUZIC: The first, really, is to experiment: not to ban AI because they’re afraid of it, nor to invest really, really heavily in a top-down global AI tool platform assuming that next week they’re going to have productivity benefits, because both are equally mistaken, but to actually try it out, experiment, share success stories, and also share its limits. That takes me to the second one, which is really not to see this as a solution waiting for a problem to be solved, but to be very problem-centric. Most leaders don’t need to completely reimagine their strategy because this thing called generative AI has arrived and gone mainstream. What they have to think about is whether generative AI or other versions of AI can actually be helpful in accelerating their strategy or translating their current strategy into execution. So, you know, being solution-agnostic means they’ll probably want to consider generative AI but not put all their eggs in that basket. And the third one, I think, is about really learning from mistakes, failing fast, or as my colleague and friend Amy Edmondson says, failing smart, which is to create small, lean, agile, fast experiments. Basically, you structure relevant business problems almost like a scientific experiment, you invite AI to be part of the solution, and then you measure the impact. And if you structure it in a smart way, it means that even if you don’t get the result that you wanted, you actually increase your capabilities and your know-how. Most leaders, managers, and organizations don’t need to become the number one technical experts in AI tomorrow, but it’s advisable that they shop around for expertise or develop some capabilities internally. In essence, Molly, the good news is that there’s nothing radically new about how to embed AI in organizations vis-à-vis other technologies that came before, even if AI is groundbreaking. And, of course, adoption is always difficult. Change management is always a challenge. Everybody loves change until they have to do it. So I think there are only two ways in which you can get people to change. One is you force them. The other one is to win their hearts and minds. So it is important, then, that you sell the benefit to leaders and particularly mid-level managers, who are where everything is either made or broken. So if there’s one tactical recommendation for HR, it’s to invest more in upskilling and reskilling your mid-level managers, because they hold the key to unleashing AI in your organization in a positive and strategic way. 

MOLLY WOOD: It feels like this upskilling and reskilling piece is really important. So you’re saying to organizations, focus on the outcome, the problem that you need solved, as opposed to the ideal happily-ever-after ending. But also, I think there is a tendency in organizations to say, We’re going to bring this tool and then you’re all going to be 40 percent more productive and then you’re going to do 40 percent more work and you’re going to love it. And it sounds like what you’re saying is, Be more empathetic than that. And if you’re going to give people more work to do, give them better work to do.  

TOMAS CHAMORRO-PREMUZIC: That is the key. Never in the history of humanity, throughout our evolutionary history, have we ever invented a technology to work harder, right? This applies to the wheel, to fire, to the dishwasher, the car, anything. Same with AI. We haven’t invented it to work harder, but we have invented it to work smarter and better. If you think about it, we have a wonderful opportunity to make work better and more creative, because so many things that we do, even among knowledge workers, are not dependent on our creativity, ingenuity, or intelligence. AI can do those things very well, even if it ends up being the intellectual version of fast food or a kind of microwave for ideas. The value is going to come not from what AI does, because that becomes commoditized, but from either interacting with AI in a unique way that makes us creative, or from reimagining how we add value in our current role, because, by the way, AI doesn’t really eliminate that many jobs; where it does eliminate entire jobs, it creates many new jobs in turn at a faster rate. But what it does is eliminate tasks within jobs, changing the constellation of skills needed to add value. I don’t even think it’s so much about upskilling and reskilling, but about incentivizing people to really harness and apply the skills that AI is unlikely to replace or master, things like emotional intelligence, human connectedness, critical thinking, understanding, right? Because AI is really good at explaining everything, sometimes without understanding anything, which, of course, I know some humans are also very good at doing, but you know, we don’t like too many of those. [Laughter]  

MOLLY WOOD: You mentioned this phrase “microwave for ideas,” that AI could be a bit of a microwave for ideas. I just want you to define that a little bit more for us.  

TOMAS CHAMORRO-PREMUZIC: Yeah. So first, if you think about it, generative AI is amazing because it managed to automate output that is extremely creative: jokes, sonnets, poems, even things that, you know, would take the most creative or funny human three years to produce. And it can just churn it out and out and out and out. In a way, it’s the intellectual equivalent of a microwave for ideas, because it gives you as many ideas as you want, really quick, almost reheated ideas, because it’s taking what everyone, or the crowds, or a specific group thinks about something and repackaging it. So it’s synthetic. And I think we’re going to use it, or we’re using it or should be using it, as a microwave. It’s convenient to use it all the time, but, you know, if you want to have some people over for dinner at your home and impress them, you’re probably not going to microwave a frozen meal that you picked up in the supermarket. Every day people tell me, Oh, I did this presentation with generative AI, and instead of taking me five days, it took me five seconds. Well, you can tell, because it’s not that great, right? Probably 50 percent of my emails can be automated with generative AI. But if I really want to reach out to you and tell you something meaningful, I’d better sit down and think about how I can connect with you. Not everything should be automated. For sure, generative AI automates a lot of our creative output. It also automates a lot of our mediocre output. And for that it’s great, because we don’t want to spend time on stuff that is low value. 

MOLLY WOOD: You wrote a whole book about systemic problems in leadership and how the cream doesn’t necessarily rise to the top in all organizations. In fact, you put it pretty bluntly, the book is titled, Why Do So Many Incompetent Men Become Leaders? So do you think new technology can root out mediocre men, or mediocre leaders? 

TOMAS CHAMORRO-PREMUZIC: I think AI poses at least a double threat to mediocre men. And, of course, mediocre women, even though mediocre women are underrepresented in the highest echelons of organizational hierarchies, right? The biggest one is that AI is a really, really powerful and promising tool that could help organizations make their decisions more data-driven, including, of course, promotion decisions and executive assessment and selection decisions, right? In a world in which AI helps organizations become more meritocratic and talent-centric, fewer incompetent men, perhaps none at all, will rise to the top of those hierarchies, and there will be a much smaller gap, perhaps no gap at all, between a person’s individual career success and their ability to add value to an organization. So, in fact, my hypothesis, and it might be a little bit of a cynical conspiracy theory here, is that a lot of the backlash that we are seeing against AI is coming from those people. I know in the US the expression is that it would be like the turkey voting for Thanksgiving or Christmas or… If you are in charge of an organization and here comes a tool that has, like, an X-ray machine’s power to help people understand who really is adding value to the organization and who is actually managing up and operating in a very Machiavellian, politically skilled, and, you know, manipulative way, that’s a threat to incompetent men who are in charge. And the second one, of course, is that expertise is commoditized and disrupted by AI. It is much harder now for somebody who is mediocre to make stuff up or even to make a living giving advice or selling consulting to others, because right now, if you really want to show and convince others that you are an expert, you need to have deep expertise. There is a difference between spending five minutes on ChatGPT and thinking that you are an expert in medieval history because of what you read, and spending five years studying it. It’s the combination of human intelligence and artificial intelligence that holds the key to progress.  

MOLLY WOOD: I do take your point about adoption, and I have wondered about the resistance and where you encounter that, because there is a question, I think, as we think about the future of work: we have to ask what work is, and for a lot of people, it’s meetings, it’s summaries, it’s summaries of meetings.  

TOMAS CHAMORRO-PREMUZIC: I know, but I think just like, you know, my academic colleagues in the beginning were like, Oh my God, we should ban it because students are writing essays with these tools. I said, well, you know, a future for academia in which students write the essays with ChatGPT and academics grade them with ChatGPT isn’t that bad. Maybe then we can work out what valuable activities we can do instead, right? And equally, a future in which you produce your PowerPoint presentations with generative AI, and I have my AI reading them, or I use my AI algorithms to hire candidates who submit their CVs with AI, or I send my avatar or deepfake or copilot to a meeting and you send yours. All of that is fine, but let’s not kid ourselves. That’s not where the value is going to come from. The value will come from working out what we’re going to do with the 40, 50, or maybe even 30 percent of the time we actually save. Look, it’s no different from how technology automated even creative or artistic output in other fields, right? When the synthesizer appeared, it didn’t kill musical composers, but it gave a chance to some musical composers to invent electronic music and other types of music. When digital photography came, it didn’t kill professional photographers. At the end of the day, the difference between good and bad photography is not the equipment, it’s the interaction of human skills with the technology. 

MOLLY WOOD: Yeah, you need the soft skills and the technical skills to succeed, right? Okay, I want to ask you about growth next. Do you have some pretty specific advice about how leaders should think about incorporating AI into company growth strategies, including a really data-led approach?  

TOMAS CHAMORRO-PREMUZIC: Yeah. And I think, well, first of all, AI has arrived as the latest stage in the evolution of digital transformation, which most large organizations underwent or are still undergoing, which is basically trying to become more data-driven. And I think partly because we don’t have enough data scientists to translate data into insights, we started using AI to automate that. And now, we are basically using AI to automate the passage from insights to actions. So I think there are three important recommendations. One, again, is to be problem-centered, to really measure what matters, and to see how well AI can help leaders and organizations improve on their relevant KPIs, because, you know, no organization is in the business of showing that AI works or in the business of running experiments. The point is to solve useful problems. The second one is really to manage this human-AI interface, which comes from rehumanizing their cultures, making their cultures a relevant ecosystem for AI to be adopted and leveraged, which, by the way, involves selling it to people, not demanding that they be more productive and throwing it at them. And then the final one, of course, is to be ethical and to only implement AI that is ethical by design. The good news and the advantage is that most models, most frameworks, most parameters look very similar: if there is transparency, if there is informed consent, if people opt in, if you protect their data and the data is confidential and anonymous, and, fundamentally, if there is a benefit for the user, the risks are minor. As Gartner’s adoption curve always shows, we might be slightly past the hype phase; things are settling. And at this stage, we can start to expect a real phase of maturity and real productivity gains to kick in. 

MOLLY WOOD: If you had to pick one leadership skill that’s going to become 10 times more important in the age of generative AI, what would it be?  

TOMAS CHAMORRO-PREMUZIC: Coachability. I think even if you’re a great leader, a leader who is a finished product is finished, and, regardless of how talented you are, what will make a big difference in the next five or 10 years is your willingness to change and get better. And I think people differ in their coachability, but mostly we can all trigger or incentivize ourselves to be more willing to change and get better. More and more, what will matter is your potential, not your past performance, and to augment your potential, you need to be coachable. And that means, by the way, being open to feedback from others, listening to what you need to hear, not what you want to hear, not surrounding yourself with people who suck up to you and tell you what you want to hear, and actually going outside your comfort zone and really seeing yourself as somebody who is still to be molded or sculpted, somebody who needs to change and who is very much an unfinished product. So I think coachability, which, you know, I think is a lovely skill.  

MOLLY WOOD: Author, professor, and Chief Talent Scientist at Manpower Group, Tomas Chamorro-Premuzic. Thank you so much for the time today. This is outstanding.  

TOMAS CHAMORRO-PREMUZIC: Thank you for having me. 

MOLLY WOOD: And that is all for this episode of WorkLab. Please subscribe if you haven’t already and check back for the rest of season 7, where we will continue to explore how AI is transforming every aspect of how we work. If you’ve got a question or a comment, please drop us an email at worklab@microsoft.com, and check out Microsoft’s Work Trend Indexes and the WorkLab digital publication, where you’ll find all our episodes along with thoughtful stories that explore how business leaders are thriving in today’s new world of work. You can find all of it at microsoft.com/worklab. As for this podcast, please, if you don’t mind, rate us, review us, and follow us wherever you listen. It helps us out a ton. The WorkLab podcast is a place for experts to share their insights and opinions. As students of the future of work, Microsoft values inputs from a diverse set of voices. That said, the opinions and findings of our guests are their own, and they may not necessarily reflect Microsoft’s own research or positions. WorkLab is produced by Microsoft with Godfrey Dadich Partners and Reasonable Volume. I’m your host, Molly Wood. Sharon Kallander and Matthew Duncan produced this podcast. Jessica Voelker is the WorkLab editor. 
