The launch of generative AI represents an era-defining moment. Never before have so many people become so excited by a technology product. Within five days of ChatGPT’s release in November 2022, more than 1 million people (including me) had logged on to try it out. If financial investments are a predictor of growth, then the $12 billion invested in generative AI in the first five months of 2023 shows the depth of commitment.
To get a sense of current momentum and direction, I ran a research webinar last month on the impact of generative AI on the workplace. Some 260 executives from organizations in Australia, Europe, Japan, and the United States joined me to describe their current use of generative AI in human capital domains. What I heard is that generative AI is a top priority for many CEOs, experiments abound alongside ambiguity, and organizations are already beginning to use artificial intelligence to augment their human capital strategies. I’ll share more details about those findings in a moment.
I wanted to hear directly from practitioners because it seems to me that, as with the previous era-defining event — the COVID-19 pandemic — and its impact on the workplace, imagining what is ahead with this new technology is a tough call. As with the emergence of hybrid work and work-from-home initiatives in response to the pandemic, figuring out the right approaches to generative AI is a process replete with ambiguity, experiments, and changes of mind. In other words, it is a learning process driven by both the initiative of individuals and the strategies of organizations.
Like the response to the pandemic, debate and action around gen AI are moving fast.
Two facts are emerging. First, this is a rapidly developing technology. The size of investment gives a sense of this. So, too, does the volume of experimentation: One executive told me, “We have created a head of generative AI with a role simply to moderate and make sense of the hundreds of experiments we have running on any day.” Second, unlike previous workplace technologies, generative AI is not being positioned as a substitute for routine tasks — either analytical tasks, like keeping records or providing repetitive customer-oriented services, or manual tasks, like picking and sorting products or performing routine assembly. Instead, it’s a technology with the potential to hit at the heart of nonroutine analytical work. This is knowledge work, such as forming a hypothesis, creating content, recommending medical diagnostics, or making a sales pitch. The source of this impact is generative AI’s growing capacity to understand natural language. Around 25% of the total work time of knowledge workers is spent on the kinds of creative tasks that generative AI is beginning to get good at.
Yet, as during the early days of the pandemic and hybrid work, much is currently conjecture. There will certainly be pushback, changes of direction, and believers and resisters, just as there have been throughout the ongoing debate about hybrid work. There will be many experiments before leaders begin to develop their own narratives about how best to support their workers in these complex and ambiguous times.
Experiments, Projections, and the Great Ambiguity
The potential for generative AI to very quickly have a real impact in the workplace is fueling the attention it’s getting. Consider the McKinsey report on generative AI published in June 2023. McKinsey asked a group of technology experts at what point they believed generative AI would substitute for a number of human tasks. It then compared those estimates with pre-gen AI projections made in 2017 about when human performance might be achievable by technology. What is striking is the extent to which their estimated timelines have contracted. For one work task, for example — “coordination with multiple agents” — the experts in 2017 estimated a timeline of substitution that ranged from 2035 to 2058. Today, they estimate that happening from 2023 to 2035. That feels very soon. As important, in 2017, the estimate for when we’d see technology achieve human-level performance for the task of “creativity” was 2030 to 2045; today, the projection is from 2025 to 2030. These dates are for tasks where technology performance is expected to match median human performance. For top-quartile human performance, the timelines are pushed out somewhat.
How do we make sense of this? What I learned from observing the impact of the pandemic on work is that watching closely and asking questions of people in the field about their current lived reality is a useful practice. So my Sept. 19 webinar was essentially a listening process. Many executives came from human capital functions, so their interest was primarily in the people side of their business.
I began by asking where generative AI currently sits on the leadership agenda. Over half of attendees (52%) agreed with the statement “It is a CEO priority where leadership teams are discussing impact.” Only 13% said “It is not yet on the agenda in any meaningful way.” This is a topic that is definitely consuming bandwidth at the top levels of organizations across the world.
Generative AI is at the top of leaders’ agendas — but what about the scope of debate? What issues are coming forward as people talk through its consequences? For the webinar, I focused on a central aspect of this question: machines and creativity. The technologists in the McKinsey panel estimated that human creativity will be achieved by generative AI within the time span of 2025 to 2030, but what did this group of leaders believe? This, for me, is an important question because the McKinsey panelists are technology experts. I have a hunch that people like them will overestimate technological progress, whereas people like me (psychologists) will overestimate human progress.
What is fascinating is the lack of agreement among webinar attendees about whether they think top-quartile human creativity will be achieved by gen AI from 2023 to 2030: Just 10% strongly agreed, while 34% agreed, 22% neither agreed nor disagreed, 30% disagreed, and 4% strongly disagreed. I ran a Menti (a quick survey at the website Mentimeter) to collect participants’ real-time thoughts and asked why they had come to their conclusions. Within seconds, more than 80 comments had emerged. Those who agreed pointed out that creativity can be broken down into logical steps; that creativity sits on a lot of data; that prompts will be more important than the AI itself; and that use cases, such as songs, already demonstrate creativity. Those who disagreed described how generative AI would augment, not substitute for, creative people; asserted that creativity is about empathy and vulnerability; noted that humans can generate new ideas while AI pulls from the past; and predicted that governments will provide regulation. Those who did not know said there are simply too many unknown unknowns.
For me, the takeaway here is the sheer richness of conversation and variance in opinions about generative AI. These are debates we need to be open about and continue to have.
I wondered, of course, how generative AI is currently being experimented with and used. My interest is in the human capital agenda and how generative AI will impact the everyday experiences of workers. One of the “promises” of AI, according to the McKinsey report, is a productivity boost from the automation of a wide variety of activities. For knowledge workers, this will come from technology either substituting for or augmenting tasks such as creating connections, retrieving stored data, personalizing content, automating activities, increasing the speed of response, and collaborating.
I asked about the current velocity of the use of generative AI in three human capital domains: talent development (like recruitment, induction, and career management), productivity (like skills training and managing collaboration), and change management (like internal knowledge management). I was interested in the current use of generative AI in each domain and the use cases. Of course, the attendees were a self-selected group with an interest in generative AI, but still, what is clear is that there are many, many experiments currently underway.
The human capital domain with the greatest reported use of generative AI right now is “internal knowledge management.” Forty-six percent of attendees said they are experimenting in this sphere, with use cases such as creating communications, conducting market research analysis, finding resources and competencies, and engaging in employee listening. Almost as many attendees (44%) reported using gen AI for recruitment, with use cases such as creating job advertisements, developing simple approval flows, and onboarding new hires. About a third of attendees (34%) said they’re using gen AI for skills training, with use cases that include self-directed learning, the creation of learning content, and virtual reality-based training. And 23% said they’re using the technology for assessment and feedback, with use cases such as developing bespoke assessments, improving performance feedback, designing career paths, and creating chatbots for employees.
I feel that what I captured in this research webinar is a sense of the sheer depth of conversation and experimentation taking place right now across the world. How to think about and use generative AI is being debated in the C-suite as use cases are created and tough questions (like the relationship between machines and creativity) are worked through. For organizations to be aligned with this emerging narrative, it is imperative that they experiment freely and observe closely.