Alessandra Sala, senior director of data science and AI at Shutterstock, brings an impressive background in responsible AI to her role. Also the global president of Women in AI and cochair of the Women4Ethical AI platform at UNESCO, Alessandra joins this episode of the Me, Myself, and AI podcast to describe how Shutterstock, widely known as a stock photo company, has become a go-to destination for creative assets — and AI training data.
Alessandra outlines Shutterstock’s content acquisition and royalty models, which reward contributors whose assets are used to train third parties’ AI models and have set the standard for other stock media companies. She argues that these ethical approaches aren’t just a moral choice — they offer a strategic advantage, given that these assets are integral to shaping the future of AI-generated content. Learn how Alessandra’s team is leading the charge in ethical AI and redefining the creative landscape.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Sam Ransbotham: Stay tuned after the episode as Shervin and I discuss the key points from today’s guest.
Stock photos, videos, and multimedia are a great resource for many. They’re also useful training data for AI. Today, we speak with a company focused on ensuring that they develop creative asset libraries fairly and equitably.
Alessandra Sala: I’m Alessandra Sala from Shutterstock, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities, and really transform the way organizations operate.
Hi, everyone. Today, Sam and I are happy to be joined by Alessandra Sala, senior director of artificial intelligence and data science at Shutterstock. Alessandra, welcome to the show.
Alessandra Sala: Thank you, Shervin. It’s my honor to be here with you and Sam.
Shervin Khodabandeh: I think many of our listeners know about Shutterstock, but for those who might not, just maybe quickly explain to us what Shutterstock is.
Alessandra Sala: Shutterstock is much more than just a stock photography company. It [has] evolved and transformed so much; Shutterstock has become the go-to platform [for everyone] from giants to startups to train AI models with our library of 825 million creative pieces. And I’m talking about images, I’m talking about video, 3D models, even music.
And the most interesting aspect to me is [that] this creative library — and we’ll talk possibly even further in the context of AI — has been collected through a very structured process over 20 years, where every single piece gets reviewed and the standard of quality is very high, yet it comes from 3 million contributors across 150 countries around the world.
I’m leading the design, the measurement, really the building of AI applications to provide value to our customers. Our AI operation, it’s quite broad. AI products touch pretty much every single experience that you can find at Shutterstock.com today. When you search … we don’t only use an ethical approach when we build product; we also take a very ethical approach [with] our employees [by] trying to upskill [them]. So I have even been involved in lectures and tutorials, explaining AI to our employees: how we use [it], where we develop [it], and how they can get exposed to this field.
Shervin Khodabandeh: Beyond Shutterstock, you’re taking leadership roles across a number of organizations. If you could, just quickly maybe describe your leadership roles across multiple places that you’re involved in.
Alessandra Sala: I’m the global president of Women in AI, which is a nonprofit organization in over 150 countries around the world, and I’m also the cochair of [the] Women4Ethical AI platform at UNESCO. And, most recently, I’m chairing a standardization collaboration across a number of different standards organizations, like ITU, ISO, and IEC, which are looking at AI and multimedia, trying to create a collective set of standards to combat deepfakes, to strengthen [protections against] copyright infringement, [and to advance] watermarking solutions … so, to make AI safe.
Shervin Khodabandeh: Years ago, as an amateur photographer myself, I came across Shutterstock, whether it was to contribute images or to use some images for my own work. So I think maybe if you go back a decade or so, it was the place to go and license images. But I think what you’re saying is, now, you are using these 800 million-plus images as a valuable source for training AI models. Tell us more about that and why that’s so important.
Alessandra Sala: There are a couple of things that we want to highlight. So the first one is, how do we develop this trove of creative assets that possess amazing quality? It’s because over the years, photographers and artists have given Shutterstock content, and we have reviewed the content for quality, for IP [intellectual property] infringement. And therefore, today, with this massive trove of creative assets at our disposal, we have a new trustworthy and ethically sourced set of data.
And when I say “ethically sourced,” it’s very important in the context of AI. We have seen so many companies out there grabbing content from the internet. Technically, it’s called scraping, which really means taking everything and building those giant AI models. But unfortunately, we have an important problem of plagiarism, violent content, [and] bias in this data set that is uncontrolled.
I want to give you a stat: If you were just spending one second of your time on an image, it would take you 44 years of your life to see 800 million assets. So when those models are trained on billions [of] data sets, no one has seen every single example. And therefore, going to a place where you know the data set has been collected over years, reviewed [using] quality standards, moderated against violence, nudity, and all the things that we know we don’t want in our model, then Shutterstock has placed itself as a go-to place for any player that wants to build a safe generative AI model.
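[Editor’s note: A quick arithmetic check of this stat, a minimal sketch under our own assumption: at one second per image, 800 million assets take about 25 years of continuous viewing, and the quoted 44 years matches if you count only about 14 waking hours per day.]

```python
# Back-of-the-envelope check of the "44 years" stat: one second per image,
# counting only waking hours. The 14-hour waking day is our assumption,
# not something stated in the episode.
ASSETS = 800_000_000                # images viewed, one second each
VIEWING_SECONDS_PER_DAY = 14 * 3600
DAYS_PER_YEAR = 365.25

years = ASSETS / VIEWING_SECONDS_PER_DAY / DAYS_PER_YEAR
print(f"~{years:.1f} years")        # ~43.5 years, in line with the quoted 44
```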
Sam Ransbotham: When I first heard about generative AI, naively I think my assumption was “Hey, we don’t even need a Shutterstock anymore because we can have these models magically generate things.” But obviously, that’s naive because these models are built off of so many images of the past. But if I think about Shutterstock, being ethical and trying to compensate the people who are creating the images is really an important part of your business model, which is appealing to me. But I wonder about all those other companies that maybe won’t be as ethical as Shutterstock. How does the world [shake] out for Shutterstock when you’re trying to behave ethically and do the right thing by the creators and the artists, but other people are not? Aren’t you somewhat competing at a disadvantage? How’s that going to work out?
Alessandra Sala: That’s a great question, Sam, and I would say it’s now turning into a big advantage for a couple of reasons. So for those that may not know, let me talk just one second about the Contributor Fund. We sell those data to big AI companies or startups to build their own models, and we share revenue with our contributors, but we do even more.
Every time we generate an image on Shutterstock.com with our generative solution and we sell that image, we share royalties with our contributors. So it’s a perpetual model that provides them … Even if every transaction is very small, over time, if those solutions scale and become billions of interactions every day, then the revenue will be important and will be significant.
And as you said, are you only doing it because being ethical means keeping the contributor in the loop and providing them a meaningful path for work, or is there even something more? And I’m happy to share that there are some very recent publications that demonstrate that the models that are built on synthetic data start to degenerate very fast. So it’s not only to be ethical and keep your contributors in the loop — contributors are actually necessary for our business moving forward.
Let’s provide an example that will resonate with everyone: before COVID; after COVID. Before COVID, we had no images of masks [or] personal protective equipment. If we didn’t have contributors and we relied only on AI-generated technology, it [wouldn’t] have been possible to generate the massive amount of images that we now have around all the possible COVID concepts and masks and PPE. So, what I’m trying to say is, we are ethical by design, but we have also demonstrated that by being ethical, there is a much larger return.
Shervin Khodabandeh: Yeah, it’s not just being ethical, which is, of course, quite important, but it’s also being effective. Because, as you’re saying, the quality of these models over time is going to become really, really important, and we’re actually seeing that in some of the images generated by AI where everything looks good, but maybe the hands all look weird, or certain anatomical parts, because too much is generated. So I think you’re making an interesting point.
The other thing I’d like to get your reaction on is, adoption of generative AI has been hindered by many things for many companies, many of them technical. But the legal, ethical, and responsible AI considerations have been really, really a major worry for many companies, with all the points you mentioned: copyright infringement, unsafe content, etc. And again, many companies that are using generative AI to create content — images, text, video — for a variety of purposes have been navigating generative AI quite carefully. Have you mitigated that risk, pretty much, when it comes to content generation from your data?
Alessandra Sala: So I’d like to take a little bit of a temporal perspective and walk us through from where we are coming.
Our ethical approach didn’t start when generative AI boomed. There was an investment that [dates back a] few years. So in 2021, I was the only industry representative at UNESCO to support the ethical recommendation on AI that UNESCO published [and that was] approved by 194 countries in the world. [It is] the largest recommendation document that has so many countries standing behind it.
We then translated those high-level principles into practical aspects. We launched, last year in November, our TRUST framework: five pillars that are tackling, from the training [perspective], how do you leverage ethically sourced training data? Royalties that we have discussed: How do we compensate our artists and give them instruments — even artists that are not part of our network yet — to become part [of it] through their different lenses? [Uplift] — bringing opportunity to diverse photographers around the world to give the diversity that sometimes is lacking in those large collections of data. And then safeguards [and transparency]: Going back to your question, how do we make it commercially safe?
So “commercially safe” means a few things in our world and in our incarnation. So first of all, we have had a large investment from our company to create safety mechanisms that control attempts to generate violent or nonsafe content, and we stop it. We provide licensing protection to our customers. And when they need our full indemnity protection, we review that content. So not only [do] we generate and sell it; if it requires safety and control mechanisms, we have reviewers that can validate that that content is safe, doesn’t infringe on any … landmark, copyright, trademarks. And it can be attached with the quality certificate that Shutterstock provides to their customers. It’s a strategic investment, and it’s a very practical translation of high-level ethical principles into practice.
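[Editor’s note: The safeguards described above typically pair automated screening before generation with human review for content sold under full indemnity. A minimal sketch of that flow, entirely our illustration rather than Shutterstock’s implementation:]

```python
# Illustrative guardrail flow (our simplification, not Shutterstock's code):
# screen prompts before generation, then route outputs sold under full
# indemnity to human reviewers before licensing.
UNSAFE_TERMS = {"gore", "weapon"}  # toy blocklist; real systems use trained classifiers

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to generation."""
    return not any(term in prompt.lower() for term in UNSAFE_TERMS)

def route_output(image_id: str, needs_indemnity: bool, review_queue: list[str]) -> None:
    """Content sold with full indemnity gets human validation before sale."""
    if needs_indemnity:
        review_queue.append(image_id)

queue: list[str] = []
if screen_prompt("a cup of coffee on a desk"):
    route_output("img-001", needs_indemnity=True, review_queue=queue)
print(queue)  # ['img-001'] awaits reviewer sign-off
```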
Shervin Khodabandeh: I love that. I would say, in many ways, you are pioneering an approach to create content with generative AI in a safe and commercially risk-free or risk-managed way. I’m curious about your thoughts on [its] application to other domains, like maybe music or text or other forms. Do you think there is something for others to learn from this approach?
Alessandra Sala: So I love your “risk managed” statement because it’s exactly that. [The] generative AI that we use today, it’s not the same that we started [with] two years ago. It’s not the same that we’ll use in one year. And the users are evolving, are learning how to interface with this technology — how to interact, how to create in our ecosystem with this technology. And therefore, they will teach us new ways to explore strengths and new ways to [uncover] weaknesses.
We have been the first in our space to launch ethically sourced data and our TRUST framework that underpins our ethically safe product, which brings front and center this royalty framework to pay contributors. And then, all other competitors followed. The industry recognized that it was the right path forward.
On the other [hand], I’m sure people know about the Content Authenticity Initiative, a consortium that serves as the open-source implementation group behind the standards body C2PA (the Coalition for Content Provenance and Authenticity), which is led [by] and [has] big involvement from our competitors.
We have joined them because we understand the value of demonstrating what is an authentic creation versus what is AI-generated content and certifying this content through standard rules and protocols that we all contribute [to] in this industry. So it’s a very open industry where we provide new opportunities for exploring new business models and so on, and we take from our competitors an opportunity where we can bring the innovation to our business.
Sam Ransbotham: So let’s go back; I like the royalty [model]. I’d like to know more about how that works. I think people will be interested in that. If I think about music, we had the sampling [trend start] years ago. You know, if you sample a certain amount, you’ve got to pay some royalties, and there are some agreements that work out. But when you’re talking about Shutterstock, these are very, very micro samples. Can you explain a little bit of the details about that royalty?
Shervin Khodabandeh: That was my question, too.
Sam Ransbotham: It seems complicated.
Shervin Khodabandeh: You know, Sam and I, among another 3 million contributors, are contributing images, and if an image is generated that’s maybe taking 3% of my content and 2% of Sam’s, how does the attribution work here?
Sam Ransbotham: It seems really hard.
Alessandra Sala: Yes, attribution is very hard. At the beginning, we didn’t even know. We put money aside, but we didn’t even know how to distribute it back to our contributors because it is a complicated problem.
So in the industry, there are three approaches. The Shutterstock approach [is], we [compensate] everyone, depending on how many assets you have given [us]. For example, I’ve only given one asset and Sam has given us his life’s collection. Sam deserves a little bit more than me, right? Because he’s teaching this model through everything that he has created in his professional life. So it’s volume-based.
There are two [other approaches] in the industry, which [are], one, I only compensate if I’m generating a cup and Sam has given me images of a cup; Sam will get a contribution. Alessandra only gives me people swimming, so no contribution. I fundamentally disagree, from a technical standpoint, because the model and the standard … A cup is a cup and swimming equipment is swimming equipment only if [the model] has the comprehensive understanding of all the concepts in the world.
The third model is based on popularity. So if you have given me an image that is so valuable on the marketplace, I have to compensate you, Sam, more than [for] my image. I again disagree with that through the lens of diversity. We know that sometimes what’s popular is not the full representation of our world, our culture, our preferences. And those niche areas have a lot of value. And indeed, in the past few years, we have seen the movement toward diverse content and, therefore, by only looking at what [is] popular, [the] rich will get richer. We [would] only pay the contributors that are already gaining a lot of money from our marketplace.
We should be able to distribute wealth to everyone, to those that give us content of quality. And therefore, we have some amazing leadership and strategic thought leadership discussions with our CTO, and [we are] thinking of new models for attribution, where quality is front and center. And by “quality,” we mean aesthetically pleasing content, diverse content, unique content, safe content — so a very general definition of quality. And maybe those attribution models will evolve. I’m sure they will evolve in the future, but those are the ones that are used today.
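[Editor’s note: A minimal sketch comparing the three attribution schemes Alessandra describes. The contributors and figures are invented for illustration; this is not Shutterstock’s implementation.]

```python
# Comparing the three royalty-attribution schemes described above.
# All numbers are invented; this is not Shutterstock's code.
pool = 10_000.00  # hypothetical royalty pool to distribute

# contributor -> (assets contributed, assets matching the generated concept,
#                 marketplace downloads as a popularity proxy)
contributors = {
    "sam":        (900, 120, 50_000),
    "alessandra": (1,   0,   200),
}

def shares(weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

volume     = shares({c: v[0] for c, v in contributors.items()})  # the Shutterstock approach
concept    = shares({c: v[1] for c, v in contributors.items()})  # pays only matching concepts
popularity = shares({c: v[2] for c, v in contributors.items()})  # "rich get richer"

print(volume)      # sam: ~$9,988.90, alessandra: ~$11.10
print(concept)     # sam: $10,000.00, alessandra: $0.00
print(popularity)  # sam: ~$9,960.16, alessandra: ~$39.84
```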
Sam Ransbotham: But the academic in me has to push back a little bit because if I think about your example of you and your swimming images, which may be beautiful, and my cup images, which may be really ugly, I may just flood your market with lots and lots of cup images that are low quality. I can see why similarity is a problem. I can see why popularity is a problem. But also, I can see some trouble with volume, too. It seems really hard.
Alessandra Sala: Sam, I love that because, yes, that’s exactly the issue when you attribute based on volume. However, what we have done … we have an amazing service that is able to discover duplicates or things that are quite similar. And therefore, we try to have more unique assets. If we have 200 million strawberries, how many strawberry images do you really need your model to learn from? And therefore, there is a quality selection and a duplication screening so that the catalog, the collection that we actually use for training, has less of those.
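[Editor’s note: One standard technique for the duplicate screening described above is perceptual hashing. A minimal sketch using the open-source Pillow and imagehash libraries; the tooling choice is ours, not necessarily Shutterstock’s.]

```python
# Near-duplicate screening via perceptual hashing (illustrative only).
from pathlib import Path

from PIL import Image
import imagehash

MAX_DISTANCE = 5  # Hamming distance below which two images count as near-duplicates

def deduplicate(image_dir: str) -> list[Path]:
    """Keep an image only if its perceptual hash is far from every kept image."""
    kept: list[tuple[imagehash.ImageHash, Path]] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))
        # imagehash overloads subtraction to return the Hamming distance
        if all(h - seen >= MAX_DISTANCE for seen, _ in kept):
            kept.append((h, path))
    return [p for _, p in kept]

# e.g., training_catalog = deduplicate("strawberries/")
```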
Shervin Khodabandeh: I wanted to ask you about something in the not-so-distant future. That is, the boundaries of technology and art are being challenged. I mean, we’re seeing content — text for sure, images, videos — generated entirely by AI that is quite effective in eliciting the emotional responses that a magnificent piece of art would. What do you think the future of art and content creation looks like five or 10 years from now, with AI and humans, and oftentimes AI, doing really amazing stuff that’s beyond a human’s imagination, even?
Alessandra Sala: Absolutely. I also use our AI generative tool to create things that I would never be able to. And I think there is an interesting segmentation that we need to do. For people unable to create, to generate, or produce a beautiful creation, like myself, those tools [are] empowering [to] a great extent. The empowerment that we give to professional artists and photographers through those tools may very much not fit into the final creation or the full creation but more into their creative process — how to get inspired, how to try a few ideas without creating a photo shoot, [which] we know takes so much logistics and time and money. [Instead,] they can stage: They can prepare possible outcomes of what they want to realize and then only use the one that resonates most with their commercial outcome or opportunity. So I think different people will be empowered through this technology at different levels.
I don’t think we have seen where this technology can bring us, but I believe humans have always evolved with every single tool that we have [been] given. Let’s think about airplanes. Our human ability doesn’t allow us to move through space at that speed — [it] possibly will never [happen]. But we have used aviation as an instrument to empower our society to reach new goals. And I believe AI will similarly empower us to a level that we don’t have yet and get to a new height.
Sam Ransbotham: Yeah, that makes a lot of sense. If I go back … actually, I spent the summer looking at some cave paintings at the Tito Bustillo caves, which were pretty fascinating, and I was thinking how different those are from our modern images. And those people had to really work to get something that looked vaguely like a horse onto the wall of a cave. And now, you know, it’s a matter of just saying, “Hey, show me a horse.” And then you say, “I don’t like that horse. I don’t like that horse. I don’t like that horse.” I think this is really a fascinating transition when technology’s enabling a different set of skills.
And Shervin, we saw this with Tye Sheridan in the Wonder Dynamics [episode]. They’re trying to make CGI available to the masses instead of just the big companies. It seemed like exactly what’s happening here. You’re really dropping the cost of production, which really is going to make creativity much more valuable — or I hope it is. It’s going to make human creativity more valuable. I think that’s a pretty optimistic take on that. I hope that’s true.
Shervin Khodabandeh: I hope so too.
Alessandra Sala: I hope it’s true too.
Sam Ransbotham: Let’s transition a little bit. We have a segment of rapid-fire questions. We just want you to answer with the first thing that comes to your mind. What’s the biggest opportunity for artificial intelligence right now?
Alessandra Sala: Empowering humans into our next evolutionary step.
Sam Ransbotham: OK. What’s the biggest misconception that people have about AI?
Alessandra Sala: That it can replace our know-how. I think there is a lot that needs to be controlled in this technology in order to empower us.
Sam Ransbotham: OK. And what was the first career that you wanted?
Alessandra Sala: I wanted to be a mechanic and a pilot, and I still love car racing.
Sam Ransbotham: Good. When is there too much artificial intelligence?
Alessandra Sala: I don’t think there will ever be too much. It’s like saying “How much is too much air?” I can breathe. I can keep breathing.
Shervin Khodabandeh: I love that.
Sam Ransbotham: Most of the time, people can always think of something that’s too much. Actually, I love the optimism. Is there something that you wish that artificial intelligence could do right now that it can’t currently do?
Alessandra Sala: No, I wish we had more governance structure around this technology, like in the aviation space; it’s an environment where it is safety first. I think we need to move our innovation hat into safety first as it’s reaching millions of people every day.
Sam Ransbotham: Yeah, the effects — as we’re reaching millions of people, the mistakes propagate very, very quickly, don’t they?
All right. Well, thank you for taking the time to talk with us. I’d never really thought so much about how these models would work and how the royalty models would work. I hadn’t really thought about how important things like drift would be. The COVID mask example, I think, was a great example of how the world can cause models to come out of sync really quickly, but also this idea of model collapse, too — that these models that just build off of self-generated images — that’s going to be a problem, too. And I really hadn’t thought about a lot of these difficulties until you brought them up. Thanks for taking the time to talk with us today.
Alessandra Sala: My pleasure.
Shervin Khodabandeh: This conversation was quite illuminating, and what I found interesting there was, there was a saying, maybe 10 years ago, 15 years ago, that data is like crude oil, and data science is a refinery that makes it useful, right? Which is fine, but in that context, what was data? It was customer data, transaction data, behavior data, elementary data — that kind of stuff, right? But it was numbers, more or less numbers.
Sam Ransbotham: Tabular numbers.
Shervin Khodabandeh: Tabular numbers or, even, fine, text and unstructured data. It wasn’t video, and images, and songs, and music. What these guys have done is basically said, “No, clearly Shutterstock had a business model. We have images, we have videos, we have content; we license it to you so you don’t have to go create that yourself.” Great. That’s still a value proposition. But now, we have all this data — hundreds of millions of images — that can be the oil that the generative AI is training on to now create content.
And imagine an advertisement for a diamond company or a jeweler, where a man is about to propose to the woman. They are rock-climbing this, like, 8,000-foot vertical climb. And right before they make their last jump to get there, the man is like, “Oh, wait” — and he hangs from the harness, he reaches out, takes out the ring, and is like “Will you marry me?” And there is an aerial view, there is a zoom-in to his face, to her face. All the way down the abyss.
Sam Ransbotham: To the ring falling.
Shervin Khodabandeh: To the everything, right? To the ring falling, and her catching the ring, and him catching her. Imagine producing that. It would be impossible. It would cost hundreds of millions of dollars, and it would be called “Mission Impossible” if they did it. This can be done.
Sam Ransbotham: Right.
Shervin Khodabandeh: Not that well now, but very soon, this can be done with somebody sitting by their laptop on a couch and just typing the context and without a single producer or actor or cameraman or location scout. That’s amazing.
Sam Ransbotham: And that ties to the Tye Sheridan idea of Wonder Dynamics, [who] we talked [to] last season — that, you know, it’s becoming accessible to all these people. And, you know, the other part of Shutterstock that I thought was interesting is that they recognize the benefit from that oil — that that oil is not synthetic oil. We have real oil, and we have synthetic oil. There was a great paper out in Nature last month about how these models are collapsing because they’re ingesting synthetic data.
Shervin Khodabandeh: Yes.
Sam Ransbotham: And, actually, I think Ross Anderson was one of the authors on that; he’s just passed away. He was a great figure in the computer security community. But the paper was about models collapsing. And I think Shutterstock is in an interesting position, where they’ve got real oil, not synthetic oil, to complete your oil analogy. And that feels unique in our world, especially when the volume and the label[s are] there.
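[Editor’s note: The Nature paper mentioned is likely Shumailov et al., “AI models collapse when trained on recursively generated data” (2024). The effect is easy to reproduce in miniature: repeatedly fit a distribution to samples drawn from the previous generation’s fit, and the fitted spread tends toward zero. A toy sketch:]

```python
# Toy demonstration of model collapse: each generation fits a Gaussian to
# samples drawn from the previous generation's fit. With small samples,
# the fitted spread collapses toward zero (trajectory varies with the seed).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                            # the "real" data distribution
for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=20)  # model trained on its own output
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 50 == 0:
        print(f"gen {generation:3d}: sigma = {sigma:.4f}")
```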
Shervin Khodabandeh: That’s right. That’s right. Although, for my own sake, I hope synthetic oil works as good as real oil, because …
Sam Ransbotham: Because that’s what’s in your car?
Shervin Khodabandeh: I still like internal combustion engines.
Thanks for listening today. Next time, Sam and I speak with Andrew Rabinovich, vice president and head of AI and machine learning at Upwork. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.