How Should We Govern Meta for the Benefit of Society?

Brian Kenny:

On May 16, 2023, Sam Altman, the CEO of OpenAI, did something that no tech CEO had ever done before: he implored the US government to regulate his industry. One member of Congress seemed to speak for the rest in saying, “The time for self-regulation is over. It’s time we legislate to hold you accountable.” Could he be right? Today on Cold Call, we welcome visiting professor Jesse Shapiro and special guest Julie Owono to discuss the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.” I’m your host, Brian Kenny, and you’re listening to Cold Call on the HBR Podcast Network. Jesse Shapiro is a visiting professor of business administration at Harvard Business School, in the entrepreneurship unit. Julie Owono is the executive director of Internet Sans Frontières and an inaugural member of the Facebook Oversight Board, and she’s the protagonist in today’s case. Thank you both for joining me today.

Jesse Shapiro:

Excited to be here.

Julie Owono:

Thanks for having me.

Brian Kenny:

Great to have you both here. I know our listeners will really appreciate hearing about this. So thanks for being here to talk about it. Jesse, I’m going to start with you by asking you what the central issue is in the case, and what your cold call is when you start the discussion.

Jesse Shapiro:

So the way I look at it, the central issue in this case is that Facebook, now Meta, is a for-profit corporation that nevertheless has very wide-ranging social impact. And the question is, how does this for-profit corporation govern that social impact in a responsible way? We usually like to start the discussion in class by asking a really easy cold call, which is: how do you use Facebook? As you probably know, Brian, the company was founded not that far from where we’re sitting, over in Kirkland House across the river. Initially, it was used by college students to socialize and date and things like that. And still today, our students will often give answers like planning events, staying in touch with friends, or sharing birthday photos. But we use that as a jumping-off point to draw a contrast with some of the use cases that we highlight in the case. For example, uses by journalists, whose work relies critically on social media as an outlet for communicating what they’re doing.

Brian Kenny:

And I like to remind people, too, that even though a lot of young people, and I’ll use my twentysomething children as the example, don’t use Facebook anymore, Facebook is still the elephant in the room. It’s an enormous platform. Is that right?

Jesse Shapiro:

Absolutely. Yeah. Facebook is incredibly important, both as a business and as a platform for communicating. And in the case, we get into the fact that Facebook has all kinds of potential ramifications for society, as a means of communicating journalistic findings and as a way of coordinating social movements, so lots of uses that are distant from those that our students usually come up with when we ask them the cold call.

Brian Kenny:

Yeah. Why did you decide to write the case, Jesse?

Jesse Shapiro:

I think, as you mentioned in the introduction, the public conversation around social media emphasizes many things, like the fact that people communicate with other people who are like them, and that social media may have the potential to create things like echo chambers. But a lot of those things have actually been with us, that is to say with humanity, for a really long time, at least as far as we’re aware. One of the things that is really new about social media is that that communication is mediated and centrally governed by a for-profit corporation headquartered in the Bay Area. I think that is genuinely unprecedented in the history of human communication. And so, it’s something that is newer than is appreciated, and something that business leaders and social leaders need to really think about.

Brian Kenny:

Yeah, we hear a lot about AI these days. I mentioned it in the introduction, and I think a lot of people assume that tools like AI are going to really make this whole task of monitoring content much easier and much more straightforward. I’m wondering if you have any thoughts about whether or not that’s true.

Jesse Shapiro:

Well, I’m sure Julie knows a lot more about this than I do, but at the scale at which Meta is removing content, more than a billion pieces of content a quarter, and more than a hundred million even if you exclude obvious spam, there’s no way for humans to look at all of it and make decisions. So, of course, machines are going to do a lot of the on-the-ground, proximate work, just as spam filters have been doing for us since time immemorial with email. But at the end of the day, what the machines do has to be governed by rules, and there have to be people who make decisions about how to trade off different values when they come into conflict. Technology is not going to change that. That’s something that we, as a society, or we, as humans, have to make decisions about.

Brian Kenny:

Yeah, that’s a great point in the conversation to turn to the Oversight Board, and to turn it over to you, Julie. Again, thanks for being here with us today. We’re beaming you in from somewhere, so thank you for beaming in. I’m wondering if you can tell us a little bit about the Oversight Board: why it was created, and what it’s intended to do.

Julie Owono:

Yes, thanks so much, Brian. I’m, first of all, extremely glad to be here. What Jesse has just said about the need for rules is a great introduction to remembering where we’re coming from and when the idea of the board started emerging. If you remember, in 2018, we all woke up in horror, finding out that a platform we all use, Facebook, had been used to commit a genocide against the Rohingya population in Myanmar. That, for me, was a terrifying wake-up call, because as a free expression advocate I had to come to the realization that, yes, we do need rules, we do need limits. But then, who is going to set those limits? When the conversation started emerging about setting those limits, who should be accountable, and how we should make sure that accountability is real, the first response worried me quite a bit. That response was having the CEO of Facebook, Mark Zuckerberg, grilled at Congress. It was not directly related to the Rohingya situation; it was related to another terrible situation, which was the use of Facebook to corrupt an election, or to corrupt a referendum. The responses that I heard from the CEO of Facebook did not reassure me at the time. And there were other loud voices in the digital rights space, which I know very well, because I’ve been an advocate for the past 15 years. Those voices started saying, “Okay, somebody needs to make sure that whatever Mark Zuckerberg is telling us right now is exactly what is happening, and that there is some form of oversight, or at least some form of accountability.” That’s when we first started hearing seriously about making sure Facebook could be held accountable. And then, very rapidly, Mark Zuckerberg himself posted on his Facebook profile advocating for the same thing. He said, “I should not be the one making these decisions,” and I totally agree with that. And later on, the company hired a person in charge of human rights.

And I have to say, five years in, there’s still a lot to work on. It’s still a work in progress. But from a historical perspective, when it comes to communications, and to making sure that communications do not disrupt the peace in our societies or the democratic principles that we care so much about, I am confident saying that the Oversight Board is probably one aspect of the solution that we’ve been waiting for. But of course, I’m totally not biased.

Brian Kenny:

That’s okay. That’s why you’re here. And I’m sure it’s been an eventful five years. So I’m curious, Julie, how does one get invited to be on the Oversight Board? What kind of background does one need to have, and what is yours in particular? And why would one accept this role?

Julie Owono:

It’s a mix of everything. If you look at the 23 members we have now, some come from legal backgrounds. I myself am a lawyer, but you have people with cooler professions who have been journalists. I think of my colleague Endy Bayuni, who used to be editor-in-chief of the Jakarta Post in Indonesia, or of Alan Rusbridger, my other colleague, who used to be editor-in-chief of The Guardian. We also have digital rights activists, and that’s the cooler side of my own background: I advocate for freedom of expression on digital platforms and in digital spaces in general. My colleague Nighat Dad leads the Digital Rights Foundation, which also advocates for freedom of expression, for equality, and for much more respect for women’s rights online, because that’s another very big problem. So it’s a diverse group of professionally and academically accomplished individuals. As I said, I lead Internet Sans Frontières, a French NGO working to protect freedom of expression online. I was also a practicing lawyer at the Paris Bar, working on the emerging technology law that nobody talked about 10 or 15 years ago, when I passed the Bar. And most importantly, I also had the immense opportunity of joining the Berkman Klein Center, right across the river, where I was invited as a fellow in 2019, precisely to work on these issues. At the time, I didn’t call it content moderation. At the time, my project was safeguarding free expression while also dealing with harmful content online, so it was a very complex title. But yes, I guess I was already reflecting on how we can make sure that we continue to see the good on the internet, and on social media platforms specifically, while also addressing, in a human rights friendly manner, the harmful content that we see.

Brian Kenny:

Jesse, let me go back to you for a second. The board is independent, as Julie described it. Who decides who gets invited? How is it funded? Is it truly independent, I guess, is the question.

Jesse Shapiro:

Yeah. I’m used to being the professor in front of the classroom, but with Julie here, I feel like I’m the student, and I hope I did my homework.

Brian Kenny:

You did.

Jesse Shapiro:

So as you say, the goal of the Oversight Board was to have an entity that was external to Facebook, now Meta, that could be involved in governance decisions without being internal to the corporation. How do you do that? It is funded by an irrevocable trust: money is put in, and the funding does have to be renewed from time to time, as it was recently. As for personnel decisions, Meta had to be involved at some level, because the board couldn’t just spring up from the ground. So Meta was involved in choosing some of the original members and the governance structure. But it was designed from the beginning so that, gradually, it would become self-sustaining, with a governance structure in place, independent of Meta, that would decide on future members of the Oversight Board, future members of the governance organization that runs it, and so on. So that, gradually, it would be something that Meta did not have any direct hand in.

Brian Kenny:

So Julie, can you tell our listeners how the board operates? I assume you only get the cases that nobody else really knows how to deal with, but maybe you can talk a little bit about how cases rise to the board’s attention, and give us some examples of cases that you’ve had to think about.

Julie Owono:

Yes. First, before I do that, I really want to thank Jesse for stressing the fact that the board is absolutely independent. I know the structure might be complex, and I personally was also hesitant, but I have absolutely no regret about having joined the Oversight Board, precisely because we have the freedom to make the consequential decisions that we sometimes have to make, sometimes against our individual interests. But how do cases come to us? Either you are a user whose content has been taken down, or you are a user who is seeing things that you think shouldn’t be on the platform, and you ask us to ask Meta to take those down. Or the platform itself, Meta, refers cases that are extremely complex for them to deal with. One of the most famous referred cases we received directly from Meta was, of course, when the accounts of former President Trump were suspended in the wake of the January 6th insurrection. We’ve received a lot of appeals, really, more than 2 million, I think. Once we receive them, a team is tasked with triaging the substantive appeals from the more straightforward ones. Looking at all these details takes a lot of labor, so we have a team dedicated to doing that. Once that team has done the triage, the shortlisted cases are sent to the case selection committee, and selected cases are assigned to a panel of board members. Then, once that panel has come to a final determination on what to do in a specific case, the case is sent to the rest of the board for reconciliation of different points of view, if we have really opposing views in some cases. And yes, then we vote and the decision is published. Very importantly, we can make binding decisions. That means, if we tell Facebook, “You should take this down,” Facebook must take it down. Another part of our decisions consists of policy recommendations. What is interesting is that, although those are not binding, Meta must respond to them. And having been a board member for almost four years, I have to say this is the most important and interesting part, in my opinion, when it comes to holding the company accountable and being transparent about what is happening behind closed doors in Silicon Valley. As Jesse rightly reminded us, most of these companies are in the Bay Area, and Facebook is in the Bay Area. So there is a dialogue that happens when Meta responds to our recommendations, and the response can be, “No, we cannot implement this, because it’s too complicated,” or for some other reason, or, “Yes, we will implement this, and here’s how we’ll do it. Here’s the timeline.” For instance, they gave us a timeline for reinstating former President Trump’s accounts. That conversation is the most important, because this is where you actually learn what is happening behind the scenes. And it gives you even more food for thought about the most pressing issue we should try to address next. We also now issue summary decisions and accept expedited review cases. Summary decisions allow us to respond to obvious errors that have been acknowledged by the company. Expedited reviews can be completed in as little as a week, so that we can give a response to a complex situation. So yes, it’s a work in progress. The board is evolving all the time, from an operational point of view, to make sure that we can live up to this difficult mission.

Brian Kenny:

Yeah. And Jesse, you mentioned earlier that the board is only seeing a tiny, tiny fraction of the content that is actually taken off the platform through other means; the numbers are astronomical on that front. But I do want to talk a little bit about what Julie mentioned about engaging in this dialogue with Meta. They have a business to run, and they know the nuances of their business. If the board is making recommendations that are going to impact their business, or that may be impossible to implement because of the nature of their business, how does that work? Where’s the rub there?

Jesse Shapiro:

Yeah, I think this is one of the most interesting aspects of this case for business leaders, because it’s clear why it’s valuable to have external expertise, like Julie’s, in these conversations, and it’s clear why it’s valuable for that to be public and transparent, as Julie says, so that the board says something and Meta has to respond. An instinct that we often hear in the classroom when talking about this case, and one that I understand and think is interesting to explore, is that maybe the board should be able to make binding policy decisions, just the way it makes binding content decisions. So in the same way that the board can say, “This piece of content has to go back up,” or “This piece of content has to come down,” and Meta has to listen, maybe the board should be able to say, “We need to make this change to Meta’s policies.” Where things get really interesting for business leaders to think about is that this is a publicly traded company that has to have earnings calls and so on. What happens if, one day, the board makes a binding decision that is going to cost a billion dollars over the next 12 months to test and implement? How will that factor into the P&L of the company? How will that factor into the planning, the earnings management, of the company? These are really difficult questions, and I don’t know that we’ve settled on a perfect solution. But I think the board is one attempt to find a middle ground, where the board is able to have influence by publicizing its recommendations, but cannot force Meta to, say, change the way the business operates in a fundamental way. That’s a decision, ultimately, that Meta gets to make.

Julie Owono:

Of course, business interests are extremely essential, but we also live in an era where customers and users do care about your impact on society, do care about how you treat human rights and the human beings who use your platforms. Having an oversight board like the Meta Oversight Board, in which we try to make those principled decisions, lets us constantly remind Facebook, “Yes, your business interests are extremely important, and they are not necessarily opposed to your interest in protecting human rights and your footprint, in terms of your social responsibility as a company.” And that, to me, is a fundamental message that more tech CEOs should be able to hear: being conscious of your impact in terms of human rights is not necessarily counterproductive when it comes to making money.

Brian Kenny:

Yeah. Jesse, I want to turn back to you for a second, because it wasn’t just Mark Zuckerberg who got raked over the coals in congressional sessions; it was Jack Dorsey, and it was the head of Google. All of these social networks have come under fire for similar transgressions. Are any of the others using this kind of independent oversight board as a way to be more transparent about what they’re doing?

Jesse Shapiro:

This is a problem that every social media organization has confronted. Just about every company you would think of as a social media company, and some you might not think of that way, has some kind of advisory entity, and in many cases those include people who are not inside the company. But certainly at the time it was created, and as far as I know still today, the Oversight Board is the only example of one that can make binding decisions on the corporation while itself being external to the corporation. So that, as far as I know, is a unique model. It certainly was when the Oversight Board was created, and I believe it remains that way today. That leap, which again is something we love to explore in the classroom with future business leaders, from “we’d like to get some advice from experts” to “we’d like to allow experts to actually make decisions in cases that we find difficult to adjudicate,” is a leap that no other entity has taken, as far as I know.

Brian Kenny:

So we actually did a Cold Call episode a while back on the emergence of AI at Google and how it was leading to diversity issues, because the information being fed into the machine was creating bias. The question about self-governance in that industry came up in that conversation, well before the folks from OpenAI sat down with Congress. But it does raise the question: these are important societal issues emerging through these social media networks, so should the government be more involved? Is there a role for federal regulation in helping to manage these kinds of things?

Julie Owono:

It is important for everyone to be involved when it comes to understanding, limiting, and mitigating the negative impact of technology on society. But at the same time, we also want to be careful about government regulation. We’ve seen government regulation around the world, and as someone who has studied many of the laws that have been passed, I can guarantee you that the aim is often not to deal with the problems; the aim is to control platforms over which governments have little or even zero control. And thankfully, there are some spaces in the world where governments do not have control, especially when it comes to speech. So I would say regulation is extremely important, but it should be done with caution, and in a manner that does not require the government to supersede the platform, to become the platform, or to decide for the platform. That, for me, is the limit we should keep in mind. Thinking about safeguards, absolutely. Thinking about the infrastructure that those companies must have when they want to get into the business of letting people speak freely on platforms, that, for me, is absolutely the role of governments. But saying that governments should be able to tell us what is true and what is not, or who is a fact checker and who is not, that is very complicated. I hope that, in democratic societies, we will sit down to lead by example and show that it is possible to provide good regulation without interfering with users’ freedom of expression.

Brian Kenny:

Jesse, do you have a point of view on that?

Jesse Shapiro:

Yeah, this is something we talked about a lot amongst ourselves, my co-authors and I, as we were working on the case. And we have some really interesting quotes on this topic from Julie’s colleague Mike McConnell, a constitutional law scholar at Stanford, a former appellate judge, and one of the founding co-chairs of the board. My take is that there are areas where most of us would be very comfortable with a role for government. For example, many societies today have prohibitions against child pornography, and if private platforms are not taking steps to remove or prevent it, my guess is that many people would be comfortable with the government coming in, backstopping that, and making sure those prohibitions are enforced. But if you flip around to a more expansive view of the government’s role in a case like this, it very easily turns into the government deciding what’s true, deciding what is information versus misinformation, and heavily regulating speech, on which we have strong restraints in the US and many other modern democracies. And I think there’s good reason for those restraints. So heading into that territory, things get a lot more complicated, and it becomes much more difficult to come up with a good governance model that involves the government in a central way. That’s one of the reasons why understanding the role of the board, and of other organizations like it, is so important.

Brian Kenny:

This has been a fabulous conversation, as I knew it would be. I’ve got one question left for each of you, and I’ll start with you, Julie. You are four or five years into this experiment, and I’m just wondering, how do you measure impact? How do you know if you’re actually achieving what you set out to do?

Julie Owono:

Great question, because we just published, in June this year, our second annual report, which covered what we did in 2022, and I do have some great figures to share. First of all, there are the decisions we make. So far, we have made more than 100 recommendations to Meta, and they have decided to implement a great number of them. One that I can think of, which may seem very obvious now, was making sure the community standards are translated into virtually all the languages supported on Facebook and Instagram, because we cover both platforms. Another recommendation came out of a very recent case related to political speech. There was a case in which people in Iran were using a phrase that basically meant, I think, “Death to Khamenei.” Facebook used to take down that phrase, despite the fact that it was obviously political speech, an obvious cry of criticism against the government, including during the demonstrations that we saw earlier this year in Iran, specifically around the death of this young lady. We told Meta, “You should not take this down.” And now Meta has committed not to take down that phrase in the context of demonstrations, because they know, they understand, that this is important. Yes, that’s how we measure our impact: how much of Facebook’s policy has changed, and how much of the user experience has changed. Do you now know why your content is being taken down? Do you know what community standards you’ve violated? Do we know more about dangerous individuals and organizations, how they are designated, and who decides who is dangerous? All of this is part of the impact package of the board, if you will. Of course, it can be frustrating, because you don’t see it right away. The moment we made a decision three years ago, people could not immediately see the result. But three years in, I now get a notification telling me why my content has been taken down, and people are happy to see that. As a user, I’m extremely reassured to know that it was not just an arbitrary decision, made potentially at the request of a very repressive government that doesn’t agree with what I say. As a digital rights advocate and as a user myself, I think it is important that, finally, users feel treated fairly by a platform that they care so much about.

Brian Kenny:

Jesse, I’m going to give you the last word, as one of the authors of the case. If you could tell us what you’d like listeners to really take away from this case, what would it be?

Jesse Shapiro:

I think what this case shows is that these for-profit social media corporations have enormous potential to affect our society, and for that reason, they have an obligation, to themselves, as Julie said, and also more broadly, to use that power responsibly. The Oversight Board may not be perfect, and it’s probably not the final iteration of this process, but I think it’s an extremely important example for people to study and learn about, because these problems are going to remain with us and may become more important going forward.

Brian Kenny:

Thank you both for joining me on Cold Call.

Julie Owono:

Thank you, Brian.

Jesse Shapiro:

Thank you, Brian. Thank you, Julie.

Julie Owono:

Thanks, Jesse.

Brian Kenny:

If you enjoy Cold Call, you might like our other podcasts: After Hours, Climate Rising, Deep Purpose, IdeaCast, Managing the Future of Work, Skydeck, and Women at Work. Find them on Apple, Spotify, or wherever you listen, and if you could take a minute to rate and review us, we’d be grateful. If you have any suggestions or just want to say hello, we want to hear from you: email us at coldcall@hbs.edu. Thanks again for joining us. I’m your host, Brian Kenny, and you’ve been listening to Cold Call, an official podcast of Harvard Business School and part of the HBR Podcast Network.
