Responsible AI
The Responsible AI initiative looks at how organizations define and approach responsible AI practices, policies, and standards. Drawing on global executive surveys and smaller, curated expert panels, the program gathers perspectives from diverse sectors and geographies with the aim of delivering actionable insights on this nascent yet important focus area for leaders across industry.
For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. This year, we examine organizational capacity to address AI-related risks. In our previous article, we asked our experts about the need for AI-related disclosures. This month, we offered the following provocation: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. A majority of our experts disagree with that statement (with 72% disagreeing or strongly disagreeing), citing a fragmented global landscape of requirements. Nevertheless, many of our panelists say global companies have a responsibility to implement RAI across the organization. Below, we share insights from our panelists and draw on our own RAI experience to offer global companies recommendations for navigating the complexity to achieve their RAI goals.
Aligning on RAI Principles and Codes of Conduct Is an Urgent Global Priority
Our panelists acknowledge that there is a growing global consensus around the urgency of RAI, as well as considerable alignment on core principles across international organizations, technical standards bodies, and other multilateral forums. H&M Group’s Linda Leopold observes that “from large international initiatives like the G7’s Hiroshima Process on Generative AI and the Global Partnership on Artificial Intelligence to AI risk management frameworks like NIST’s, as well as AI governance standards from ISO and IEEE, and existing and emerging regulations … there are recurring overarching themes, such as fairness, accountability, transparency, privacy, safety, and robustness.” Data Privacy Brasil’s Bruno Bioni contends that “these multilateral policy forums play a critical role in setting agendas, implementing principles, and facilitating information sharing.” Carl Zeiss AG’s Simone Oldekop believes that as a result, “there has been progress in the international alignment of codes of conduct and standards for global companies in the area of AI governance.”
Putting Global Principles Into Practice Remains a Work in Progress
Translating conceptual RAI frameworks into technical standards and enforceable regulations is another matter. Harvard Business School’s Katia Walsh explains that while “global companies generally agree on the principles around using AI in safe, ethical, and trustworthy ways … the reality of implementing specifics in practice is very different.” Walsh notes that addressing the ethical dilemmas that emerge from AI use is “by definition, not straightforward.” Automation Anywhere’s Yan Chow agrees that common concepts like “algorithmic bias, data privacy, transparency, and accountability are multifaceted and context dependent.”
Some experts attribute this fragmentation to the lack of a common taxonomy and definitions. For example, IAG’s Ben Dias notes that “the U.S. and EU define AI as encompassing all machine-based systems that can make decisions, recommendations, or predictions. But the U.K. defines AI by reference to the two key characteristics of adaptivity and autonomy.” And National University of Singapore’s Simon Chesterman asserts that despite the scores of standards that have been developed by agencies, industry associations, and standards bodies like the International Telecommunication Union, the International Organization for Standardization, the International Electrotechnical Commission, and the Institute of Electrical and Electronics Engineers, “there is no common language across the bodies, and many terms routinely used with respect to AI — fairness, safety and transparency — lack agreed-upon definitions.”
Others point to cultural and political differences as a cause of inconsistency. Stanford CodeX fellow Riyanka Roy Choudhury notes that “translating abstract principles into concrete policies remains difficult, especially given the diversity of cultural contexts.” The Responsible AI Institute’s Jeff Easley says, “Different countries have different priorities and approaches to AI regulation. Some are more focused on privacy … while others prioritize innovation or national security.” Chow explains, “Companies operating globally must navigate this complex landscape, balancing diverse cultural, ethical, and legal expectations while maintaining consistent internal practices.” Taking the point further, Nasdaq’s Douglas Hamilton contends that international bodies like the EU are implementing “rules that overreach their geographic borders” in a way that stymies “policy harmonization.”
As the University of Helsinki’s Teemu Roos notes, “International standards are being established. However, how they will be enforced through binding regulation, and how such regulation can be harmonized globally, remains to be seen.” The result, as United Nations University’s Tshilidzi Marwala puts it, is “a fragmented landscape with conflicting regulations across jurisdictions.” Similarly, Unico IDtech’s Yasodara Cordova says, “Although global companies may be subject to various international standards, local regulations often differ significantly. This inconsistency can lead companies to prioritize compliance with local laws over adopting global standards.” And Chesterman worries that “organizations will simply pick the standards they like and follow them.” Mastercard’s Shamina Singh cautions that fragmentation of standards “is likely to lead to bad outcomes for everyone, including businesses, people, and society.”
Global Fragmentation Creates Implementation Challenges
Many panelists, including Easley, believe that this global fragmentation “makes it challenging for global companies to implement a uniform RAI strategy” or “ensure consistent implementation” of RAI requirements. In other words, a single company trying to meet many different local standards and requirements ends up with inconsistent RAI implementations across its own organization. EnBW’s Rainer Hoffmann agrees that “the lack of standardized alignment complicates the implementation of RAI, particularly for organizations that lack dedicated resources for this purpose.” And Oldekop similarly recognizes that “each organization and country has developed its own standards and codes based on its specific challenges and needs. … This leads to challenges with the implementation of requirements.”
Ellen Nielsen, formerly of Chevron, asserts that “different legal systems, cultural norms, and political climates influence RAI approaches, making complete alignment unlikely.” Roos agrees that while global governance of AI could potentially mimic the “Brussels effect, meaning that other regions imitate European regulatory frameworks,” he believes that “it is unlikely that either the U.S. or China, for instance, would fully comply with Europe’s (relatively) heavy-handed regulations.” Oxford Internet Institute’s Johann Laux offers this example: “The current standardization process in the EU shows just how difficult alignment between EU and international standards can be when it comes down to choosing concrete technical and managerial solutions.” TÜV AI.Lab’s Franziska Weindauer adds, “Even within the EU and with the EU AI Act that came into force on Aug. 1, 2024, the resulting requirements in practice are still under development” and “many unresolved questions remain.”
Despite Imperfect Alignment, Global Companies Must Be Proactive
Some of our panelists believe that there is sufficient alignment for companies that are serious about RAI to take action. Aboitiz Data Innovations’ David R. Hardoon argues that while there is always room for improvement, there is “[absolutely] sufficient alignment on the fundamentals for companies to effectively implement RAI requirements across the organization.” Similarly, ForHumanity’s Ryan Carrier cites a “baseline of regulatory infrastructure … including expert and robust governance/oversight/accountability, risk management, data governance, human oversight, monitoring, and ethical oversight, with transparency and disclosure requirements.” Likewise, Dias notes that “all countries generally seem to be taking a risk-based approach to AI regulation,” adding that “the risks they are highlighting are not unreasonable and are the risks companies would want to mitigate against regardless of regulation.”
Bekele advises companies “to develop robust, adaptive compliance strategies that align with the strictest standards.” Nielsen’s advice: “Global companies need to invest in local or regional expertise to comply with laws … and be agile and proactive in their approach to compliance, regularly updating their practices to align with the latest developments in local, regional, and international standards.” For Mozilla Foundation’s Mark Surman, “The key right now is for companies deploying AI to see what frameworks work and don’t work in practice and for governments to pay attention to these practical experiments.” He adds, “It’s only through this back-and-forth that we’ll get to responsible AI requirements that truly work across all industries and all of society.”
Recommendations
For global organizations seeking to implement RAI in the face of fragmented standards, we offer the following recommendations:
1. Invest in regulatory compliance. A fragmented regulatory landscape is nothing new for global companies. Invest in core compliance infrastructure, including adequate staffing, expertise, training, and resources. Leverage experience in other global compliance efforts, such as those related to data protection and privacy, to inform an AI compliance strategy. Ensure that regulatory and standards tracking is in place to keep pace with the latest developments; designated teams must have ample time to proactively adapt their RAI practices to align with new compliance requirements that emerge.
2. Align on an internal standard. Benchmark applicable regulations, frameworks, and standards for all jurisdictions in which the company operates. Decide whether to implement a single global standard or to adapt to local requirements. Clearly set this definition for all functions of the organization to follow.
3. Focus on core principles. When it comes to AI, regulatory compliance is table stakes. Where regulatory gaps or gray areas exist or an organizational benchmark has yet to be established, focus on the common core of RAI principles, whether borrowed from multilateral policy forums, international standards bodies, or other codes of conduct, and adapt them based on social, cultural, political, and commercial realities.
4. Don’t let the perfect be the enemy of the good. Full international alignment on AI, or on any other technology, for that matter, is highly unlikely. Global companies that prioritize RAI won’t wait to implement requirements organizationwide but will find a way to navigate the uncertainty by investing in regulatory compliance and internal benchmarking — remaining guided by core principles — while accepting that achieving RAI goals may take time and iteration. They will need approaches that are as adaptive as the technology itself. Waiting to act is not an option: Achieving regulatory compliance will be a complex process, and companies need as much lead time as possible to put appropriate RAI practices in place.
MIT Sloan Management Review is a research-based magazine and digital platform for business executives published at the MIT Sloan School of Management.