The Hazards of Putting Ethics on Autopilot

Research shows that employees who are steered by digital nudges may lose some ethical competency. That has implications for how we use the new generation of AI assistants.

May 08, 2024

Reading Time: 6 min 

Topics

Frontiers

An MIT SMR initiative exploring how technology is reshaping the practice of management.

More in this series


The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants built on large language model (LLM) technology, such as that behind ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. But if they are not thoughtfully implemented, they risk diminishing employees’ decision-making competency, especially when ethics are at stake.

Our examination of the consequences of “nudging” techniques, used by companies to influence employees or customers to take certain actions, has implications for organizations adopting the new generation of chatbots and automated assistants. Vendors encourage companies implementing generative AI agents to tailor them in ways that increase managerial control. Microsoft, which has made copilots available across its suite of productivity software, offers a tool that enterprises can customize, thus allowing them to more precisely steer employee behavior. Such tools will make it much easier for companies to essentially put nudging on steroids — and based on our research into the effects of nudging, that may over time diminish individuals’ own willingness and capacity to reflect on the ethical dimension of their decisions.

AI-based nudges may be particularly persuasive, given the emerging inclination among individuals to discount their own judgments in favor of what the technology suggests. At its most pronounced, this abdication of critical thinking can become a kind of techno-chauvinistic hubris, which dismisses human cognition in favor of AI’s more powerful computational capacities. That’s why it will be particularly important to encourage employees to maintain a constructively critical perspective on AI output and for managers to pay attention to opportunities for what we call ethical boosting — behavioral interventions that prompt mindful reflection rather than mindless reaction. This can help individuals grow in ethical competence, rather than allowing those cognitive skills to calcify.

Digital nudges, especially in the form of salient incentives and targets, can lead to subtle motivational displacement by obfuscating the ultimate aims of the team or organization and shifting attention to proximal goals. When a performance measure becomes the main objective, it ceases to function as an effective measure — a phenomenon known as Goodhart’s law. For example, copilots might be designed to nudge customer-facing workers to maintain five-star ratings by offering bonus points or financial rewards.

Reprint #:

65410

“The MIT Sloan Management Review is a research-based magazine and digital platform for business executives published at the MIT Sloan School of Management.”
