How AI Skews Our Sense of Responsibility

Research shows how using an AI-augmented system may affect humans’ perception of their own agency and responsibility.

May 13, 2024



As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical — and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions.

This question is largely overlooked in current discussions about responsible AI. In practice, responsible AI programs are intended to manage legal and reputational risk, which is a limited view of responsibility if we draw on German philosopher Hans Jonas's useful conceptualization. He defined three types of responsibility, but responsible AI practice appears concerned with only two. The first is legal responsibility, wherein an individual or corporate entity is held responsible for repairing damage or compensating for losses, typically via civil law. The second is moral responsibility, wherein individuals are held accountable via punishment, as in criminal law.

What we’re most concerned about here is the third type, what Jonas called the sense of responsibility. It’s what we mean when we speak admiringly of someone “acting responsibly.” It entails critical thinking and predictive reflection on the purpose and possible consequences of one’s actions, not only for oneself but for others. It’s this sense of responsibility that AI and automated systems can alter.

To gain insight into how AI affects users' perceptions of their own responsibility and agency, we conducted several studies. Two examined what influences a driver's decision to regain control of a self-driving vehicle while the autonomous driving system is activated. In the first study, we found that the more individuals trust the autonomous system, the less likely they are to maintain the situational awareness that would enable them to retake control of the vehicle in the event of a problem or incident. Even though respondents overall said they accepted responsibility when operating an autonomous vehicle, their sense of agency had no significant influence on their intention to regain control when something went wrong. On the basis of these findings, we might expect a sizable proportion of users to feel encouraged, in the presence of an automated system, to shun their responsibility to intervene.


