Managing the Human Risks of Biometric Applications

The intimate surveillance afforded by biometric technologies requires managers to consider negative impacts on privacy and human dignity.

August 29, 2024

Reading Time: 9 min 

Frontiers

An MIT SMR initiative exploring how technology is reshaping the practice of management.

Illustration: Gary Waters/Ikon Images

In April, Colorado became the first state to mandate that companies protect the privacy of data generated from a person’s brain waves, an action spurred by concerns over the commercial use of wearable devices intended to monitor users’ brain activity. Use of those and other devices that enable the collection of humans’ physiological data warrants robust discussion of the legal and moral implications of the increasing surveillance and datafication of people’s lives.

Biometric technologies measure intimate body characteristics or behaviors, such as fingerprints, retinas, facial structure, vein patterns, speech, breathing patterns, brainwaves, gait, keystroke patterns, and other movements. While much activity in the field has focused on authenticating individuals’ identities in security applications, some biometric technologies are touted as offering deeper insights into humans’ states of mind and behaviors. It’s when these biometric capabilities are put into play that companies may endanger consumers’ trust.

Managers in some organizations see the potential for analyzing highly granular physiological data to improve operational efficiency. Financial institutions Barclays and Lloyds use heat-sensing devices under desks to monitor which workstations are used frequently in order to improve office layouts, desk assignments, and resource allocation. Mining company BHP uses smart hats that measure brainwaves to detect and track truckers’ levels of fatigue, to protect them and improve company safety. These applications can benefit both the companies and their employees.

On the other hand, even the most well-intended applications of biometrics can evoke a heightened sense of creepiness, which, according to human-computer interaction researchers, refers to the unease users feel when technology extracts information that they have unknowingly or reluctantly provided. This feeling is exacerbated when consumers fear that biometric information may be used to harm them or discriminate against them.

Balancing these conflicting interests is tricky, and compromising one in favor of the other can have costly consequences for organizations. Public opposition to Amazon Fresh’s use of video surveillance at checkout, and accusations that video recordings of customers were being analyzed by offshore workers, contributed to the grocery chain eventually discontinuing video surveillance in its stores. Such examples give rise to an important conversation about whether and how organizations can deploy biometrics without being creepy and without violating people’s rights to be respected and treated ethically.


Reprint #: 66121

The MIT Sloan Management Review is a research-based magazine and digital platform for business executives published at the MIT Sloan School of Management.
