
Amazon, Netflix, Spotify and TikTok – four global platforms that harness recommendation algorithms to keep giving consumers more of what they want. Their success is a testament to the incredible potential of this technology to enhance the consumer experience. The value of being recommended a new film, your next book, a great song or even a funny cat video, selected to perfectly match your mood and tastes, seems unquestionable.
However, in a recent paper, my co-authors and I caution consumers against an over-reliance on AI algorithms when making such picks. Specifically, we highlight how AI has the potential to constrain the human experience and ultimately limit individuals’ choices.
A limit on choice
The effectiveness of AI-driven recommendation algorithms lies in their ability to leverage past consumer behaviour. These models review a user’s previous choices to selectively curate future content or products that match those past preferences. From this perspective, the system shapes what we see and what we can select.
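To make this concrete, here is a minimal Python sketch of a content-based filter of the kind described above. The item fields ("title", "tags") and the scoring rule are illustrative assumptions rather than any platform's actual system: it simply ranks unseen items by how well their tags overlap with what the user has already consumed.

```python
from collections import Counter

def recommend(history, catalogue, k=5):
    """Rank unseen items by how well their tags overlap with the
    tags of everything the user has already consumed."""
    # Build a taste profile: how often each tag appears in past choices.
    taste = Counter(tag for item in history for tag in item["tags"])
    # Only consider items the user has not seen before.
    seen = {item["title"] for item in history}
    candidates = [item for item in catalogue if item["title"] not in seen]
    # Score each candidate by summing the counts of its matching tags.
    scored = sorted(candidates,
                    key=lambda item: sum(taste[tag] for tag in item["tags"]),
                    reverse=True)
    return scored[:k]
```

Feed it a history full of crime thrillers and it will keep surfacing crime thrillers; a poetry collection that shares no tags with that history scores zero and never makes the list.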
While this is often very useful, it may ultimately constrain our self-determination. With an AI algorithm, consumers are much less likely to be shown content or products that don’t match their past choices. Demonstrate an interest in a particular topic or genre and chances are you will be fed more of the same, taking you down a rabbit hole of related content at the expense of exploring other, less likely but no less valid, options.
Compare this to a visit to a physical bookstore, where we are greeted by a central display of books. Attracted by the cover and blurb of one particular title, we buy it even though it has no link to our previous reading history. It turns out to be an amazing read and introduces us to a previously unknown author.
A cost to consumers
Such overreliance on past consumer preferences can also impact the range of content shown to consumers with similar tastes. For example, a music-selecting algorithm sees that lots of users have previously listened to Taylor Swift, so it pushes her songs to more people. This popularity bias means that the market share for already popular products increases. Marginal (less popular) choices are overlooked by the algorithm, and the range of recommendations we are given becomes ever narrower.
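A toy simulation illustrates how this rich-get-richer dynamic plays out. The starting numbers and the 90/10 split between popularity-weighted and random recommendations below are assumptions chosen purely for illustration:

```python
import random

def simulate_popularity_bias(plays, steps=10_000, weight=0.9):
    """With probability `weight`, recommend in proportion to existing
    play counts (rich get richer); otherwise pick uniformly at random."""
    items = list(plays)
    for _ in range(steps):
        if random.random() < weight:
            # Popularity-weighted recommendation.
            choice = random.choices(items, weights=[plays[i] for i in items])[0]
        else:
            # Occasional uniform exposure for every item.
            choice = random.choice(items)
        plays[choice] += 1
    return plays

# One established hit versus nine niche artists.
counts = simulate_popularity_bias({"hit": 100, **{f"niche_{i}": 10 for i in range(9)}})
```

Run it and the hit's share of total plays climbs steadily while the niche artists stagnate, reproducing the feedback loop described above.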
Besides limiting choice, there is a financial implication for consumers. A lack of exposure means marginal products struggle to survive. This can lead to monopolies for the more popular recommendations, and a lack of competition means prices for these can rise, with the consumer left to pick up the bill.
The danger of objectification
Existing AI-driven recommendation models tend to exhibit a bias towards objectification, reducing individual and community characteristics to a limited set of features or data points. Often, the model fails to adequately capture, or under-represents, an individual’s actual preferences and instead delivers outcomes based on the larger group or category the individual belongs to.
When the underlying algorithm is biased, this can lead to unfair or inefficient treatment in areas like loan approvals, hiring or pricing. Such simplification can also mischaracterise individual preferences: anyone who has tried to get a service bot to adapt to their personal circumstances will have been frustrated by its inability to respond appropriately.
By reducing individuals to mere functions and scores, AI systems can limit our experiences and perpetuate subtle dehumanisation. This oversimplification can misrepresent our true preferences, leading to poor decisions or limited outcomes. What’s more, it can erode trust in AI, especially in sensitive areas like healthcare where understanding human uniqueness is crucial.
Increasing transparency
Such oversimplification underlines perhaps the biggest issue consumers face when being delivered algorithm-driven decisions: we don’t know what objective or goal an algorithm is designed to pursue. Is it primed to discount environmentally unsound firms when making investment suggestions, or is it simply recommending the firms that give the greatest returns?
Even if we cannot comprehend all the parameters an algorithm is based on – a problem computer scientists refer to as unexplainable AI – understanding the goal it pursues could help individuals place greater trust in its predictions. It would certainly allow consumers to make a more informed choice on whether to accept those decisions or continue to search for something more suitable.
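One way to picture what such transparency might look like is an objective function whose weights are declared rather than hidden. The sketch below is purely hypothetical (the field names and default weights are assumptions), but it shows how a recommender's goal could be stated explicitly enough for a consumer to inspect:

```python
def score_firm(firm, w_return=0.7, w_esg=0.3):
    """Score an investment candidate under a declared objective:
    the weights state exactly how much expected return and
    environmental record each count towards a recommendation."""
    return w_return * firm["expected_return"] + w_esg * firm["esg_score"]
```

A consumer who can see that `w_esg` is set to zero knows immediately that environmental records play no part in what they are shown, and can decide whether to accept the recommendations on those terms.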
Greater personalisation
Allowing consumers to personalise those parameters would go a long way towards giving them a greater sense of control and trust. For example, a user could have the ability to select whether they want the quickest, shortest or most scenic route between two destinations. Or perhaps, if they want to get fit, they could tweak their TikTok algorithm to show fewer videos of trending dances and more workout routines.
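In code, such personalisation can be as simple as letting the user choose which objective the system optimises. A minimal sketch, using a hypothetical route-planning example like the one above (the field names are assumptions):

```python
# Each objective maps a candidate route to a value to be minimised.
ROUTE_OBJECTIVES = {
    "quickest": lambda route: route["minutes"],
    "shortest": lambda route: route["kilometres"],
    "most_scenic": lambda route: -route["scenery_rating"],  # higher rating is better
}

def pick_route(routes, preference="quickest"):
    """Optimise whichever objective the user selected, rather than
    one fixed silently inside the system."""
    return min(routes, key=ROUTE_OBJECTIVES[preference])
```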
Building such personalisation into the interface presents development challenges, but it would give the consumer a greater sense of control and, in turn, greater trust in the suggested choices. It might also benefit firms as much as consumers. For instance, a study found that consumers who were fed content tailored to their ideal preferences didn’t just find it more helpful but were much more likely to reuse the service and more willing to pay for it.
A more balanced perspective
Not only does the homogenisation of outcomes ignore the nuance of individuals; it also restricts access to alternative views, voices and perspectives.
As we’ve seen in recent election campaigns around the world, it can be particularly problematic when delivering contentious or political content. By amplifying more extreme views at the expense of more reasonable (but less popular) arguments, recommendation algorithms can lead to the creation of echo chambers. An individual’s existing viewpoints or opinions are simply reinforced or amplified at the expense of all others.
To combat such polarisation and foster greater empathy, AI systems should be designed to expose users to diverse stimuli, perspectives and opinions. Allowing consumers to see both sides of an argument could help develop a broader understanding of an issue. Not only does this give individuals an opportunity to change their minds, it also helps foster greater compassion and respect for the apparent “enemy”.
Introducing serendipity
Building on this is the idea of developing AI systems that are more flexible and less tied to past preferences. Just because consumers listen to country music more often than jazz does not mean they only want recommendations for more sad ballads and honky-tonk tunes.
Analysing past preferences from a longer consumption time period could be one way to deliver more balanced recommendations. Another solution is to allow consumers to select the degree to which the algorithm recommends previously consumed categories and how much it delivers serendipitous or unrelated content.
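That second idea can be sketched as a user-controlled "serendipity" dial (the parameter name and the 20% default below are illustrative assumptions): the user reserves a fraction of each recommendation list for items drawn from outside their usual taste profile.

```python
import random

def recommend_with_serendipity(ranked_familiar, catalogue, k=10, serendipity=0.2):
    """Reserve a user-chosen fraction of the list for items drawn
    from outside the taste-matched ranking."""
    n_random = round(k * serendipity)        # slots set aside for surprises
    picks = ranked_familiar[:k - n_random]   # the usual taste-matched items
    pool = [item for item in catalogue if item not in picks]
    picks += random.sample(pool, min(n_random, len(pool)))
    random.shuffle(picks)                    # mix the surprises in, not tacked on the end
    return picks
```

Set `serendipity` to zero and the list behaves like today's recommenders; turn it up and the unexpected gets a guaranteed seat at the table.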
As individual consumers, we should have the ability to make our own choices. We can always choose to skip something if we don’t like it, but only if given the opportunity. By being given more unpredictable options, we could end up discovering a love for 1970s Nigerian funk or 18th century chamber music, which we never previously knew we had.
Broadening horizons
It is important that governments are aware of these challenges. Current regulation tends to focus on clear, measurable harms, such as bias or restrictions on price competition. However, it should also consider how the technology is used in real-world settings and how its features can interact with human behaviour to create results that limit users’ development and expression of their preferences.
If we want to retain freedom and serendipity in consumer choice, we need to adopt a user-centred approach to developing new AI systems – one that enhances transparency, exploration and personalisation and doesn’t restrict users to reliving the past but instead enriches the gamut of human experience.
Without opportunities to make new discoveries, those chance encounters that can make life so rich will be closed off to us.