
Explaining Away AI Insights


In many situations, we’re held back by an irrational aversion to algorithmic advice. Would an explanation from an algorithm help?

Image: A chatbot making a housing rental recommendation (iStock/atakan)

Ten years ago, Airbnb launched a tool that executives hoped would delight hosts who use the platform to rent their properties. Smart Pricing, as it is known, is driven by machine-learning algorithms that allow hosts to optimize their nightly rates based on a variety of market-setting factors. Similar to systems offered by ridesharing platforms such as Uber and Lyft, Smart Pricing presents hosts with recommendations that they can either follow or override with their own pricing.

Given the stakes involved, surely hosts would jump at the opportunity to use the cold logic of dynamic pricing rather than the hot takes of biased human decision-making. Why not avoid overpricing or underpricing a property? But often, hosts do not follow cold logic. In fact, Smart Pricing can be a hard sell, as chatter on Reddit shows.

Practitioners trying to solve this problem say the biggest barrier to getting hosts to adhere to algorithmic pricing systems is human psychology. People tend to put more value on what they own than on what others own (a bias known as the endowment effect), and this is what drives rental unit hosts to believe AI advisors underprice their property.

Resistance to following the lead of an AI advisor is not limited to questions of pricing. The phenomenon is known as algorithm aversion, and research shows it occurs more often as the consequences of a decision become more serious.

Researchers have identified several reasons why algorithm aversion occurs: distrust of technology, concerns about biased model recommendations, or uncertainty about the quality of AI recommendations. We tend to penalize errors by an AI advisor more severely than errors by a fellow human, so a misstep by an algorithm significantly decreases our willingness to follow its recommendation.

It wouldn’t be such a problem if human decision-makers ran circles around AI advisors, but in many settings the opposite is true, and the gap will only widen in the years ahead as models are refined. The emerging consensus is that for an AI recommendation to be taken seriously, it should be accompanied by an explanation built from key factors and data points.

But is that consensus well founded? Evidence suggests otherwise. And we still need to know whether decision-makers are more or less likely to follow advice from an AI advisor than from a fellow human, and whether explanations make any difference.

Lessons from the lab

New research provides some clues. The study was conducted by Tracy Jenkin and Anton Ovchinnikov of Smith School of Business, Cecilia Ying, who is completing her PhD at Smith, and Stephanie Kelley of Sobey School of Business, who also teaches at Smith and is a PhD alumna.

Fittingly, their study centred on a fictitious short-term online rental operation patterned after Airbnb. The researchers enlisted 363 university business students for a lab experiment. The students played the part of rental hosts and were told that, based on historical data, similar units were successfully rented when priced between $110 and $170. Participants chose an initial rental price and were then provided with a pricing recommendation from either a human or AI advisor based on current market factors. They were given the option of asking for an explanation from their advisor before finalizing their rental price. (All recommendations were designed to increase the participants’ expected rental revenue compared to their initial price.)
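To make the revenue logic concrete, here is a minimal sketch of the expected-revenue arithmetic that such a recommendation rests on. The study does not publish its demand assumptions, so the prices and rental probabilities below are hypothetical, chosen only to show how a lower nightly rate can still raise expected revenue.

```python
# Illustrative sketch only: the study does not publish its demand model,
# so the probabilities below are hypothetical numbers chosen to show
# why a *lower* recommended price can still raise expected revenue.

def expected_revenue(price: float, rent_probability: float) -> float:
    """Expected nightly revenue = price x probability the unit rents."""
    return price * rent_probability

# A host anchors on a high initial price; the advisor recommends less.
initial = expected_revenue(price=165, rent_probability=0.55)  # 90.75
advised = expected_revenue(price=140, rent_probability=0.80)  # 112.00

print(f"Initial choice: ${initial:.2f} expected per night")
print(f"Advised price : ${advised:.2f} expected per night")
# The advised (lower) price wins on expected revenue, yet the visible
# price cut is what hosts tend to react to.
```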

To test how cognitive biases shape our interactions with advisors, particularly when their recommendations do not line up with our expectations, participant groups were given different advice by human and AI advisors. One group of participants was advised to lower the initial rental price, while the second group was advised to raise it. The recommendations must have seemed counterintuitive, particularly to those in the first group: Why reduce the initial rental price if they already had a high probability of renting their fictitious units?

This is where the bias of loss aversion kicked in. Generally, the emotional impact of a loss is felt more intensely than the joy of an equivalent gain. It is also true that changes in price (such as a rental rate) are felt more keenly than changes in probabilities (such as the odds of successfully renting a unit). Driven by these biases, the study participants in the first group — those recommended a rental price lower than they expected — were significantly less likely to accept it than those in the second group who were advised to raise their initial rental rate.
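A rough way to see why the price-cut group balked is to weight the two changes the way loss aversion suggests people do. The multipliers below are illustrative, not values from the study: losses are simply counted about twice as heavily as equivalent gains, and a shift in probability is discounted relative to a visible change in price.

```python
# A rough, prospect-theory-flavoured illustration -- not the study's model.
# Assumptions: losses are felt about twice as strongly as equivalent gains,
# and a probability shift is felt only half as keenly as a price change.

LOSS_AVERSION = 2.0       # hypothetical multiplier applied to losses
PROBABILITY_WEIGHT = 0.5  # hypothetical discount on "abstract" probability shifts

def felt_value(price_change: float, probability_change: float) -> float:
    """Crude 'felt' score of a recommendation: dollars plus weighted probability points."""
    def weigh(x: float, scale: float) -> float:
        return x * scale * (LOSS_AVERSION if x < 0 else 1.0)
    # Probability change is converted to percentage points so the terms are comparable.
    return weigh(price_change, 1.0) + weigh(probability_change * 100, PROBABILITY_WEIGHT)

# Group 1: cut the price by $25 but gain 25 points of rental probability.
group1 = felt_value(price_change=-25, probability_change=0.25)  # -50 + 12.5 = -37.5
# Group 2: raise the price by $25 and give up 10 points of probability.
group2 = felt_value(price_change=25, probability_change=-0.10)  # 25 - 10 = 15.0

print(f"Price-cut advice feels like  {group1:+.1f}")
print(f"Price-rise advice feels like {group2:+.1f}")
# Both recommendations can raise expected revenue, yet only the price cut
# registers as a loss, consistent with the first group's greater reluctance.
```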

Interestingly, the study participants were somewhat more likely to go along with a recommendation from an AI advisor, even if that recommendation was lower than expected. The reason, researchers suggest, is that people think of AI advisors as more competent in data-driven tasks than human advisors.

The reluctance to accept advisor recommendations is only part of the story coming out of the lab. The study also showed that when confronted with a higher or lower than expected price recommendation, only half of the participants sought an explanation, and they were even less likely to seek one from an AI advisor than from a human one. Even those who did seek an explanation often chose to stick with their initial decision. So much for the power of explanations.

According to the study’s researchers, this finding suggests that people may view AI advisors as more competent in data-driven tasks but also as less likely to provide an explanation they can understand. They may be right: people tend to have difficulty folding the probability estimates typically featured in AI explanations into their decision-making. The explanation fails to hit home and, as a result, is easy to ignore.

Rethinking the value of explanations

The study results give food for thought to organizations seeking to encourage the adoption of AI tools. Algorithm aversion is a persistent phenomenon, as are cognitive biases such as loss aversion. It is hard to move someone away from their preferred decision (such as a rental price) if the suggested, albeit better, alternative makes them feel they are taking a loss, though at least an AI advisor can be a bit more convincing than a human advisor.

AI system designers may need to rethink the effectiveness of providing an explanation alongside a recommended action. The fact that only half the participants sought an explanation, and that they were less likely to do so from an AI advisor, highlights the need to better understand when and why individuals seek explanations of AI-based recommendations. Could it be that few people believe they would understand an AI explanation, as earlier research has suggested, or that the AI explanations may be of poor quality, as this study’s researchers were told by practitioners in their preliminary interviews?

And for all of us who interact with algorithmic decision aids, there is a valuable lesson here on how to best manage this new relationship. Check your algorithm aversion at the door.

“What would be ideal,” says Tracy Jenkin, one of the researchers behind the study, “is to look at the advice, interrogate it, ask for an explanation if you don’t understand it, and determine whether you should or should not adhere to the algorithm.”