Why Humans and AI Assistants Need Relationship Advice

AI-assisted decision-making shows great promise, but human biases get in the way

A human hand and robot hand pointing at a pie chart. (Image: Shutterstock/Igor Link)

Humans have been making decisions ever since Adam and Eve bit into the forbidden fruit of the Tree of Knowledge. Over centuries, evolution has refined decision models that help us — alone or in teams — to consider options, assess potential outcomes and decide on the best course of action.

Today we are in the early days of a new evolutionary spiral: shared decision-making not with other humans but with powerful assistants driven by artificial intelligence. It is not fanciful to imagine humans one day being left out of the loop. At least for the foreseeable future, however, a growing number of us will collaborate with algorithmic experts and achieve better outcomes than we could on our own. In this model, an AI assistant provides recommendations or predictions that a human evaluates and approves or rejects.

The future of decision-making has already arrived in banking, energy, legal counselling, health care, insurance, retail and other areas of the economy. In medical diagnostics, AI assistants analyze medical records and test results. In hospitals, they help determine the optimum number of nurses per shift. In banking, they detect fraudulent activities and assess risks. In website management, they help content moderators evaluate the credibility of social media posts.

So what does the experience with AI-assisted decision-making so far tell us about our working relationship with algorithmic colleagues? It tells us that there are significant tension points that need to be resolved, and that this is all just a precursor of massive changes to come. Consider these four questions.

Based on experience to date, do humans and AI assistants work well together?

The collaboration has not exactly been a smashing success. Evidence suggests that individuals working with AI assistants typically outperform individuals working alone. Still, their performance is usually inferior to that of AI making decisions without human supervision.

When all goes right, human decision-makers weigh their own insights when determining whether a recommendation from a computational model should be followed. Unfortunately, things rarely go right because we can’t seem to get out of our own way. We either accept random predictions or faulty decisions without verifying whether the AI is correct, or we mistrust the AI model and ignore highly accurate recommendations.

Humans seem to struggle to detect algorithmic errors, says Tracy Jenkin, an associate professor at Smith School of Business and a faculty affiliate at the Vector Institute for Artificial Intelligence.

“What we’re finding is that, in some cases, there is algorithmic aversion where individuals just don’t want to adhere to the recommendations of those AI systems,” says Jenkin, who is studying human-AI collaboration with Smith colleague Anton Ovchinnikov. “They’d rather listen to the advice of humans or themselves even though the algorithms outperform humans.”

“On the other hand, there’s also algorithmic appreciation where individuals will just go along with the AI’s advice even though it might be wrong,” she says.

There are many reasons why people struggle with AI assistants. Previous studies have identified 18 different factors, ranging from lack of familiarity and human biases to demographics and personality.

We are predisposed to overestimate AI capabilities, for example, when our brains are taxed trying to solve complex tasks or when we lack self-confidence. (It doesn’t help when misguided managers, in the interest of encouraging AI use, denigrate human decision-making abilities.)

Alternatively, when we lack information on how well an AI model performs, we rely on our own spider sense or focus on irrelevant information.

On the bright side, humans seem to be better at calibrating trust in AI assistants when they are part of a team. Groups have been shown to have higher confidence when they overturn the AI model’s incorrect recommendations, and they appear to make fairer decisions.

Wouldn’t humans trust an AI recommendation if they were given the reasoning behind it?

In principle, explainable AI — which is intended to make AI decisions and predictions more understandable and transparent — should help the human decision-maker critically assess the AI recommendation rather than rely on the technology to always have the right answer.

Unfortunately, it doesn’t work that way. In study after study, explanations have failed to reduce overreliance on AI. Sometimes, this transparency is worse than no explanation at all. It has been suggested that the mere presence of an explanation increases trust and that explanations anchor humans to the AI prediction. Sometimes, people are not even interested in an explanation, or doubt they would understand one coming from the AI assistant.

“When it’s a favourable recommendation and [people] ask for an explanation,” says Jenkin, “they’re much more likely to actually adhere to the recommendation of a human adviser.” On the other hand, when it’s an unfavourable recommendation, such as an AI recommendation to an apartment owner to lower the rental price of a unit, people are more likely to adhere to the AI advice than a human adviser. 

If people don’t respond to explainable AI as hoped, are there other promising ideas to improve AI-assisted decision-making?

A set of interventions known as “cognitive forcing functions” has the potential to get human decision-makers to engage with AI assistants more thoughtfully. For example, asking individuals to decide on an issue before they see the AI’s recommendation can get around the anchoring bias that an upfront AI recommendation might otherwise trigger. Even delaying the presentation of an AI recommendation can lead to better outcomes.

It has been shown that cognitive forcing interventions significantly reduce overreliance compared to explainable AI approaches.
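
As a rough, purely illustrative sketch of how these two forcing functions might be wired into a decision tool (this is not drawn from the studies described here; the `get_ai_recommendation` callback, the prompts and the delay are all hypothetical), a workflow can record a person’s independent judgment first and hold back the AI’s answer:

```python
import time

def collect_decision(case_id, get_ai_recommendation, reveal_delay_seconds=5):
    """Illustrative decision flow with two cognitive forcing functions:
    the human records an independent judgment before the AI recommendation
    is shown, and the reveal itself is delayed to encourage reflection."""
    # Forcing function 1: ask for an independent call first, so the AI's
    # answer cannot anchor the initial assessment.
    initial = input(f"Your own call on case {case_id} (approve/reject): ").strip().lower()

    # Forcing function 2: delay the AI recommendation rather than showing it instantly.
    time.sleep(reveal_delay_seconds)
    ai_rec = get_ai_recommendation(case_id)  # assumed to return "approve" or "reject"
    print(f"AI recommendation for case {case_id}: {ai_rec}")

    # Only now ask for the final decision, with the independent judgment on record
    # so over- and under-reliance can be measured afterward.
    final = input("Your final decision (approve/reject): ").strip().lower()
    return {"initial": initial, "ai_recommendation": ai_rec, "final": final}
```

Logging the initial judgment alongside the final one also makes it possible to see, after the fact, how often people switched toward or away from the AI.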

Encouraging human decision-makers to consider second opinions may also be a winning strategy. One study looked at the outcomes when second opinions from other human peers or another AI model are presented to a decision-maker in addition to the AI recommendation. When an investor, for example, is about to buy or sell a stock based on an AI model’s recommendation, would a second opinion from investors on an online discussion forum help them make better investment decisions?

The study found that when the AI model’s decision recommendation and a second opinion are presented together, decision-makers reduce their overreliance on AI but increase their under-reliance. But when decision-makers can choose when to solicit a peer’s second opinion, overreliance on AI can be mitigated without triggering increased under-reliance.

Still, other experts argue that the design of AI decision systems should account for human biases. Instead of providing the most accurate recommendation for a decision, the theory goes, the AI assistant would be designed to encourage critical reflection. An AI model, for example, could be designed to be a devil’s advocate, presenting counter-arguments to the individual’s initial choice.

What does the future of organizational decision-making look like beyond the next five years?

The trajectory of decision-making technology extends well beyond AI-driven assistants.

Humans today still have an edge on AI decision systems in some areas. We are better than AI at noticing subtle patterns in unstructured data. And we have an easier time accessing insights across organizational boundaries; a decision-maker can visit a supplier’s site and pick up valuable intel or talk to policymakers about political trends.

These advantages will last for another five years, some experts say. The tipping point will be when businesses en masse equip employees with wearable recording devices and cameras, with all that data fed into machine-learning algorithms.

Consider, too, the advances in computational cognitive modelling that will enable AI systems to know us better than we know ourselves. This approach allows researchers to model a person’s cognitive state and make predictions about their mental state, beliefs and knowledge.

At a certain point in the not so distant future, why bother keeping humans in the loop?