Who Is Accountable When AI Goes Wrong?

If developers of artificial intelligence refuse to give users the opportunity to identify, judge, and fix mistakes, then they must be held accountable when biases take over

To see how views on machine learning have changed in recent years, consider the experience of Facebook’s News Feed. Introduced 12 years ago, the feature is the primary way users are exposed to content posted on the network. Initially, Facebook users were not impressed with the News Feed algorithms that dictate who sees what content, but any backlash was muted as users kept clicking and Facebook kept growing.

Today, News Feed is an important part of our digital lives; it is estimated that 60 percent of people around the world get almost all their news from Facebook. News Feed is just as automated as before, but now we are more apt to see its dark side: the potential for bias baked into the feed.

As concerns grew, Facebook’s initial reaction was to reassure users that its central feature was completely automated and that there was nothing to worry about. Now that News Feed is seen as too important to be left to full automation, Facebook developers have been forced to introduce more human judgment into the mix.

To Kirsten Martin, the News Feed experience illustrates both the evolving views of artificial intelligence (AI) and a road map for how to deal with the inevitable missteps arising from AI-driven products and processes. We may be wiser about AI’s shortcomings, says Martin, a researcher at George Washington University’s School of Business, but we have yet to figure out who must be held accountable for mistakes or ethical lapses.

“We shouldn’t be shocked that biases occur, whether augmented by AI or as the result of a normal human decision,” she says. “We can, however, ask the human decision maker, Why didn’t you hire that person? Why did you discard that widget on the manufacturing floor? Why did you sentence that person to nine years instead of 12? The problem with AI is that it’s hard to ask an AI-powered system these questions because it is inanimate.”

Martin was speaking recently at the Ethics and AI Conference, organized by the Scotiabank Centre for Customer Analytics at Smith School of Business.

"There’s a questionable history of people who claim that something is too complicated, that they just have to be trusted, and, by the way, that they’re also not responsible for the outcomes"

She says that when designers work with artificial intelligence, they inevitably have to consider how AI-related mistakes will be governed. Are users able to judge the mistakes and fix them when they are identified? In most cases, they are not.

“Developers may claim it’s proprietary information. They may say, We can’t put this decision in the context of past decisions because you may be able to guess how the decision was made, which we think is proprietary. Often we’re told something is too confusing to explain. There’s a questionable history of people who claim that something is too complicated, that they just have to be trusted, and, by the way, that they’re also not responsible for the outcomes.”

Martin argues that if developers put a black box around an algorithm, they must then take responsibility for the biases and mistakes that result.

The News Feed experience also highlights a related question: How much decision-making power should AI be given?

Martin says it depends on the type of decision. We may move toward a threshold model in which AI has a limited role in decisions considered pivotal, such as insurance eligibility or criminal sentencing, and a more expansive role in less weighty ones.

“For moral comfort zones, we want humans in the loop,” she says. “Even with autopilot technology, we are still more comfortable with a human pilot at the controls. At the other end of the spectrum, we could accept large automated decisions being made for tasks that are relatively inconsequential.”
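
Martin’s threshold model lends itself to a short illustration. The sketch below is purely hypothetical and is not drawn from Facebook’s systems or Martin’s research; the Decision type, the stakes score, and the HIGH_STAKES cutoff are all invented here to show how pivotal decisions could be routed to a human reviewer while inconsequential ones remain automated.

from dataclasses import dataclass

# Hypothetical cutoff: decisions scored at or above this value are
# treated as pivotal and kept under human review.
HIGH_STAKES = 0.7

@dataclass
class Decision:
    description: str
    stakes: float  # 0.0 (inconsequential) to 1.0 (pivotal); assigned upstream

def route(decision: Decision) -> str:
    # Pivotal decisions (insurance eligibility, criminal sentencing)
    # keep a human in the loop; low-stakes ones may run fully automated.
    if decision.stakes >= HIGH_STAKES:
        return "human review required"
    return "automated decision permitted"

for d in (Decision("rank a story in a news feed", 0.2),
          Decision("approve an insurance application", 0.9)):
    print(f"{d.description}: {route(d)}")

Note that the threshold itself is a human judgment: someone must decide which decisions count as pivotal, which is exactly the accountability question Martin raises.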