The Creeping Risk of “AI Ick”
Customer-facing applications built on generative artificial intelligence can get weird—and fast. Smith faculty unpack the implications

Generative artificial intelligence is upending many facets of business—including the way corporations interact with their customers. In the two-plus years since ChatGPT unleashed previously unthought-of capabilities onto the world, tech giants have accelerated the development of autonomous AI “agents,” equipping them with human-like avatars and chatty language in hopes of improving the ways that their business clients deliver customer service.
This is causing great excitement in business circles. According to a Gartner study published at the end of last year, 85 per cent of surveyed service leaders plan to explore or test customer-facing technology that incorporates conversational generative AI in 2025. The logic makes sense: Handling customer queries and complaints has traditionally been a money pit, and most of those who actually work in sales and service believe AI will help them meet customer needs faster, more efficiently and more effectively. This should benefit everyone: As Booking Holdings CEO Glenn Fogel put it in a 2024 interview with The Verge, “I don’t know anybody who ever enjoyed waiting on hold to speak to someone to fix the problem.”
But in the rush to foist generative AI on their customers, organizations might be exposing themselves to an unforeseen risk: Deploying technology faster than their customers can get comfortable with it. “The number of companies implementing or introducing AI applications, across pretty much all customer touchpoints, has grown very fast,” says Ceren Kolsarici, associate professor and Ian R. Friendly Fellow of Marketing at Smith School of Business, and director of the school’s Scotiabank Centre for Customer Analytics. “It’s quickly become very hard for consumers to avoid interacting with or engaging with AI in their daily lives.”
Some customers have no problem dealing with bots, but Smith experts say others are skeeved out by them—especially if things glitch or get weird. And that can negatively affect such business-driving variables as brand perception, satisfaction and purchase intent. It’s therefore important for brands working to integrate generative AI into customer-facing apps to understand—and weigh—the factors that can contribute to “AI ick.” Here are three of the big ones:
AI still scares a lot of people
The rapid ubiquity of GenAI might make it seem as if everyone’s on board with the technology in their daily lives, but more people are concerned about it than are excited by it. In fact, skepticism is growing, according to some assessments.
Kolsarici says AI fears stem from multiple sources. Many people are concerned about automation threatening their livelihoods: “People are worried about whether AI will take over their jobs,” she says. There’s also increasing awareness of misinformation and manipulation: “We’re starting to see a lot more AI-generated deepfakes and biased algorithms, which is making it harder for people to trust what they see online,” she says. Finally, the risk of personal data leaks doesn’t sit well with privacy-minded individuals: “People are uneasy about how much power these AI companies have.”
These fears aren’t universal and may lessen as the technology improves and becomes more deeply embedded in day-to-day life. But today, Kolsarici says it’s unwise to assume any customer base is free of AI naysayers. “There is real resistance, and I think there will continue to be.”

No one likes the uncanny valley
Many of today’s more sophisticated customer-facing apps have the veneer of authenticity, with avatars that mimic human faces and language that emulates how humans might speak with one another. But according to Shamel Addas, associate professor and Distinguished Research Fellow of Digital Technology at Smith, few are quite polished enough to escape the so-called “uncanny valley”—the psychological phenomenon whereby people often recoil when exposed to something almost, but not quite, human. “The way our brains process these kinds of artifacts can make us very uncomfortable around them,” Addas explains.
Addas is currently leading a research project testing how the anthropomorphism of AI applications—that is, their human-like attributes—affects how people respond to them. “AI agents can increase trust, but they can also undermine it,” he notes. People tend to like it when bots employ basic human-like conversational mannerisms, he says, because it triggers the brain to apply normal rules and expectations to the interaction. But folks start to bristle when things approach verisimilitude—for example, when an animated avatar looks mostly, but not totally, real.
This unease often registers subconsciously, Addas says, but people know when the vibe is off—and it can plant distrust. He puts forward a hypothetical scenario involving a bank client encountering a human-like video avatar while asking about an investment. “Say the face looks kind of realistic, but there’s something going on—maybe the eyes are moving weirdly,” he says. “Even if the product is great, even if the recommendations are spot-on, if there’s something creepy about the interaction, it might be enough for the customer to leave the chat—and maybe move to another company.”
Bot errors hit harder
Generative AI tools have matured incredibly quickly, and with great sophistication, but they remain only as good as the data they draw from.
Sometimes, that’s data that end-users haven’t shared in the context of the interaction. Most people have had the off-putting experience of mentioning something to a friend in real life, then having it show up in an ad on their phone. Wherever you stand on the unresolved matter of whether our devices are listening to us, companies undeniably have more data on customers than ever before—and bots aren’t always judicious in what they do with it. When an AI-powered app presents information that a customer hasn’t actively (or even recently) shared, it can feel like a violation. “Every consumer will find this creepy,” affirms Kolsarici.
Then there’s the matter of data hygiene: The information GenAI apps draw from is often flawed, biased or outright incorrect. This can cause all sorts of problems, including the phenomenon some refer to as “hallucination”—when chatbots and their ilk pump out false information with complete self-assurance. “Organizations like OpenAI have come quite a long way in minimizing hallucinations, but they still happen,” Kolsarici says. “As a company, you don’t want to be facing the consequences of an AI mistake.”
Glitches like these rarely inspire customer goodwill. Of course, people make mistakes, too. But while most people might extend a bit of empathetic grace to an awkward or confidently wrong human, they’re far less likely to extend the same courtesy to a machine. “Companies need guardrails around the speed and scale of AI introduction,” Kolsarici says. “If you push the boundaries too much, it won’t benefit you in the long term.”
Prudence might seem anathema to the frontier spirit of AI innovation, but in Addas’s view, it’s needed to minimize end-users’ visceral negative reactions to the technology. He points out that people are most comfortable with AI interactions when they feel they have autonomy (that is, some control over the situation) and when they are given explanations for why and how the tool arrives at its conclusions.
Such reassuring conditions do not come automatically, but smart companies will do the work to create them, Addas says. “A sentiment like ‘I don’t trust the technology’ very quickly spills over into ‘I don’t trust the brand behind it,’” he explains. “So, it’s really important to get it right.”