
The ABCs of AI Illiteracy


Are organizations willing to do the hard work of capturing the value of AI agents, or are they blinded by a shiny new technology?


“AI Literacy is Trending in Schools,” The New York Times recently pronounced. Given the odd mix of giddiness and angst surrounding generative AI, it’s hardly surprising that educators feel the pressure to prepare their students for the brave new world that awaits them. So, increasingly, we see tweens learning to prompt chatbots, identify misinformation and create their own apps.

AI literacy is on the minds of organizational leaders as well, though it is the cost of AI illiteracy that keeps them up at night. Some serious coin is underwriting an AI-driven overhaul of operations, with underwhelming results. Maybe businesspeople need schooling in the basics of this transformative technology.

Stephen Thomas sits in the world between the workers of tomorrow and the workplace needs of today. As a Distinguished Teaching Fellow of Management Analytics at Smith School of Business, he teaches both university students and multinational executives how to best harness the power of AI. He is in a unique position to see how the issue of AI literacy is playing out in education and organizations.

In this conversation, Thomas shared his views with Smith Business Insight Senior Editor Alan Morantz.

I came across this pithy commentary on AI in today’s workplaces, referring to organizations metaphorically buying Ferraris but using them as garden sheds. Unfair comment or direct hit?  

I can believe it. I just read about how the average enterprise is spending more than $85,000 a month on AI-native applications, but that the value they’re receiving is just not there, mostly due to wasted resources, bad prompts or automation gone wrong. One study concluded that only about 15 to 20 per cent of the workforce can be considered highly proficient users of AI. 

Companies are still very much feeling around in the dark in terms of what will work and won’t work. Even though ChatGPT has been out for more than two years now, it’s still early in how these organizations are using the tools. Some of them are further ahead than others, and some teams are ahead of other teams within the same organization. But most are at the very beginning of the learning curve. 

It’s a rather expensive discovery process, isn’t it? 

It is. It’s a huge operational cost, and an opportunity cost. So, back to the Ferrari being used as a garden shed.

If we define AI literacy in an organizational setting as the ability to understand, evaluate, use and ethically engage with the technology, what does AI illiteracy look like in the wild?

It would be not knowing which tool to use: Claude, ChatGPT, a custom version? And which model? Each tool has many different models, and they’re updated all the time. Or being ignorant of the costs of what you’re asking it to do. Claude Opus is twice as expensive as Google’s Gemini, and many times more expensive than Claude Sonnet. So if you don’t need Opus, you’re wasting money or using your allotted quota much faster.
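
To make the cost point concrete, here is a back-of-the-envelope sketch in Python. The model names and per-token prices are hypothetical placeholders, not real vendor pricing, which varies by provider and changes often.

```python
# Hypothetical per-token prices in USD per million tokens. These numbers
# are placeholders, not real vendor pricing, which changes frequently.
PRICES = {
    "premium-model": {"input": 15.00, "output": 75.00},
    "mid-tier-model": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough cost in dollars of a single request, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# The same 2,000-token prompt with a 500-token reply, on two tiers:
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 2_000, 500):.4f} per request")
```

Run across thousands of requests a day, the gap between tiers compounds quickly, which is why matching the model to the task is a basic literacy skill.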

It means not understanding the effect of prompt engineering, basically the wording of the prompt. Bad prompts are expensive, not only because they drive up usage costs, but also because they cause hallucinations or errors that lead to wrong decisions or bad advice being shared with customers. I saw one report that suggested hallucinations cost U.S. enterprises $68 billion a year.
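
As an illustration of how much wording can matter, the sketch below contrasts a vague request with a structured one. The template and policy text are invented for the example; the general idea is that pinning down sources, scope and format leaves the model less room to hallucinate.

```python
# A vague prompt leaves the model to guess at scope, sources and format;
# a structured prompt pins all three down. Illustrative example only.
vague_prompt = "Tell me about our refund policy."

structured_prompt = """You are a customer-support assistant.
Answer the customer's question using ONLY the policy text below.
If the answer is not in the policy text, reply "I don't know" instead of guessing.

Policy text:
{policy_text}

Customer question:
{question}

Answer in at most three sentences."""

print(structured_prompt.format(
    policy_text="Refunds are available within 30 days of purchase with a receipt.",
    question="Can I get a refund after six weeks?",
))
```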

It means a lack of awareness of privacy considerations and of internal best practices that may or may not exist.

To be AI illiterate is to accept the output as is, without fact-checking, or to use it for the wrong use case. GenAI excels at some things and is bad at others. In our executive education courses, we call it the jagged frontier of capability: AI tools can solve some tasks with ease but fail spectacularly at others, and it's very hard for humans to predict which is which.

You talk to many corporate executives. I’m sure they all say that AI is essential and are backing it up with enterprise-wide directives. But many of these executives couldn’t explain how a large language model actually works if their lives depended on it. So does AI literacy need to start in the executive suite or in the middle of the organization? 

I think it needs to be done across the organization, with different courses taught at different levels. Anecdotally, in the past few months, I've been teaching senior executives at a global financial services company. I was blown away by the number of leaders who didn't realize that large language models could be wrong. I thought, I'm so glad you're here in this class.


There are a lot of misconceptions, especially at the executive level. Maybe even more so at the executive level, because executives have less time to scrutinize every news feed they come across. So they might trust the wrong things, the exaggerated claims and overly bold predictions. Plus, there's the FOMO aspect: they don't want to be the last one to commit to AI. So education at the executive level is absolutely critical. It needs to be ongoing, not a one-time thing. The course I taught last week is very different from the course I taught a year ago. And the course I teach next year, who knows?

In what way is it different?

Last week, we talked a lot about AI agents: how to set them up, how they can use actual tools to do math, book appointments, build dashboards, check the weather. We talked about RAG as the de facto best practice for querying internal documents. (Retrieval-Augmented Generation is an AI framework that improves large language model accuracy by retrieving data from trusted external and internal sources.) Almost none of this was part of my course a year ago, when the focus was on explaining what large language models are and what they might be used for.
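
For readers curious about the mechanics, here is a minimal sketch of the retrieval step behind RAG. TF-IDF similarity stands in for the embedding models and vector databases a production system would use, and the documents and query are toy data.

```python
# Minimal RAG retrieval sketch: rank internal documents against a query,
# then paste the best match into the prompt so the model answers from
# trusted text rather than from memory. TF-IDF stands in for embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed within 30 days of travel.",
    "The office is closed on all statutory holidays.",
]
query = "How many vacation days do I earn each month?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Keep the document most similar to the query.
scores = cosine_similarity(query_vector, doc_vectors)[0]
context = documents[scores.argmax()]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```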

Now, we’re getting more into the ethical and responsibility pieces. What are the regulations? What’s happening in the U.S., in Canada, in Europe? It’s absolute chaos everywhere: executive orders being issued and then rescinded in the U.S., Canada’s Bill C-27 being reviewed and then dying on the order paper, Quebec’s proposed regulations on responsible use of AI. None of that existed a year ago.

Do some organizations believe the responsibility for developing AI literacy falls on community colleges, government programs or their vendors rather than on themselves? 

I think so. A lot of C-suite executives are surprised by how hard AI is. It’s very hands-on. I get blank stares when I tell them they should take ownership of AI governance. I tell them, ‘This is your job. You have to hold monthly meetings and report to the board biannually on status and ROI.’ They say, ‘I don’t want to do that, I’m already too busy.’ They just want AI to start saving them money as soon as they pay the subscription fee.

To make it worse, OpenAI, Anthropic and Google all promise such big things. They publish flashy case studies about how an organization saved $100 million with their tool. Now here I am telling them, ‘You have to spend even more cleaning up your data, building an infrastructure, hiring expensive experts, and training everyone in the company a couple of times a year. You have to change your operating models.’ It’s just hard.

It’s kind of like exercising. Show them a picture of Arnold Schwarzenegger and everyone says, ‘Yeah, I want to look like that.’ Then tell them, ‘You have to go to the gym for two hours a day and you’re never again going to eat a piece of cake.’ And then they think, ‘What did I sign up for?’ That’s probably why a lot of proofs of concept fail. Everyone gets motivated, a budget is committed so the team can try it out. But without all those other pieces, the governance, infrastructure, evaluation and monitoring, it just falls flat.

Given the arc of technology development, AI models are getting ever better and more intuitive. As a result, will the cost of achieving a high level of AI literacy across an organization go down in the next few years, making this less of an issue going forward?

It’s hard to know for sure, but there are trends. The tools themselves are getting cheaper to use, and they’re still really good.

There’s a suite of benchmarks that academics use to evaluate these models, like Humanity’s Last Exam (a language model benchmark consisting of 2,500 questions across a broad range of subjects). If you plot the average intelligence of models over time and the intelligence per dollar, everything is going in the right direction. Models are getting more intelligent. They are getting cheaper per intelligence unit. There’s no plateau yet. 

As well, AI agents will be connected to more and more tools, so the LLM can stop trying to do so much by itself, whether that’s booking a flight or looking up a specific detail in the company’s HR policy. That will help, and some of these problems, like hallucinations, will start to go away.
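
That tool-connection pattern can be sketched in a few lines. In the toy example below, the tool functions and the hard-coded ‘model decision’ are invented for illustration; in a real agent, the LLM would return a structured tool call that the application then dispatches.

```python
# Sketch of the tool-calling pattern: the LLM delegates tasks it is bad
# at (exact arithmetic, fact lookup) to small functions the application
# exposes. The tool call below is hard-coded; a real agent would receive
# it from the model as structured output.
from datetime import date

def days_until(iso_date: str) -> int:
    """Tool: exact date arithmetic instead of a guessed number."""
    return (date.fromisoformat(iso_date) - date.today()).days

def lookup_hr_policy(topic: str) -> str:
    """Tool: fetch policy text from a trusted store (stubbed here)."""
    policies = {"vacation": "Employees accrue 1.5 vacation days per month."}
    return policies.get(topic, "No policy found for that topic.")

TOOLS = {"days_until": days_until, "lookup_hr_policy": lookup_hr_policy}

# Pretend the model asked for this tool with these arguments:
tool_call = {"name": "lookup_hr_policy", "arguments": {"topic": "vacation"}}
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(result)
```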

As more people use generative AI, we can make the argument that it will get easier and cheaper for the average person to be literate. Kind of like when computers came out; very few people were computer literate, and now it’s weird to find someone who is not.

In the meantime, though, you have to learn how to master the beast.

Yes. I think there’s value in getting in front of the wave and riding it the whole time. The longer you wait, the harder it will be to understand it all.