When AI Agents Are Too Smart for Our Own Good
What is lost when early-career workers no longer need to struggle with challenging assignments?
If you’re a knowledge worker in 2026, you may view AI agents as brilliant interns who never sleep: smart, tireless and still in need of oversight.
But there is another metaphor that captures the tension at the heart of the human-AI working relationship: the overly involved parent who does their child’s school assignments for them. We readily understand how helicopter parents can rob children of essential learning experiences. Christopher Cotton argues that AI agents can do the same, especially for students and less experienced workers still developing expertise.
Cotton, a professor of economics at Queen’s University, and Lydia Scholle-Cotton, a PhD candidate in the Faculty of Education at Queen’s, defined the “AI expertise paradox” in a recent forum article for Issues in Science and Technology. They laid out the main concern: Students and early-career professionals who rely heavily on AI to perform more like experts today are less likely to develop into the real experts of tomorrow. As they see it, “AI creates an illusion of understanding” and undermines the process by which expertise has historically formed.
Cotton fleshed out these concerns in this conversation with Smith Business Insight senior editor Alan Morantz.
You agree with the prevailing view that high-level judgment will remain essential even as AI automates routine knowledge work. But you worry that the pipeline that produces this expertise is at risk. Can you unpack that?
As AI gets better, it’s potentially crowding out more of the work we do daily, whether writing memos or doing baseline analysis. But this doesn’t eliminate the need for exercising judgment in our jobs. This is especially true at the top of organizations. People are still making decisions about what to prioritize or how to approach complex problems.
These senior critical-thinking roles are less at risk of being taken over by AI. Our concern is not that AI is eliminating the need for such jobs. Our concern is that AI will prevent the next generation of workers from developing the expertise and judgment those roles require.
This is a different concern from the usual worries about automation and de-skilling. We know that new technology leads people to lose the ability to do things the old way. But AI isn’t just changing the way we work; it’s changing the way we think.
Learning science tells us that people learn through productive struggle. By wrestling with content, working through aspects we don’t understand, and trying different approaches, we develop a deeper understanding and better judgment. Learning is about more than knowing the answer; it’s about figuring out how to get there.
Over the course of our education and years of work, we slowly build expertise in our field. But AI is getting really good at the routine tasks that students and early-career knowledge workers typically face. It increasingly supplies answers to problems and guides decisions that people used to have to work through themselves. People are increasingly sidestepping the struggle. This isn’t all bad. People in the early part of their careers can be more effective at their jobs in the short run. But they might not be developing the expertise they need to lead their fields in the future.
What does that productive struggle look like, and why is it so hard to shortcut?
In a healthcare setting, medical students learn to diagnose and treat an ailment by considering its potential causes and weighing alternative treatments. With AI, they can increasingly shortcut that reasoning process, receiving likely diagnoses and treatments without fully thinking through the problem themselves. They’re less likely to understand the reasons for that diagnosis. They’re just going to “know the answer.”
As AI gets better at diagnosis, this can be good for patients. But it may also create a class of doctors who are more AI technicians than physicians. They may be great at using AI to diagnose common problems, but are incapable of handling rarer cases, leading research or serving as chief clinicians.
In academic research and teaching, graduate students have traditionally learned by wrestling with methods, models and data. Now, students are relying on AI to tell them how to approach problems and what answers to give. They’re not going through the process of deciding which approach, model and interpretation is best. We’re increasingly seeing students confidently present highly complex analyses without having developed a reasonable understanding of what they’re doing. These kinds of analyses often look right on the surface but have serious methodology and judgment problems underneath.
The counterargument is that workers have always adopted new tools and turned out just fine.
Absolutely. Our AI expertise essay is not a doomsday prediction. By recognizing how AI reliance undermines the development of expertise, both workers and employers can avoid predictable problems. High productivity at AI-assisted work is no longer a sign that someone understands the material. Employers need better ways to identify which employees have real understanding and judgment. And employees who want to advance in their careers need to find ways to avoid letting AI crowd out their expertise development.
It’s tempting to say that the most capable people will still develop expertise and rise to the top. But there’s a game-theoretic problem here similar to the classic prisoner’s dilemma. Consider a highly capable early-career worker with management potential who is rewarded based on their productivity, such as how many briefs they write in law, how many patients they diagnose in medicine, or how many analyses they do in business. They are likely to use AI to be as productive as possible. If they don’t, they risk falling behind less-capable peers who use AI. Such pressure can push heavy AI use even among those who would prefer to develop a deep understanding of their material.
As a society, we haven’t been mindful of this part of the knowledge-building process, beyond saying, ‘You have to pay your dues’ or ‘Keep working hard and you’ll get ahead.’ What we’re saying is, ‘If you keep working hard but don’t actually build deeper understanding, that may no longer be the case.’
There is a tension here: some of the interventions you propose, such as changing incentives or slowing things down, can disadvantage workers who don’t have the luxury of opting out of AI-assisted productivity.
That is what makes this such a difficult issue. I imagine most knowledge workers will not have the luxury of avoiding AI in their careers. Widespread reliance on AI will likely reduce expertise, but it will not eliminate it. Expertise will remain valuable, likely even more so as it becomes rarer. Those who still find ways to develop it in an AI-driven world will be in high demand.
Schools and universities may play an important role in preserving expertise going forward. But this requires realigning incentives. We need to move away from basing grades largely on take-home assignments like term papers and problem sets and instead require students to demonstrate understanding, critical thinking and judgment in the absence of AI. We can preserve space for students to take time and develop these skills.
If we can slow down and focus on understanding rather than on output, AI can actually help us build expertise. I use AI like a sparring partner to stress test new ideas. It can be a Socratic mentor and excellent tutor. But using it in this way requires us to take time to really internalize the material ourselves, rather than letting AI do our thinking for us.