Will It Be AI Automation or Augmentation?
Generative artificial intelligence will help us to be superhuman, if we are willing to adapt to the technology

Will generative artificial intelligence make us irrelevant, elevated or superhuman?
Merely posing this question testifies to the transformative power of the large language models driving the latest iteration of AI. That power is already on display: tools such as ChatGPT, DeepSeek, Claude and Gemini are diagnosing rare diseases by cross-referencing millions of medical records in seconds, hastening design and development, generating new forms of content, and even composing music and screenplays. One study found that 40 per cent of all working hours may be affected by large language models and AI agents.
Given the changes that are already evident, it is natural to wonder whether Gen AI will make us heroes or chumps. The answer largely depends on the extent to which our collective skillsets will be shaped by automation or augmentation.
A centuries-old debate
The tension between human obsolescence and transcendence via technology is an ancient one. The Assyrian clay tablet and Egyptian papyrus automated recordkeeping, yet scribes evolved into historians. More recently, the printing press threatened the livelihoods of religious and scholarly elites by automating manuscript production, but also made books more accessible and accelerated the spread of ideas.
The Industrial Revolution brought similar upheaval, as mechanized looms and assembly lines displaced artisans and craftsmen. Yet it also expanded production, lowered costs and created new industries that reshaped economies and societies.
Our concern about being displaced by technology has evolved from fears about what we do, to fears about how we think, to questions about who we are as humans. The tension between automation and augmentation plays out on multiple levels: the individual level (developing skills or making them obsolete), the organizational level (restructuring or reimagining workflows) and across industries and societies (wiping out jobs or redefining them).
Every technology we have seen, or will come to see, brings both automation and augmentation, but in different proportions, depending on how it is leveraged. With generative AI, the balance can be particularly nuanced. As individuals, we can use it for basic automation to perform repetitive, time-consuming tasks, such as quickly drafting or replying to emails. Or we can use AI agents to generate images based on clear prompts, such as “create an illustration of a penguin wearing a suit and top hat.” As these examples illustrate, automation takes over a task without fundamentally changing our skills.
Arguably, automation can enable us to redirect our focus toward strategic activities, but this requires that we possess or can quickly acquire higher-order capabilities. Without them, we risk being replaced by technology. With them, we can leverage automation to elevate our impact.
The path to superhuman
To become superhuman, we will need not only the more strategic form of automation but also augmentation. This is what results when we engage AI as a partner to iteratively exchange ideas and enhance our capabilities and insights. Think of the process as “productive friction”: it brings together human intuition and machine intelligence to create breakthroughs neither could achieve alone. A single prompt to an AI agent might generate mundane or even flawed ideas, but it might also produce the building blocks of new ideas. When combined with our expertise and intuition, those building blocks spark new insights.
Consider a financial analyst using an AI tool to simulate market crashes. Initially, the tool processes vast amounts of historical data to suggest likely outcomes and risk scenarios. The analyst, however, quickly notices the model overlooks emerging geopolitical risks in Southeast Asia, a critical blind spot. Dynamically interacting with the model, the analyst introduces new risk parameters, prompting the AI agent to adjust its projections. This collaborative refinement leads to a more finely tuned risk assessment tool, one that better anticipates black swan events by combining the AI agent’s data processing capabilities with human insight.
Similarly, a doctor might use AI to sift through complex datasets for a more accurate diagnosis, or a creative professional might collaborate with AI to reshape a narrative, turning basic outlines into compelling stories.
In each case, the tension and interplay between human insight and AI output catalyzes innovative thinking — a cognitive symbiosis that elevates our capabilities. Embracing this productive friction is essential if we want to harness AI’s full potential and truly become superhuman.

Putting AI agents to the test
Recently, I was able to experience productive friction by partnering with an AI agent on a learning project. In a graduate course I teach on AI and digital transformation in business, I wanted to create a decision-making simulator. This would be a custom GPT (a customized version of ChatGPT trained as an AI agent for a specific purpose) that would allow students to analyze a business case study from multiple stakeholder perspectives. I used OpenAI’s ChatGPT to collaborate on the design and testing, and Google AI Studio to cross-validate the customized AI agents, going back and forth with both generative AI tools to refine the simulator.
At the start, I directed ChatGPT to suggest several potential scenarios for the case study. It came back with options that were variously off the mark, overly broad, unrealistic or lacking in clear ethical dilemmas. I pushed back, refining the scope and guiding the AI toward a scenario that best fit my objectives.
To ensure the case was rich in detail, I wrote an initial draft of the scenario, outlining the business context and learning objectives, then worked with ChatGPT to refine it. With each revision, the AI agent challenged me to think more deeply, asking for clarifications and suggesting ways to make the case more dynamic, such as introducing a media scandal to intensify the ethical stakes.
With the case narrative in place, the next step was to define the key stakeholders whose perspectives students would adopt. ChatGPT initially suggested five roles, two of which I decided to drop to avoid overcomplicating the case. The remaining three — a CEO, a data scientist and an ethics officer — were plausible stakeholders, but the power balance was off. I realized a data scientist could lack the authority to push back against the CEO’s strategy, making the stakeholder dynamic less engaging. So, I challenged the AI agent’s suggestion: “Can we elevate the data scientist’s role so they have more strategic influence?” ChatGPT proposed a chief data and AI officer (CDAO), giving the role a broader scope beyond technical execution. This small but critical shift made the case more realistic and deepened the decision-making complexity. I refined the title further, relabeling it CDAIO.
To ensure that each customized AI agent embodied its stakeholder’s mindset, I developed a template for role definitions, such as name and title, primary responsibilities, top priorities in ranked order, main concerns, access to information and constraints. ChatGPT helped me refine the details of these items and suggested new ones to include in the definition.
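The article does not reproduce the template itself, but a role definition of the kind described can be sketched as a small data structure that renders into a system prompt. Everything below is illustrative: the class, field names and example values are assumptions, not the author's actual template.

```python
from dataclasses import dataclass

@dataclass
class StakeholderPersona:
    """One custom-GPT role definition. Fields mirror the items the
    article lists: name/title, responsibilities, ranked priorities,
    concerns, information access and constraints."""
    name: str
    title: str
    responsibilities: list
    priorities: list  # ranked, most important first
    concerns: list
    information_access: str
    constraints: str

    def to_system_prompt(self) -> str:
        """Render the persona as a system-prompt block for the AI agent."""
        ranked = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(self.priorities))
        return (
            f"You are {self.name}, the company's {self.title}.\n"
            f"Responsibilities: {'; '.join(self.responsibilities)}.\n"
            f"Your priorities, in order:\n{ranked}\n"
            f"Main concerns: {'; '.join(self.concerns)}.\n"
            f"Information you can see: {self.information_access}.\n"
            f"Constraints: {self.constraints}."
        )

# Hypothetical example values for the CDAIO role discussed above.
cdaio = StakeholderPersona(
    name="Alex Rivera",
    title="Chief Data and AI Officer (CDAIO)",
    responsibilities=["AI strategy", "data governance"],
    priorities=["responsible AI deployment", "revenue growth"],
    concerns=["model bias", "regulatory exposure"],
    information_access="model audit reports and board briefings",
    constraints="cannot overrule the CEO unilaterally",
)
```

Keeping the definition structured this way makes it easy to regenerate or tweak a persona's prompt as the role evolves, as happened when the data scientist was elevated to CDAIO.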
I then created the AI agents in collaboration with ChatGPT and tested them to ensure the fictional executives responded authentically and strategically to stakeholder concerns, reflecting real-world corporate behaviour.
Finally, I cross-validated the AI agents with Google AI Studio by simulating role-play interactions. I fed Gemini, Google’s AI assistant, a snapshot of each stakeholder’s persona, asked it to observe interactions and propose refinements, and tested reactions to new questions (such as “How would the CEO respond to this question?”).
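The cross-validation step above follows a simple loop: for each persona, show its snapshot to a second model, ask for a critique, then probe its reactions to new questions. A minimal sketch of that control flow follows; `ask_model` is a stub standing in for a call to the reviewing assistant (no real API is invoked, and all names here are illustrative).

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a second AI assistant.
    # A real implementation would send `prompt` over that model's API.
    return f"[model reply to: {prompt[:40]}...]"

def cross_validate(personas: dict, probe_questions: list) -> dict:
    """For each stakeholder persona, ask the reviewing model to critique
    it and to predict how the persona would answer each probe question."""
    feedback = {}
    for role, snapshot in personas.items():
        critique = ask_model(
            f"Here is a stakeholder persona:\n{snapshot}\n"
            "Observe its likely behaviour and propose refinements."
        )
        reactions = [
            ask_model(
                f"Given this persona:\n{snapshot}\n"
                f"How would the {role} respond to: {q}"
            )
            for q in probe_questions
        ]
        feedback[role] = {"critique": critique, "reactions": reactions}
    return feedback

report = cross_validate(
    {
        "CEO": "Growth-focused chief executive, risk-tolerant.",
        "ethics officer": "Ethics lead, frames ethics as risk management.",
    },
    ["Should we delay the product launch over the bias findings?"],
)
```

Collecting critiques per role, as this loop does, is what surfaces mismatches like the overly idealistic ethics officer described next.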
One key insight emerged: The ethics officer’s responses initially sounded too idealistic. She needed to be more pragmatic, framing ethics as risk management rather than solely a moral dilemma. Adjusting her prompt made her more effective in corporate decision-making.
A new form of collaboration
This process of human-AI augmentation led to a design that was creative, interactive and user-friendly. The end product will prepare students for the complexities of modern business, where AI and ethics are increasingly intertwined. Throughout this process, I realized it was not just about building customized AI agents; it was about creating a new way of learning. The back-and-forth pushed me to think more carefully and deeply, expanding my viewpoint. The result was something neither I nor the AI assistant could have created alone.
This is what being superhuman means (though I believe I am still human): Not outsourcing tasks or thinking to AI, but engaging it as an iterative collaborator.