
Six Ways to Ease AI-Induced Stress


Employees are struggling to adapt to an uncertain and troubling new era of technology

[Illustration: a brain and data matrix. iStock/gremlin]

In June 2018, GPT-1 was released with little fanfare. Eight months later, its sibling model, GPT-2, showcased significantly improved language generation capabilities, but its full release was initially withheld over concerns about potential misuse. In June 2020, GPT-3 was launched, representing a substantial leap in language understanding and generation performance. It inspired the development of artificial intelligence (AI) tools for a wide range of tasks such as coding, design, photo editing and video production, writing, analytics, education, health management, research, shopping and translation.

Doubts regarding GPT-3’s transformative potential were quickly replaced by anxiety when GPT-4 was demonstrated in March 2023. The demonstration showcased how simple it was to generate a working web page from a hand-drawn sketch – a job generally perceived as requiring not only technical skills but also creativity.

Elsewhere, Microsoft introduced Microsoft 365 Copilot and Google unveiled Duet AI in Google Workspace, marking AI’s entry into everyday office work. Outside the office, an AI-generated song mimicking the voices of Drake and The Weeknd garnered millions of views on TikTok, while the AI-generated Portrait of Edmond de Belamy sold at auction for $432,500.

While AI continues to astound with its capabilities, it has also reignited concerns about the potential replacement of human workers. Once relegated to science fiction novels, this fear now has real-world significance. At the time of GPT-4’s release, thousands of experts and technology leaders signed an open letter calling for a pause on the development of systems more powerful than GPT-4, while regulators and tech luminaries engaged in incessant debates with no clear consensus on how AI should be governed.

Uncertainty and the speed of change 

The speed at which the technology is advancing, the wide range of domains it touches, the profound disruption it causes to traditional work paradigms, and the resulting social, economic and political turmoil are no less significant than the urgent transition from offline to online work during the COVID-19 pandemic.

Both academic research and the business media report that people are experiencing increasing stress induced by AI. Employees at organizations that use AI fear potential job displacement or struggle to navigate the complexity of a technology that was previously beyond their reach. People at organizations that build AI are anxious about potential social backlash and regulatory uncertainty; they also face burnout from the pressure to ensure responsible AI practices.

The stress induced by AI is a distinct form of “technostress,” which refers to the negative psychological effects stemming from information technologies. Technostress encompasses the difficulties individuals encounter in adapting to and managing these technologies, which can have adverse effects on their well-being and performance. 

Technostress triggers 

Based on research to date, AI can create employee stress in the following ways: 

Job security concerns: Fears about potential job loss, the need to demonstrate value compared to automated systems and the challenge of adapting to new roles that require AI-related skills. 

Outsized accountability: Feelings of unease and mistrust arising from the opacity of AI’s internal workings and from being held responsible for inaccurate, unfair or unethical AI-driven outcomes.

Decreased self-efficacy: Being overwhelmed by constant demands to update skills and knowledge to effectively engage with AI technologies, and fear of being left behind in the face of rapidly advancing AI. 

Extra workload: Cognitive and emotional burdens associated with the increased time and effort required to learn AI systems and adjust to new work processes. 

Lack of support: Anxiety about adapting to evolving and still-ambiguous work standards without adequate organizational guidance.

Privacy invasion: Apprehensions about privacy infringement and biased decision-making. 

The stress associated with further adoption of AI systems warrants significant attention from organizational leadership. Research provides concrete evidence that heightened technology-related stress leads to diminished well-being and ultimately affects work performance. 


Anti-AI stress strategies 

Organizations implementing AI tools to enhance business performance must be mindful of how AI can contribute to employee stress. They should attentively monitor and proactively alleviate stressors by considering the following strategies: 

Communicating with employees on the organization’s AI vision 

When employees are uncertain about the organization’s stance on AI and its intentions regarding adoption and utilization, they often gravitate towards worst-case scenarios. Consistent communication with employees about the company’s AI vision can help alleviate employees’ tendency to overthink and overreact, while fostering support for well-planned and effective implementation of AI tools. 

Prioritizing the design and use of explainable AI 

Explainable AI encompasses a range of processes and methods that enable users to understand and trust the outcomes produced by AI algorithms. The Explainable Machine Learning Challenge demonstrated that black-box models are not always necessary; in some cases, fully interpretable algorithms perform just as well. Where such interpretable alternatives are available, opting for them instead of black-box AI can give employees a greater sense of control over their work outputs.

Fostering an innovative culture willing to tolerate failure 

We are in a transitional period characterized by trial and error. Even leading AI systems, such as ChatGPT and Bard, are not immune to errors, and best practices for designing and implementing AI quickly become outdated, whether through technological advancement or the discovery of new limitations. Organizations should not anticipate a seamless adoption of AI tools and should refrain from placing sole responsibility on employees when issues arise.

Redesigning accountability for the uniqueness of AI systems

AI systems have the potential to make errors or yield unforeseen outcomes. Redesigning the accountability system can help delineate the roles and responsibilities of individuals engaged in the development, deployment and monitoring of AI systems. This clarity fosters a fair distribution of accountability and recognition, alleviating the stress felt by employees who are concerned about unjust repercussions for AI-related failures. It also reduces ambiguity and the stress associated with uncertainty. 

Providing comprehensive training and support to employees 

AI technologies are evolving rapidly, and providing regular training enables employees to stay ahead of the latest advancements. By investing in the ongoing development of their employees, organizations cultivate an environment where individuals can readily adapt to changes, embrace new opportunities and alleviate any anxiety related to becoming obsolete. 

Increasing transparency of AI use and data collection 

Transparency in the use of AI and data collection cultivates trust among employees. When individuals understand how AI systems are employed and how their data is collected and utilized, it instills confidence in the organization’s intentions and practices. Such transparency mitigates uncertainty about the potential misapplication or mishandling of information. 

As organizations continue to embrace the transformative power of AI, they must make it a priority to resolve the challenges associated with AI-induced stress. By acknowledging the unique stressors that arise from AI adoption, organizations can follow a human-centric approach that places the well-being of their employees at the forefront. The result will be a healthier, more supportive and more productive work environment.

Gongtai Wang is an assistant professor in digital technology at Smith School of Business.