
Building Medical AI That Heals, Not Divides


A radiologic technologist’s near-miss reveals the promise and pitfalls of artificial intelligence in healthcare settings — and why the human element remains irreplaceable

View through an MRI machine of a nurse talking to a patient. Human-centred care concept.
iStock/FG Trade Latin

While performing a routine chest X-ray in my role as a radiologic technologist, I initially overlooked a dark density on the image and discharged the patient. On further review, I noticed the density but was unsure whether it was internal to the patient or an artifact (something external to the patient that can be removed). Still uncertain, I called the patient back and asked if she had anything in her pocket. When she confirmed that she did, the chest X-ray was repeated. The dark density was no longer present on the new image.

This made me reflect on a troubling possibility: If I had not repeated the exam, the radiologist would have misdiagnosed the dark density as a lung nodule, possibly indicating early-stage lung cancer. Adverse patient safety events like this are not uncommon. As a student of artificial intelligence, I found myself asking: Could AI be trained to detect these types of mistakes and prevent such safety events? How do we, as humans, interact with artificial intelligence in ways that promote the common good in healthcare, where decisions can have life-altering consequences?

AI has certainly made great strides in healthcare. It is being used in predictive analytics, for example, to track the spread of viruses. AI is being trained to read thousands of radiology images and detect abnormalities with high precision. The benefits are enormous. CT scans, MRIs and X-rays that take a significant amount of time to interpret can now be read in seconds. This leads to quicker diagnosis and treatment and improves patient outcomes.

But the implementation of AI in healthcare presents a dilemma. What happens when a machine outperforms a human? Who bears responsibility if the AI agent makes a mistake? Can we trust AI with decisions that demand not only intelligence but compassion? Studies have shown that while AI can support decision making, the optimal collaboration between AI and medical professionals remains an area of ongoing research.

The human judgment that AI lacks

As a frontline healthcare worker, I have worked at the intersection of patient-centred care and medical imaging science. I have seen how AI tools can assist in flagging critical findings, suggesting protocols and improving department workflow. AI cannot, however, replace the judgment, experience and context-driven decisions made by clinicians.

Machine learning models can analyze large datasets, but they cannot feel. An algorithm can be coded to detect a tumour, but it cannot explain a life-altering diagnosis with sensitivity to a patient and their loved ones. AI can suggest a treatment, but it cannot hold a patient’s hand. The emotional sensitivity, cultural awareness and moral judgment required in healthcare are inherently human.

Returning to my earlier example, could AI have recognized that the dark density on the patient’s chest X-ray was an external artifact? Or would it have misdiagnosed it as a lung nodule? My own clinical intuition, built over years of experience, led me to believe that the dark density was not a lung nodule and to call the patient back to have the imaging repeated. No algorithm could have made the same call. AI is a good tool, but it cannot replace human experience.

The role of AI is not to replace healthcare providers, but to empower them. If implemented properly, AI can act as a powerful partner that assists in reducing diagnostic errors. This will allow healthcare providers to focus on the most important thing — patient-centred care.

In imaging departments, for example, AI can pre-analyze scans and highlight potential areas of concern, which radiologists can then review with their expertise. This human-in-the-loop model ensures the final judgment incorporates both computational precision and clinical insight.

AI also helps reduce burnout by streamlining administrative burdens and repetitive tasks. Rather than viewing AI as a threat to autonomy, healthcare professionals should see it as an extension of their capabilities, a second set of eyes, not a replacement for their own.

Nowhere is this more evident than in the administration of contrast media (X-ray dye) for CT scans. As X-rays pass through the different anatomical structures of a patient’s body, they are attenuated to different degrees, producing different shades of grey on the display screen. Distinguishing these grey shades is central to accurate diagnosis. Sometimes, however, adjacent anatomical structures appear as the same shade of grey. Contrast media helps clarify these differences, making pathologies such as cancer more visible. But not all patients can receive contrast because of the risk of severe allergic reactions. AI can help distinguish subtle differences in grey shades on CT scans and potentially eliminate the need for contrast, ensuring that patients who cannot tolerate contrast media still receive accurate diagnoses.

Leadership for healthcare equity

Despite these benefits, AI in healthcare is far from perfect. AI is dependent on data: if that data is skewed or unrepresentative, AI systems can perpetuate existing health disparities. Diagnostic algorithms may perform less accurately for certain ethnic groups because of biased training datasets.

Ensuring equity in AI requires diverse and inclusive data collection, transparent algorithms and rigorous human oversight. This is where good leadership is crucial, both in healthcare institutions and policymaking bodies.

Healthcare administrators, clinicians and AI ethicists must collaborate to set standards that prioritize fairness, inclusivity and accountability. Building AI for the common good means embedding ethics and equity into every stage of design, deployment and evaluation.

We are at a pivotal moment in history. AI has the capacity to redefine healthcare in ways never before possible, making it more efficient, accurate and responsive. But the true measure of progress lies not in technological capability but in human-centred outcomes.

When we design AI systems that support, not replace, human professionals, and when we ensure they serve the needs of all patients, not just the privileged few, we advance toward a future that is both intelligent and compassionate.

The question is not whether AI will shape the future of healthcare. It already is. The real question is: Will we guide it to heal, or will we let it divide? 

Najib Tasleem is a Master of Management Analytics student at Smith School of Business. A version of this essay originally appeared in Global Voices magazine, a publication of the Council on Business and Society. Smith School of Business is one of 12 business schools in this global partnership that seeks evidence-based solutions to large-scale issues.