Healthcare Debates: The Ethics of Artificial Intelligence in Healthcare

The use of artificial intelligence in healthcare dates back to 1972, when researchers at Stanford University developed MYCIN, a prototype expert system for identifying and recommending treatment for blood infections.

Today, AI helps make early diagnoses of cancer and Alzheimer's. It interprets medical imaging better than some highly trained doctors. In a KPMG survey on healthcare AI, 89 percent of respondents reported that AI was already creating efficiencies in their systems, and 91 percent believed it was increasing patient access to care. The future, by many accounts, is blindingly bright: annual healthcare spending on AI is forecast to grow from roughly $2 billion to over $36 billion by 2025.

But basic forms of AI are a gateway drug to advanced forms of AI. Already, doctors have voiced concerns about the way tech tools can become tech crutches and, at times, impair human decision-making. The stakes are much higher in healthcare than they are in the world of business: here, a poorly designed AI solution won't just cost profits, but lives.

Artificial intelligence is most definitely a part of the future of healthcare—it’s already a rapidly-growing part of its present—but extreme caution must be exercised in its development and deployment.

Three Useful Applications of AI in Healthcare

Improved Diagnostics

The most promising use cases of AI in healthcare revolve around improved diagnostics. According to a study published in Nature, AI-powered deep neural networks can classify skin cancer with accuracy comparable to that of board-certified dermatologists.

Separately, Google Health reported that an AI model developed with DeepMind was more effective at screening for breast cancer than human radiologists. The study showed it reduced false positives, reduced false negatives, and cut the second reader's workload by 88 percent. By diagnosing critical illnesses earlier and more accurately, AI can help the healthcare system achieve better patient outcomes and a more efficient distribution of resources.
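Screening improvements like these are typically reported as changes in false-positive and false-negative rates, which come straight from a confusion matrix. A minimal sketch (the counts below are invented for illustration, not figures from the study):

```python
# Toy screening results: 1 = disease present, 0 = absent.
# These counts are illustrative only, not from the DeepMind study.
tp, fp = 45, 30   # true positives, false positives
tn, fn = 915, 10  # true negatives, false negatives

false_positive_rate = fp / (fp + tn)  # healthy patients incorrectly flagged
false_negative_rate = fn / (fn + tp)  # real cases the screen missed

print(f"FPR: {false_positive_rate:.1%}")  # 3.2%
print(f"FNR: {false_negative_rate:.1%}")  # 18.2%
```

A reduction in false positives spares healthy patients unnecessary biopsies; a reduction in false negatives means fewer missed cancers, which is why studies report both rates rather than a single accuracy figure.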

Actionable Data

The US healthcare system generates about a trillion gigabytes of data every year. The introduction of new medical devices, personal fitness trackers, and health IoT promises to inflate that number even further. While hurdles to interoperability remain (some data is still disaggregated and siloed between different software platforms and departments), renewed efforts are underway to make healthcare data flow freely between providers, payers, and patients.

Epic, one of the world's largest EHR vendors, exchanges over 100 million patient records per month. Sharing this data and leveraging it through machine learning holds immense promise for further advancements in patient care: machine learning can not only put the data to use at scale, but it can also read unstructured data, such as free-text clinical notes, faster than humans can.

Increased Efficiency

By 2050, one in four people in Europe and North America will be over the age of 65, placing an immense burden on healthcare systems. The healthcare workforce simply can't keep pace. Empowering that workforce with AI-enabled solutions, however, can be a force multiplier.

IBM Watson Health’s AI-enabled CareDiscovery benchmarks healthcare facilities and analyzes key metrics like length of stay, mortality, and readmissions. At DeKalb Medical, a 627-bed non-profit health system in Atlanta, it helped reduce health complications by 58 percent, lower costs by $12 million, and save 55 lives over a three-year period. That’s just the beginning.

Further automation of repetitive, low-skill tasks—and even some specialized tasks in medical imaging analysis—will allow physicians to apply their skills where they’re needed most.

Three Challenges for AI in Healthcare

Biased Algorithms

Human beings are biased creatures, and that bias extends to the frameworks and systems they create. The problem is systemic: a biased history creates a biased future. And much of the data used to train AI and machine learning algorithms is drawn from data sets that are predominantly white and male.

A 2016 analysis of over 2,500 genome-mapping studies found that more than 80 percent of all participants were of European descent. An AI research project at MIT found that facial recognition systems performed poorly on dark-skinned women. These aren't outlier events but rather indicative of a wider problem. When those data sets are used in healthcare, where the stakes are high, it can lead to increased health risks for already-disadvantaged segments of the population.
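One practical safeguard against this kind of bias is auditing a model's accuracy separately for each demographic group rather than in aggregate. A minimal sketch (the predictions and group labels below are invented for illustration):

```python
import numpy as np

# Hypothetical outputs from a diagnostic model, with each patient's
# demographic group attached. All values are made up for illustration.
groups = np.array(["A", "A", "A", "B", "B", "B"])
y_true = np.array([1, 0, 1, 1, 1, 0])  # actual condition
y_pred = np.array([1, 0, 1, 0, 0, 0])  # model's prediction

# A single aggregate accuracy figure hides the disparity...
print(f"overall: {np.mean(y_true == y_pred):.0%}")  # 67%

# ...while a per-group audit surfaces it.
for g in np.unique(groups):
    mask = groups == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    print(f"group {g}: {acc:.0%}")  # group A: 100%, group B: 33%
```

In this toy example the model looks acceptable overall but misses every positive case in group B, exactly the pattern an aggregate metric would conceal.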

High Costs

A survey by KPMG found that the adoption of AI is impeded by high costs, privacy risks, and a lack of workforce training. Furthermore, it found that 54 percent of survey respondents believe that AI use to date has actually increased (rather than decreased) the overall cost of healthcare.

This is problematic for a number of reasons:

  • Firstly, healthcare organizations may find themselves pouring more and more cash into AI, chasing an ROI that never materializes.
  • Secondly, the pressure to achieve that ROI may lead to deploying AI solutions before they're ready, or ones that aren't ethically designed.
  • Finally, the cash being poured into AI, which in some cases is raising the overall cost of healthcare, could have gone to proven ways of improving patient outcomes and access to care.

Security Concerns

When you mix healthcare with data, one of the primary concerns is security. A KPMG survey found that 75 percent of respondents were concerned AI could threaten the security and privacy of patient data.

It’s not an unfounded fear. A study in Science found that small, deliberately crafted tweaks to a model's inputs, known as adversarial attacks, could cause healthcare AI systems to hurt, rather than help, the public. In some cases, the AI could even arrive, confidently, at precisely the opposite conclusion from reality: a few altered pixels can make all the difference between a positive diagnosis and a negative one. The study's authors highlight that the hypothetical adversary here isn't necessarily hackers, but doctors, hospitals, or other organizations that may seek to manipulate AI software to gain an advantage in billing systems or reimbursement protocols.
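To make the mechanism concrete, here is a minimal sketch of an adversarial perturbation against a toy linear "diagnosis" classifier. Everything here (the model, the scan, the step size) is invented for illustration; real attacks such as FGSM apply the same idea to deep networks:

```python
import numpy as np

# Toy illustration of an adversarial perturbation. The classifier and
# "scan" are invented for demonstration only.
rng = np.random.default_rng(0)

w = rng.normal(size=256)  # weights of a toy linear classifier

# A scan the model correctly calls negative (pushed against the weights).
x = -0.01 * np.sign(w) + rng.normal(scale=0.001, size=256)

def predict(img):
    # score > 0 -> "positive" diagnosis, else "negative"
    return "positive" if w @ img > 0 else "negative"

eps = 0.02                    # tiny per-pixel change
x_adv = x + eps * np.sign(w)  # step each pixel with the weight's sign

print(predict(x))                 # negative
print(predict(x_adv))             # positive -- diagnosis flipped
print(np.max(np.abs(x_adv - x)))  # 0.02 -- no pixel moved more than this
```

Each pixel moves by at most 0.02, far too little for a human reader to notice, yet because every nudge is aligned with the model's weights, the tiny changes accumulate and flip the diagnosis.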

The Bottom Line: Guidelines for AI in Healthcare

When artificial intelligence is debated in a general sense, its ethical considerations can veer into overblown science fiction. When it comes to artificial intelligence in healthcare, that’s not the case. AI is active in healthcare already, and it needs processes, frameworks, and organizational cultures that put patients first. That requires the collaboration of IT developers, healthcare leaders, AI researchers, and governmental entities. For healthcare administrators in particular, it is increasingly important to be bilingual, speaking the languages of AI and healthcare fluently.

Healthcare AI will also need to prioritize privacy and security over output and efficiency. Notably, DeepMind's breast cancer screening model outperformed doctors using only mammograms, not patients' health histories (which it had access to). Black-box AI solutions may be cheap and efficient, but a commitment to transparent, auditable AI systems can prevent bad actors from manipulating health data.

The EU’s Ethics Guidelines for Trustworthy AI establishes seven key requirements for AI deployments:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Environmental and societal well-being
  • Accountability

Reports from McKinsey and KPMG prescribe similar requirements for using AI in healthcare. Of all the metrics healthcare AI tracks, patient access and equitable health outcomes must be the main barometers of success, and systems must preserve patient dignity so that patients retain autonomy in making healthcare decisions for themselves.

The evolution of AI in healthcare isn't a question of how fast solutions can be implemented; at this point, AI is moving under its own steam. The puzzle for healthcare leaders is how to slow the process of development and deployment until AI solutions can be made equitable, efficient, and beneficial to those they serve.

Matt Zbrog

Matt Zbrog is a writer and researcher from Southern California. Since 2018, he’s written extensively about emerging issues in healthcare administration and public health, with a particular focus on progressive policies that empower communities and reduce health disparities. His work centers around detailed interviews with researchers, professors, and practitioners, as well as with subject matter experts from professional associations such as the American Health Care Association / National Center for Assisted Living (AHCA/NCAL) and the American College of Health Care Executives (ACHCA).
