Getting Started with AI for Healthcare Leadership
Even casual news readers have noticed the recent surge in content referencing artificial intelligence (AI) taking over healthcare. With headlines that breathlessly report breakthrough AI being deployed to battle everything from sepsis to stroke, and outperforming doctors in detecting breast cancer, it’s easy to believe our healthcare system has already reached a very futuristic place—and that all the problems historically plaguing the sector have finally been solved. At last, healthcare is affordable, infallible, and accessible to all, thanks to AI!
Well, not quite. The picture painted by news headlines differs from where we actually stand. That's not to say incredible innovations and improvements aren't coming. Indeed, the potential of AI and the machine learning (ML) algorithms that power it is incredible. Arguably, implementing these technologies will quickly become all but necessary for healthcare organizations to survive.
The chasms that lie between the headlines and reality are many, but notably, the timelines are a lot longer than the news suggests. AI is not an easy technology to implement, even on a limited scale for a controlled pilot. The processes to evaluate, implement, validate, and scale AI tools within a healthcare application are especially complex and resource-intensive.
Part of this is due to the unique risks inherent to healthcare delivery (i.e., "Do no harm"). Another part stems from the large upfront investment such technology requires, along with the security and privacy measures and tools that must be implemented in parallel.
Finally, and perhaps most interestingly, much of the complexity derives from the nature of the ML algorithms themselves and their less-than-transparent workings. This opacity is responsible for much of the hesitancy among medical providers and healthcare leadership alike, and it has slowed more widespread adoption.
Shifting Attitudes Towards AI
Healthcare leaders have probably heard physicians using the term “black box” when referring to technologies whose inner workings are not fully transparent (which health technology companies have historically justified by claiming they are proprietary algorithms that cannot be shared). For years, physicians overall have remained wary of welcoming any black box technologies over their own judgment—even when the upside to their adoption is of potentially great value.
If these older technologies are “black box,” then AI and machine learning algorithms are a “Vantablack box.” Not surprisingly, the pushback from providers, in particular, has proven quite challenging to overcome.
The box is not growing any more transparent, and the technology inside it continues to become more complex. As machine learning algorithms incorporate an ever-increasing number of variables from larger and larger datasets, leveraging a growing number of hidden layers, it is becoming increasingly challenging for even the most experienced statisticians to fully understand how they work, let alone explain them to providers and healthcare leadership clearly enough to support decision-making.
Ultimately, providers, leadership, and statisticians will need to come together to make decisions about which technologies to implement within their own organizations. But first, there is a critical learning gap that must be closed. Right now, the logical first step for healthcare leadership is to understand the terminology: how it is used, and how it is (frequently) misused.
Artificial intelligence and machine learning, for example, are often used interchangeably. This is inaccurate. While they are very much related, AI and ML are different. This article will focus on AI, while future writings will explore ML and related concepts.
Defining Artificial Intelligence
Artificial intelligence (AI) refers to the pursuit of developing computational systems that perform tasks the way a human brain would.
Basically, AI is like a software program that behaves the same way that you (a human being) would behave—but within a limited environment and with limited tasks. It’s akin to a job description, but for an algorithm instead of an employee.
AI can be highly complex when it is intended to perform many different tasks within its environment. It can also be quite simple, when the tasks are few and linear in their logic.
While we imagine most AI as the former, the reality is that most applications right now are closer to the latter. Both sides of the spectrum are still technically artificial intelligence. So long as the computational system is being used to fulfill a set of tasks that historically would have been performed by a human applying their human brain logic, then that constitutes AI.
Chatbots: A Prime Example of AI in Action
Consider the role of a customer service employee who responds to customer questions via an online chat portal found on the company’s website. The actions of the customer service representative are determined by the company policies, on which the rep was trained before interacting with real customers. There are limited responses the rep is permitted to use as their reply to the customer’s questions and requests.
Those protocols the representative must follow are the same kind of logic that AI technology would employ. The customer generates the computational “inputs” by sending a message. AI applies the computational logic within its programming and generates the response “output.”
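The input-output logic described above can be sketched as a small rule-based responder. This is an illustrative sketch only; the intents, trigger keywords, and scripted replies are all hypothetical, and a production chatbot would be far more sophisticated.

```python
import re

# Hypothetical scripted "policies": trigger keywords mapped to approved replies.
RULES = [
    ({"hours", "open"}, "Our clinic is open Monday-Friday, 8am-5pm."),
    ({"refill", "prescription"}, "To request a refill, please provide your prescription number."),
]
FALLBACK = "Let me connect you with a staff member who can help."

def respond(message: str) -> str:
    """Apply the scripted policies to an incoming message (the 'input')
    and return the permitted reply (the 'output')."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, reply in RULES:
        if keywords & words:   # any trigger keyword present in the message
            return reply
    return FALLBACK            # no rule matched: hand off to a human
```

Like the trained representative, the program can only respond within the limits of its policies; anything outside them falls through to the human fallback.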
Even Simple AI Becomes Complex Within Healthcare
There are opportunities for leveraging AI in healthcare. One of the simplest would be to triage the first line of incoming patient communication, just like the customer service representative example above. Logically, expediting phone triage would allow staff to redirect their efforts to higher value-added activities and provide more time for handling complex communications with patients and other staff.
Importantly, there are risks inherent to care provision communications, as leadership is well aware. Systems must coordinate AI with live staff activity for a cohesive end-user experience, and all engagement must take place with appropriate oversight protocols for safety.
Ironing out just those two execution action items is a lot of work. It is also easy to get the balance wrong on what to automate. Too conservative, and you end up with a system so limited that it escalates nearly everything to live staff, ultimately increasing their workload. Too lax, and a system that fails to escalate something truly urgent can kill patients. Neither outcome is acceptable.
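The trade-off above can be pictured as a single threshold choice. In this hypothetical sketch, an urgency score decides which messages reach live staff; the keywords, scores, and threshold values are purely illustrative, not a clinical triage protocol.

```python
# Hypothetical urgency weights for terms found in a patient message.
URGENT_TERMS = {"chest pain": 10, "bleeding": 8, "dizzy": 4, "refill": 1}

def urgency_score(message: str) -> int:
    """Sum the weights of any urgent terms appearing in the message."""
    text = message.lower()
    return sum(score for term, score in URGENT_TERMS.items() if term in text)

def route(message: str, threshold: int) -> str:
    # Low threshold = conservative: nearly everything escalates to staff.
    # High threshold = lax: risks missing something truly urgent.
    if urgency_score(message) >= threshold:
        return "escalate_to_staff"
    return "automated_reply"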
It is a delicate balance and a very simple example of how even basic AI adoption within healthcare requires a comprehensive assessment of risk and benefit, as well as a deep dive into protocols that historically have relied a great deal on common sense when training staff.
Healthcare Leaders Must Lead, Educate, and Align
Most healthcare leaders can cite the latest study of AI supposedly outperforming trained radiologists at detecting breast cancer; however, the investments in and applications of AI within their own organizations may be murkier. Some organizations have established a centralized innovation or technology hub, where discussions of these technologies take place regularly and a strategy to align with traditional IT departments and coordinate roles is being developed. These are important early steps.
They don’t even begin to answer the bigger questions, though:
- Which AI innovations being tested now within research will add real value and improve the bottom line in actual clinical practice?
- What does appropriate risk management look like, and for which patient populations?
- How will risk management for providers be impacted by introducing AI into workflows?
- How can risks and gains alike be measured accurately?
- What is a realistic timeline for the process when preliminary outcomes differ from expectations?
Healthcare leadership will hold the overarching responsibility for making these decisions and will be held accountable for their outcomes. They will have to navigate trade-offs in the face of complex scenarios and many more unknowns than knowns. The challenges of responsible AI adoption will be greater than any the healthcare sector has ever encountered.
Organizations that fail to invest in AI technologies to improve the bottom line will fall behind and may reach insolvency within the coming years. For leadership, critical conversations about these technologies begin with a thorough understanding of the terminology, including how it is frequently misused.
Healthcare leaders must learn this now, in tandem with statisticians, physicians, and other experts. Together, they must figure out how to disseminate pertinent information throughout the organization to the right people at the right time points. An overly siloed approach to building up the necessary knowledge will not be effective. Dissemination must be done in terms that are meaningful, accurate, and relevant to staff, tailored to the wide variety of roles throughout the organization.
Healthcare leadership has a critical responsibility to ensure this transformational change is handled with care; they will determine how such decisions ultimately are to be made. Understanding the capabilities, the limitations, the risks, and the growing body of demonstrated use cases for these technologies is just the start.
While there are many unknowns, a few things are certain. Perhaps the most important is that artificial intelligence within healthcare is not just a buzzword and certainly not a fad. It is real, and it is coming.
Ignoring or refusing to adopt these innovative technologies will all but guarantee that the organization's survival is increasingly at risk. The time for healthcare leaders to start learning about artificial intelligence is now.