Deploying AI in healthcare: separating the hype from the useful

Of all the industries captivated by AI’s promise, healthcare organizations are perhaps the most overburdened. Hospital managers hope that one day AI will handle healthcare’s administrative tasks, such as scheduling appointments, assigning disease severity codes, managing patients’ laboratory tests and referrals, and remotely monitoring and responding to the needs of entire patient groups as they go about their daily lives.

By improving efficiency, safety and access, AI can be of great benefit to the healthcare industry, says Nigam Shah, Professor of Medicine (Biomedical Informatics) and Biomedical Data Science at Stanford University and an affiliated faculty member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

But caveat emptor, Shah says. Buyers of healthcare AI should consider not only whether an AI model will reliably deliver the correct output (which has been the primary focus of AI researchers) but also whether it is the right model for the task at hand. “We need to think beyond the model,” he says.

This means managers need to consider the complex interaction between an AI system, the actions it will take, and the net benefit of using AI compared to not using it. And before managers can bring any AI system on board, Shah says, they need to have a clear data strategy, a way to test the AI system before buying it, and a clear set of criteria to evaluate whether the AI system will achieve the goals the organization has set for it.

“When deployed, AI should be better, faster, safer and cheaper. Otherwise it is useless,” says Shah.

This spring, Shah will lead a Stanford HAI management training course for senior healthcare executives, “Safe, Ethical and Cost-Effective Use of AI in Healthcare: Critical Topics for Senior Leadership,” that delves into these issues.

The business case for AI in healthcare

A recent McKinsey report outlined the different ways in which innovative technologies such as AI are slowly being integrated into healthcare business models. Some AI systems will improve organizational efficiency by performing rote tasks, such as assigning severity codes for billing. “You can have a person read the chart and take 20 minutes to assign three codes, or you can have a computer read the chart and assign three codes in a millisecond,” he says.

Other AI systems can increase patient access to care. For example, AI systems can help ensure that patients are referred to the right specialist, and that they receive key tests prior to an initial visit. “Too often, patients’ first visits to specialists are wasted because they are told to go for five tests and return in two weeks,” says Shah. “An AI system can sort that out.” And by eliminating these wasted visits, doctors can see more patients.

AI can also be beneficial for managing the health of patient populations, Shah says. For example, an AI system could monitor patients’ medication orders, or even watch over patients in their homes for signs of impending deterioration. So-called hospital-at-home programs may require more nursing staff than is available, Shah says, “but if we can put five sensors in the house to give early warning of a problem, such programs become feasible.”

When to deploy AI in healthcare

Despite this widespread potential, there are currently no standard methods for determining whether an AI system will save money for a hospital or improve patient care. “All the guidance people or professional associations have given is about ways to build AI,” says Shah. “There is very little on if, how or when to use AI.”

Shah’s advice to managers: Define a clear data strategy, have a plan to try before you buy, and set clear benchmarks to evaluate whether deployment is beneficial.

Define a data strategy

Because AI is only as good as the data it learns from, managers need to have a strategy and staff in place to collect diverse data, properly label and clean that data, and maintain the data on an ongoing basis, says Shah. “Without a data strategy, there is no hope for successful AI deployment.”

For example, if a vendor sells medical imaging software, the purchasing organization must have a substantial set of retrospective data on hand that it can use to test the software. In addition, the organization must have the ability to store, process and annotate its data so that it can retest the product in the future and confirm that it is still working properly.
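To make that kind of local testing concrete, here is a minimal sketch. It assumes a hypothetical vendor model that exposes a scikit-learn-style predict_proba interface and a labeled retrospective dataset held by the purchasing organization; the file name, column names and chosen metrics are illustrative assumptions, not part of any real vendor’s product.

```python
# Minimal sketch of a local "try before you buy" check, assuming the vendor
# model exposes a scikit-learn-style predict_proba() interface and the buyer
# holds a labeled retrospective dataset. All names here are illustrative.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

def validate_locally(vendor_model, csv_path, feature_cols, label_col="outcome"):
    """Score a vendor model against the organization's own retrospective data."""
    data = pd.read_csv(csv_path)
    scores = vendor_model.predict_proba(data[feature_cols])[:, 1]
    labels = data[label_col]
    return {
        "auroc": roc_auc_score(labels, scores),     # discrimination on local patients
        "brier": brier_score_loss(labels, scores),  # rough calibration check
        "n_patients": len(data),
    }

# Example use on hypothetical local data:
# metrics = validate_locally(model, "retrospective_2021.csv", ["age", "lab_result"])
```

The point of a check like this is that it runs on the buyer’s own patients, which is exactly what separates a locally validated model from one that only looked good at the site where it was built.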

Try before you buy

Healthcare organizations need to test AI models on their own sites before buying and making them operational, Shah says. Such tests will help hospitals separate snake oil – AI that does not meet its requirements – from effective AI, as well as help them determine if the model is appropriately generalizable from its original site to a new one. For example, says Shah, if a model is developed in Palo Alto, California, but is deployed in Mumbai, India, there should be some testing to determine if the model works in this new context.

In addition to checking whether the model is accurate and generalizable, managers will need to assess whether the model is really useful once deployed, whether it can be implemented smoothly in existing workflows, and whether there are clear procedures to monitor how well the AI works post-deployment. “It’s like a free pony,” says Shah. “There may be no cost to buy it, but it can cost a huge amount to build it a barn and care for it for life.”
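For the post-deployment monitoring Shah describes, one simple approach is to periodically rerun the same local validation and compare the result against the performance measured before purchase. The sketch below is one assumed way to do that, reusing the hypothetical validate_locally helper from the earlier example; the baseline and tolerance values are illustrative.

```python
# Minimal sketch of a recurring post-deployment check, reusing the hypothetical
# validate_locally() helper above. The baseline AUROC and tolerance are
# illustrative values an organization would set for itself.
def check_for_degradation(vendor_model, recent_csv, feature_cols,
                          baseline_auroc, tolerance=0.05):
    """Flag the model for review if local performance drifts below baseline."""
    current = validate_locally(vendor_model, recent_csv, feature_cols)
    degraded = current["auroc"] < baseline_auroc - tolerance
    return degraded, current

# Example: run quarterly on fresh local data and escalate to a governance
# group if degraded is True.
# degraded, metrics = check_for_degradation(model, "last_quarter.csv",
#                                           ["age", "lab_result"], baseline_auroc=0.82)
```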

Establish clear criteria for deployable AI

Buyers of AI systems should also evaluate the net benefit of an AI system to help them decide when to use it and when to turn it off, Shah says.

This means that issues such as the context in which an AI is deployed, the possibility of unintended consequences, and the healthcare organization’s ability to respond to an AI’s recommendations should be considered. For example, if the organization is testing an AI model that predicts readmissions of discharged patients and it flags 50 people for follow-up, the organization should have staff available to do that follow-up. If this is not the case, the AI system is not useful.
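One way to make that capacity constraint explicit is to tie the alerting threshold to the number of follow-ups staff can actually perform each day. The sketch below is a hypothetical illustration with made-up risk scores and capacity, not part of any specific readmission model.

```python
# Minimal sketch of matching an alert threshold to follow-up capacity.
# The risk scores and capacity figure are made up for illustration.
def capacity_aware_threshold(risk_scores, daily_capacity):
    """Flag only as many patients as staff can actually follow up on."""
    if daily_capacity <= 0:
        return float("inf")             # no capacity: flag no one
    ranked = sorted(risk_scores, reverse=True)
    if daily_capacity >= len(ranked):
        return 0.0                      # enough staff to follow up on everyone
    return ranked[daily_capacity - 1]   # risk of the last patient staff can cover

risks = [0.91, 0.84, 0.77, 0.63, 0.52, 0.41, 0.30]  # model output for 7 discharges
threshold = capacity_aware_threshold(risks, daily_capacity=3)
flagged = [r for r in risks if r >= threshold]
print(f"threshold={threshold:.2f}, patients flagged: {len(flagged)} of {len(risks)}")
```

If the model routinely ranks far more patients above any defensible threshold than staff can reach, that is a signal, in Shah’s terms, that it may not be the right model for that organization.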

“Even if the model is built right, given your business processes and your cost structure, it may not be the right model for you,” says Shah.

Ripple effects of AI in healthcare

Finally, Shah warns, managers need to consider the broader consequences of AI deployment. Some uses may displace people from long-held jobs, while other uses may supplement human effort in ways that increase access to care. It is difficult to know which impact will occur first or which will be more significant. And eventually, hospitals will need a plan for retraining and redeploying displaced workers.

“While AI certainly has a lot of potential in the healthcare environment,” says Shah, “realizing that potential will require creating organizational units that manage the data strategy, the machine learning model lifecycle and the end-to-end delivery of AI in the care system.”

Katharine Miller is a Contributing Writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on Hai.stanford.edu. Copyright 2022
