AI is changing medicine: this is how we make sure it works for everyone

What if your doctor could immediately test dozens of different treatments to discover the perfect one for your body, your health, and your values? In my lab at Stanford University School of Medicine, we’re working on artificial intelligence (AI) technology to create a “digital twin”: a virtual representation of you based on your medical history, genetic profile, age, ethnicity, and a host of other factors, such as whether you smoke and how much you exercise.

When you’re sick, the AI can test treatment options on these virtual twins, running through countless different scenarios to predict which interventions will be most effective. Rather than choosing a treatment plan based on what works for the average person, your doctor can develop a plan based on what works for you. And the digital twin continuously learns from your experiences, always incorporating the most current information about your health.
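
To make this concrete, here is a minimal, purely illustrative sketch of the idea in Python. The patient profile fields, treatment names, and scoring function are invented for illustration; they are not the actual system in my lab, which would learn from real clinical data rather than hand-written rules.

```python
# Hypothetical sketch of the "digital twin" idea: score candidate treatments
# against a virtual patient profile. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class PatientTwin:
    age: int
    smoker: bool
    exercise_hours_per_week: float
    genetic_risk_score: float  # toy value between 0.0 (low) and 1.0 (high)

def predict_outcome(twin: PatientTwin, treatment: str) -> float:
    """Toy stand-in for a learned model: predicted probability of a good
    outcome for this twin under the given treatment."""
    base = 0.6 - 0.25 * twin.genetic_risk_score - (0.1 if twin.smoker else 0.0)
    adjustment = {
        "standard_dose": 0.10,
        "low_dose_plus_monitoring": 0.15 if twin.age > 65 else 0.05,
        "combination_therapy": 0.20 - 0.05 * twin.genetic_risk_score,
    }[treatment]
    return max(0.0, min(1.0, base + adjustment))

twin = PatientTwin(age=70, smoker=False, exercise_hours_per_week=2.0,
                   genetic_risk_score=0.4)

# Run each candidate treatment through the twin and rank by predicted benefit.
candidates = ["standard_dose", "low_dose_plus_monitoring", "combination_therapy"]
for treatment in sorted(candidates, key=lambda t: predict_outcome(twin, t), reverse=True):
    print(f"{treatment}: predicted chance of a good outcome = "
          f"{predict_outcome(twin, treatment):.2f}")
```

In a real system, the hand-written scoring function would be replaced by a model trained on historical patient data, and the candidate list would cover the actual therapies under consideration.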

AI is personalizing medicine, but for which people?

While this futuristic idea may sound impossible, artificial intelligence could make personalized medicine a reality sooner than we think. The potential impact on our health is huge, but so far the results for some patients are more promising than for others. Because AI is built by humans using data generated by humans, it is prone to reproducing the same biases and inequalities that already exist in our healthcare system.

In 2019, researchers analyzed an algorithm that hospitals use to determine which patients should be referred to special care programs for those with complex medical needs. In theory, this is exactly the type of AI that can help patients receive more targeted care. However, the researchers found that as the model was used, black patients were significantly less likely to be assigned to these programs than their white counterparts with similar health profiles. This biased algorithm impacted not only the health care that millions of Americans received, but also their trust in the system.

Getting the data, the building block of AI, right

Such a scenario is all too common among underrepresented minorities. The problem is not the technology itself. The problem starts much earlier, with the questions we ask and the data we use to train the AI. If we want AI to improve healthcare for everyone, we need to get those things right before building our models.

First, there’s the data, which is often skewed toward patients who use the health care system the most: white, educated, wealthy, cisgender U.S. citizens. These groups have better access to medical care, making them overrepresented in health datasets and clinical trials.

To see the impact this skewed data has, look at skin cancer. AI-powered apps can save lives by analyzing photos of moles and alerting users to anything they should have checked by a dermatologist. But these apps are trained on existing catalogs of skin cancer lesions that are dominated by images of fair-skinned patients, so they don’t work as well for darker-skinned patients. The predominance of fair-skinned patients in dermatology has simply been carried over into the digital world.

My colleagues and I encountered a similar problem while developing an AI model to predict whether cancer patients undergoing chemotherapy will end up in the emergency room. Physicians could use this tool to identify at-risk patients and provide them with targeted treatment and resources to avoid hospitalization, improving health outcomes and reducing costs. While our AI’s predictions were promising, the results were not as reliable for black patients. Because the data we fed into our model did not include enough black patients, the model was unable to accurately learn the patterns that matter for this population.
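
The failure mode is easy to reproduce on synthetic data. The sketch below (which is not our actual model or pipeline) trains a simple classifier on a dataset where one group is heavily underrepresented and has a somewhat different relationship between features and outcome, then evaluates each group separately; the aggregate numbers can look fine while the underrepresented group’s performance lags.

```python
# Illustrative only: stratified evaluation reveals a performance gap that an
# aggregate metric would hide when one group dominates the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_group(n: int, shift: float):
    """Synthetic patients: 5 features; the outcome depends on feature 3
    only for the group with a nonzero shift."""
    X = rng.normal(size=(n, 5))
    logits = X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3]) + shift * X[:, 3]
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(300, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Evaluate each group on fresh samples instead of pooling them.
for name, (X_test, y_test) in [("group A", make_group(2000, 0.0)),
                               ("group B", make_group(2000, 2.0))]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

Collecting more data from the underrepresented group, or reweighting it during training, narrows the gap; the essential step is measuring performance for each group rather than trusting a single aggregate number.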

Add diversity to training models and data teams

Clearly, we need to train AI systems with more robust data representing a wider range of patients. We also need to ask the right questions of the data and think carefully about how we formulate the problems we are trying to solve. During a panel I moderated at the Women in Data Science (WiDS) annual conference in March, Dr. Jinoos Yazdany of Zuckerberg San Francisco General Hospital gave an example of why framing matters: without the right context, an AI could come to illogical conclusions, such as deducing that a visit from the hospital chaplain contributed to a patient’s death, when in fact it was the other way around (the chaplain came because the patient was dying).

To understand complex healthcare issues and make sure we ask the right questions, we need interdisciplinary teams that combine data scientists with medical experts, as well as ethicists and social scientists. During the WiDS panel, my Stanford colleague, Dr. Sylvia Plevritis, explained why her lab is half cancer researchers and half data scientists. “Ultimately,” she said, “you want to answer a biomedical question or solve a biomedical problem.” We need multiple forms of expertise working together to build powerful tools that can identify skin cancer or predict whether a patient will end up in the hospital.

We also need diversity in research teams and in healthcare leadership to look at problems from different angles and bring innovative solutions to the table. Let’s say we build an AI model to predict which patients are most likely to miss appointments. The working moms on the team may turn the question upside down, asking instead what factors are likely to keep people from making an appointment, such as scheduling a session in the middle of after-school pick-up time.

Healthcare providers are needed in AI development

The final piece of the puzzle is how AI systems are put into practice. Healthcare leaders need to be critical consumers of these flashy new technologies and ask themselves how AI will work for all patients entrusted to their care. AI tools need to fit into existing workflows so providers will actually use them (and continue to add data to the models to make them more accurate). Involving healthcare providers and patients in the development of AI tools leads to end products that are much more likely to be successfully deployed and have an impact on care and patient outcomes.

Making AI-powered tools work for everyone shouldn’t just be a priority for marginalized groups. Bad data and inaccurate models hurt us all. During our WiDS panel, Dr. Yazdany discussed an AI program she developed to predict outcomes for patients with rheumatoid arthritis. The model was originally created using data from an affluent research and teaching hospital. When her team added data from a local hospital serving a more diverse patient population, it not only improved the AI’s predictions for marginalized patients, but also made the results more accurate for everyone, including patients at the original hospital.

AI will revolutionize medicine by predicting health problems before they arise and identifying the best treatments adapted to our individual needs. It is essential that we now lay the right foundations to ensure that AI-driven healthcare works for everyone.

Dr. Tina Hernandez-Boussard is an associate professor at Stanford University who works in biomedical informatics and the use of AI technology in healthcare. Many of the perspectives in this article came from her panel at the Women in Data Science (WiDS) annual conference.
