How Doctors Are Using Artificial Intelligence to Battle Covid-19

When the Covid-19 pandemic emerged last year, physician Lara Jehi and her colleagues at the Cleveland Clinic were running blind. Who was at risk? Which patients were likely to get sicker? What kinds of care would they need?
“The questions were endless,” says Jehi, the clinic’s chief research information officer. “We didn’t have the luxury of time to wait and see what’s going to evolve over time.”
With answers urgently needed, the Cleveland Clinic turned to algorithms for help. The hospital assembled 17 of its specialists to define the data they needed to collect from electronic health records and used artificial intelligence to build predictive models. Within two weeks, the clinic created an algorithm, based on data from 12,000 patients, that used age, race, gender, socioeconomic status, vaccination history and current medications to predict whether someone would test positive for the novel coronavirus. Doctors used it early in the pandemic, when tests were at a premium, to advise patients whether they needed one.
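The clinic has not published the code behind this algorithm, but a risk model built from demographic and medication features like these is typically a straightforward classifier. Here is a minimal sketch of the general approach, with hypothetical file and column names and logistic regression standing in for whatever method the clinic actually used:

```python
# Illustrative only: not the Cleveland Clinic's code. The input file, column
# names and choice of logistic regression are all assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

records = pd.read_csv("ehr_extract.csv")  # hypothetical EHR export

categorical = ["race", "gender", "socioeconomic_status", "medication_class"]
numeric = ["age", "num_prior_vaccinations"]

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    records[categorical + numeric], records["tested_positive"], test_size=0.2)
model.fit(X_train, y_train)

# Estimated probability that a patient tests positive, used to advise
# whether a scarce test is warranted.
print(model.predict_proba(X_test)[:5, 1])
```

The appeal of so simple a design is speed: a pipeline like this can be trained and validated in days once the specialists agree on which fields to pull from the records.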
Over the past year, the clinic published more than three dozen papers about its uses of artificial intelligence. Jehi and her colleagues created models that identified which patients with the virus were likely to need hospitalization, which helped with capacity planning. They built another model that alerted doctors to a patient’s risk of needing intensive care and prioritized those at higher risk for aggressive treatment. And when patients were sent home and monitored there, the clinic’s software flagged which ones might need to return to the hospital.
Artificial intelligence had already been in use by hospitals, but the unknowns of Covid-19 and the volume of cases created a frenzy of activity around the United States. Models sifted through data to help caregivers focus on the patients most at risk, sort threats to patient recovery and foresee spikes in facility needs for things like beds and ventilators. But with the speed also came questions about how to implement the new tools and whether the datasets used to build the models were sufficient and free of bias.
At Mount Sinai Hospital in Manhattan, geneticist Ben Glicksberg and nephrologist Girish Nadkarni of the Hasso Plattner Institute for Digital Health and the Mount Sinai Clinical Intelligence Center were asking the same questions as doctors at the Cleveland Clinic. “This was a completely new disease for which there was no playbook and there was no template,” Nadkarni says. “We needed to aggregate data from different sources quickly to learn more about this.”
At Mount Sinai, with patients flooding the hospital during the spring epicenter of the outbreak in North America, researchers turned to data to assess patients’ risk of critical events at intervals of three, five and seven days after admission, anticipating their needs. Doctors also predicted which patients were likely to return to the hospital and identified those who might be ready for discharge, freeing up in-demand beds.
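The Mount Sinai papers describe results rather than code, but predicting critical events at fixed intervals is commonly framed as one classifier per time horizon. A rough sketch under that assumption, with illustrative file, feature and label names:

```python
# Sketch of multi-horizon risk prediction; not Mount Sinai's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# One row per admission: vitals and labs at admission, plus labels marking
# whether a critical event (intubation, ICU transfer, death) occurred
# within 3, 5 or 7 days. File and columns are hypothetical.
data = pd.read_csv("admissions.csv")
features = ["age", "o2_saturation", "respiratory_rate", "crp", "d_dimer"]

models = {}
for horizon in (3, 5, 7):
    clf = GradientBoostingClassifier()
    clf.fit(data[features], data[f"event_within_{horizon}d"])
    models[horizon] = clf

# Score one patient at every horizon to anticipate needs over the coming week.
patient = data[features].iloc[[0]]
risks = {h: m.predict_proba(patient)[0, 1] for h, m in models.items()}
print(risks)  # e.g. {3: 0.12, 5: 0.19, 7: 0.27}
```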
Nearly a year into looking to machine learning for help, Glicksberg and Nadkarni say it’s a tool, not an answer. Their work showed the models identified at-risk patients and uncovered underlying relationships in their health records that predicted outcomes. “We’re not saying we’ve cracked the code of using machine learning for Covid and can 100 percent reliably predict clinically relevant events,” Glicksberg says.
“Machine learning is one part of the whole puzzle,” Nadkarni adds.
For Covid, artificial intelligence applications cover a broad range of issues, from helping clinicians make treatment decisions to informing how resources are allocated. New York University’s Langone Health, for instance, created an artificial intelligence program to predict which patients can move to lower levels of care or recover at home, opening up capacity.
Researchers at the University of Virginia Medical Center had been working on software to help doctors detect respiratory failure leading to intubation. When the pandemic hit, they adapted the software for Covid-19.
“It seemed to us when that all started happening, that this is what we had been working toward all these years. We didn’t anticipate a pandemic of this nature. But here it was,” says Randall Moorman, a professor of medicine with the university. “But it’s just the perfect application of the technology and an idea that we’ve been working on for a long time.”
The software, called CoMET, draws from a wide range of health measures, including EKG readings, laboratory test results and vital signs. It displays a comet shape on an LCD screen for each patient, one that grows in size and changes color as the patient’s predicted risk increases, providing caregivers with a visual alarm that stands out among the beeping alarms of a hospital unit. The software is in use at the University of Virginia hospital and is available to be licensed by other hospitals, Moorman says.
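CoMET itself is licensed software, so the following is only a toy illustration of the display idea the article describes, a marker whose size and color track predicted risk, written with matplotlib and an arbitrary color scale:

```python
# Toy rendering of the display concept only, not the licensed CoMET software:
# a marker that grows and shifts from green to red as predicted risk rises.
import matplotlib.pyplot as plt

def draw_risk_marker(ax, risk):
    """risk in [0, 1]; higher risk draws a larger, redder marker."""
    ax.clear()
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.scatter(0.5, 0.5, s=200 + 3000 * risk, color=plt.cm.RdYlGn_r(risk))
    ax.set_title(f"Predicted risk: {risk:.0%}")
    ax.axis("off")

fig, ax = plt.subplots()
draw_risk_marker(ax, 0.7)  # a deteriorating patient stands out at a glance
plt.show()
```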
Jessica Keim-Malpass, Moorman’s research partner and a co-author of a paper about using predictive software in Covid treatment, says the focus was on making the model practical. “These algorithms have been proliferating, which is great, but there’s been far less attention placed on how to ethically use them,” she says. “Very few algorithms even make it to any kind of clinical setting.”
Translating what the software does into something easy for doctors, nurses and other caregivers to use is key. “Clinicians are bombarded with decisions every hour, sometimes every minute,” she says. “Sometimes they really are on the fence about what to do and oftentimes things might not be clinically apparent yet. So the point of the algorithm is to help the human make a better decision.”
While many models are already in place in hospitals, others are still in the works: a number of applications have been developed but not yet rolled out. Researchers at the University of Minnesota have worked with Epic, the electronic health record vendor, to create an algorithm that assesses chest X-rays for Covid, taking seconds to find patterns associated with the virus. But it has not yet been approved by the Food and Drug Administration for use.
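The article does not detail how the Minnesota-Epic algorithm works internally. A common approach to this kind of chest X-ray screening, and purely an assumption here, is to fine-tune a convolutional network pretrained on general images; a brief PyTorch sketch:

```python
# Generic transfer-learning sketch, assumed rather than taken from the
# Minnesota-Epic model: fine-tune an ImageNet-pretrained CNN on labeled X-rays.
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, 2)  # two classes: Covid pattern / not

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, 224, 224) tensor (grayscale X-rays replicated to three
    channels); labels: (N,) class indices."""
    optimizer.zero_grad()
    loss = criterion(net(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Once trained, a single forward pass scores an X-ray in well under a second,
# consistent with the seconds-long turnaround described above.
```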
At Johns Hopkins University, biomedical engineers and heart specialists have developed an algorithm that warns doctors several hours before patients hospitalized with Covid-19 experience cardiac arrest or blood clots. In a preprint, researchers say it was trained and tested with data from more than 2,000 patients with the novel coronavirus. They are now developing the best way to set up the system in hospitals.
As hospitals look to integrate artificial intelligence into treatment protocols, some researchers worry the tools are being approved by the Food and Drug Administration before they have been deemed statistically valid. What requires FDA approval is fuzzy; models that require a health care worker to interpret the results don’t need to be cleared. Meanwhile, other researchers are working to improve the software tools’ accuracy amid concerns that they magnify racial and socioeconomic biases.
In 2019, researchers at the University of California reported that an algorithm hospitals used to flag high-risk patients for extra medical attention was skewed by the data used to create it: black patients assigned the same risk “score” as white patients were significantly sicker, because the model relied on health care spending, which runs lower for black patients, as a proxy for need. Because the pandemic disproportionately affects minorities, prediction models that do not account for their health disparities threaten to incorrectly assess their risk.
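One simple way to surface the kind of bias that study found is to check whether patients in different groups who receive the same score carry the same actual burden of illness. A sketch of such an audit, with hypothetical file and column names:

```python
# Hypothetical audit: within each risk-score decile, compare the actual burden
# of illness across racial groups. File and column names are assumptions.
import pandas as pd

scored = pd.read_csv("scored_patients.csv")  # risk_score, race, n_chronic_conditions
scored["score_decile"] = pd.qcut(scored["risk_score"], 10, labels=False)

# Large gaps within the same decile suggest the score understates one
# group's need, as the 2019 study found.
audit = (scored.groupby(["score_decile", "race"])["n_chronic_conditions"]
         .mean().unstack())
print(audit)
```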
In an August article in the Journal of the American Medical Informatics Association, researchers from Stanford University wrote that small data samples were not representative of overall patient populations and were biased against minorities. “There is hope that A.I. can help guide treatment decisions within this crisis; yet given the pervasiveness of biases, a failure to proactively develop comprehensive mitigation strategies during the COVID-19 pandemic risks exacerbating existing health disparities,” wrote the authors, including Tina Hernandez-Boussard, a professor at the Stanford University School of Medicine.
The authors expressed concern about over-reliance on artificial intelligence, which appears objective but is not, when it is used to allocate resources like ventilators and intensive care beds. “These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias—even if explicitly excluding sensitive attributes such as race or gender,” they added.
Glicksberg and Nadkarni, of Mount Sinai, acknowledge the importance of the bias issue. Their models drew on the Manhattan location’s diverse patient population, which spans the Upper East Side and Harlem, and were then validated using information from other Mount Sinai hospitals in Queens and Brooklyn, whose different patient populations helped make the models more robust. But the doctors acknowledge some underlying issues are not part of their data. “Social determinants of health, such as socioeconomic status, play an enormous role in almost everything health-related and these are not accurately captured or available in our data,” Glicksberg says. “There is much more work to be done to determine how these models can be fairly and robustly embedded into practice without disrupting the system.”
Their most recent model predicts how Covid-19 patients will fare by examining electronic health records held on separate servers at five hospitals, without pooling the underlying data, thereby protecting patient privacy. They found the combined model was more robust and a better predictor than models based on any individual hospital. Since limited Covid-19 data is segregated across many institutions, the doctors called the new approach “invaluable” in helping predict a patient’s outcome.
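Training across servers without moving records is the territory of federated learning. Whether the Mount Sinai team used this exact recipe is not stated here, but federated averaging is the canonical version; a self-contained sketch with synthetic stand-in data:

```python
# Sketch of federated averaging with synthetic data: each hospital refines the
# shared weights locally and sends back only the weights, never patient records.
import numpy as np
from sklearn.linear_model import SGDClassifier

def federated_round(coef, intercept, hospital_datasets):
    """One round: local training at every site, then a server-side average."""
    local = []
    for X, y in hospital_datasets:
        m = SGDClassifier(loss="log_loss", max_iter=5)
        m.fit(X, y, coef_init=coef, intercept_init=intercept)
        local.append((m.coef_, m.intercept_))
    return (np.mean([c for c, _ in local], axis=0),
            np.mean([b for _, b in local], axis=0))

# Five stand-in "hospitals", each with its own private data.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(200, 4)), rng.integers(0, 2, size=200))
             for _ in range(5)]

coef, intercept = np.zeros((1, 4)), np.zeros(1)
for _ in range(10):
    coef, intercept = federated_round(coef, intercept, hospitals)
```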
Jehi says the Cleveland Clinic database now holds more than 160,000 patients, with more than 400 data points per patient, to validate its models. But the virus is mutating, and the algorithms must keep chasing the best possible treatment models.
“The issue isn’t that there isn’t enough data,” Jehi says. “The issue is that data has to be continuously reanalyzed and updated and revisited with these models for them to maintain their clinical value.”
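Jehi’s point about continuous reanalysis maps to what engineers call model monitoring: scoring each new wave of patients, tracking performance and retraining when it slips. A minimal sketch, with an arbitrary threshold and cadence that are assumptions, not the clinic’s actual policy:

```python
# Sketch of ongoing revalidation; the threshold and monthly cadence are
# illustrative assumptions.
from sklearn.metrics import roc_auc_score

RETRAIN_MARGIN = 0.05  # retrain once AUC falls this far below baseline

def monthly_check(model, X_new, y_new, baseline_auc, retrain_fn):
    """Score the newest patients; if discrimination has degraded, refit on
    data that includes them and return the updated model."""
    auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    if auc < baseline_auc - RETRAIN_MARGIN:
        model = retrain_fn()
    return model, auc
```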