Even predictive models can be wrong

2023-10-14 13:00:56

by Ruggiero Corcella

An American study has highlighted the “flaws” of artificial intelligence: these tools require strict and constant maintenance, much like a car needs regular servicing

More and more AI tools are being used to predict everything from sepsis to strokes, in the hope of getting life-saving treatment to patients sooner. But, as with humans, AI can also make mistakes.

“The use of Artificial Intelligence (AI) has entered daily clinical practice, and there is no doubt that it has brought numerous advantages, especially in precise and well-defined fields. A clear example is diagnostic imaging, where AI is genuinely helping radiologists interpret (digital) images to reach correct and increasingly rapid diagnoses. In other clinical fields we are still working out the best ways to apply it, for example in predictive risk models for the most critically ill patients, such as those admitted to intensive care,” explains Elena Giovanna Bignami, full professor of Anesthesia and Resuscitation at the University of Parma and artificial intelligence expert of the Italian Society of Anesthesia, Analgesia, Resuscitation and Intensive Care (Siaarti).

However, over time, machine learning-based models in healthcare may become victims of their own success, according to researchers at the Icahn School of Medicine at Mount Sinai and the University of Michigan. In one study, the team evaluated the impact of implementing predictive models on the subsequent performance of those and other models. The findings, published in Annals of Internal Medicine, show that using the models to adapt how care is delivered can alter the underlying assumptions on which the models were “trained,” often for the worse.

The models put to the test in the hospital

“We wanted to explore what happens when a machine learning model is deployed in a hospital and allowed to influence doctors’ decisions for the overall benefit of patients,” says first author Akhil Vaid, clinical instructor of Digital and Data-Driven Medicine (D3M), part of the Department of Medicine at Icahn Mount Sinai. Vaid and his colleagues simulated the implementation of two models that predicted a patient’s risk of death and of acute kidney injury within five days of admission to intensive care.

Their simulations assumed that the models did what they were supposed to do: reduce deaths and kidney injuries by identifying patients for early intervention. But as patients began to do better, the models became much less accurate at predicting the likelihood of acute kidney injury and all-cause mortality, and neither retraining the models nor other methods of stopping the decay helped. “Artificial intelligence models possess the ability to learn and establish correlations between incoming patient data and corresponding outcomes, but use of these models, by definition, can alter those relationships. Problems arise when these altered relationships are recorded back into medical records,” says Vaid.

A vicious circle

The new study identified the problem: successful predictive models create a “loop.” As AI helps guide interventions that keep patients healthier, the electronic health records within a system begin to reflect lower rates of kidney injury or mortality; yet these are the very data to which other predictive models are applied and on which models are retrained over time.
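To make this feedback loop concrete, here is a minimal, hypothetical sketch, not taken from the study itself: a simple risk model is trained on baseline records, then used to trigger an intervention that (by assumption) halves the risk of flagged patients, and a second model retrained on the resulting records ends up underestimating risk for untreated patients. The severity score, risk curve, intervention effect and alert threshold are all invented for illustration.

```python
# A toy simulation (not the study's code) of the feedback loop: a risk model
# triggers interventions, the interventions lower recorded outcomes, and a
# model retrained on those records underestimates risk for untreated patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n):
    # One invented "severity" feature; true risk follows an assumed logistic curve.
    severity = rng.normal(size=n)
    true_risk = 1.0 / (1.0 + np.exp(-(severity - 1.0)))
    return severity.reshape(-1, 1), true_risk

# 1) Train on pre-deployment data, before any model-guided care exists.
X0, risk0 = make_patients(20_000)
y0 = rng.binomial(1, risk0)
original = LogisticRegression().fit(X0, y0)

# 2) Deploy: flagged high-risk patients receive an early intervention that,
#    by assumption, halves their true risk; the EHR records the treated outcomes.
X1, risk1 = make_patients(20_000)
flagged = original.predict_proba(X1)[:, 1] > 0.3
y1 = rng.binomial(1, np.where(flagged, 0.5 * risk1, risk1))

# 3) Retrain on post-deployment records, as a health system refreshing its model might.
retrained = LogisticRegression().fit(X1, y1)

# 4) Compare both models on fresh, untreated patients: the retrained model has
#    absorbed the intervention's effect and now underestimates risk.
X2, risk2 = make_patients(20_000)
y2 = rng.binomial(1, risk2)
print(f"actual event rate (untreated):        {y2.mean():.3f}")
print(f"original model, mean predicted risk:  {original.predict_proba(X2)[:, 1].mean():.3f}")
print(f"retrained model, mean predicted risk: {retrained.predict_proba(X2)[:, 1].mean():.3f}")
```

In this toy setting the retrained model systematically predicts fewer events than actually occur among untreated patients, mirroring the kind of calibration decay the study describes.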

The study simulated intensive care scenarios at two major healthcare institutions, the Mount Sinai Health System in New York and the Beth Israel Deaconess Medical Center in Boston, analyzing 130,000 intensive care admissions. The researchers demonstrated how the problem emerges in three scenarios, each commonly implemented by healthcare systems using AI today.

The real problem: the quality of the data

“The conclusions are not always easy to interpret, and they may seem discouraging,” says Professor Bignami, “because the predictive model appears to give worse results, without helping clinicians make decisions that truly benefit patients. Vaid and colleagues found that the models’ real predictive power is tied to the quality of the data recorded in the medical record. Furthermore, applying several ‘general’ prediction tools at once, that is, tools not targeted at a specific condition such as sepsis, can give the doctor too much information, with false alarms and poorly timed interventions.”

Careful maintenance of the AI is required

A failure beyond recovery? “We should not consider predictive models to be unreliable,” says Professor Girish Nadkarni, director of the Charles Bronfman Institute for Personalized Medicine and head of Digital and Data-Driven Medicine at Icahn Mount Sinai.

“Instead, it’s about recognizing that these tools require regular maintenance, understanding and contextualization. Neglecting to monitor their performance and impact can compromise their effectiveness. We need to use predictive models thoughtfully, just like any other medical tool. Learning health systems must pay attention to the fact that indiscriminate use and updates of such models will cause false alarms, unnecessary testing and increased costs.”

“We recommend that health systems promptly implement a system to track individuals affected by machine learning predictions, and that relevant government agencies issue guidelines,” says Dr. Vaid. “These results are also applicable outside the healthcare context and extend to predictive models in general. We therefore live in a world where any model used naively can disrupt the function of current and future models and ultimately become useless.”

