ChatGPT and cardiac arrest: better not to rely (yet) on artificial intelligence to learn how to perform life-saving maneuvers


by Ruggiero Corcella

The chatbot can provide accurate and complete general information on cardiopulmonary resuscitation. These are the findings of a study coordinated by the San Raffaele Hospital in Milan and published in Resuscitation.

When it comes to cardiopulmonary resuscitation (CPR), you should not rely on ChatGPT to learn the maneuvers that can save the life of a person in cardiac arrest. The platform, based on large language models, a form of generative artificial intelligence, has nevertheless shown that it can provide accurate and complete general information on CPR.

These are the findings of a study published in Resuscitation and coordinated by the IRCCS San Raffaele Hospital in Milan, in collaboration with anesthetists-intensivists from Milan, Bologna, Parma and the United Kingdom, and involving patients who survived cardiac arrest and their families.

The risks of inaccurate medical information

Cardiac arrest raises a series of questions among witnesses to the event, survivors and their families, and when it involves a young person or a public figure, attention to issues such as cardiac arrest and cardiopulmonary resuscitation (CPR) increases significantly. The Internet is often used as a tool to find information on medical issues, and more recently ChatGPT has made it possible for anyone to obtain human-like answers on a wide variety of topics, including health and medicine.

However, given the risks associated with access to inaccurate medical information, a team of doctors and researchers led by Dr. Tommaso Scquizzato, resident in Anesthesia and Intensive Care, and Professor Giovanni Landoni, director of the Center for Research in Anesthesia and Intensive Care at the IRCCS San Raffaele Hospital and the Vita-Salute San Raffaele University, evaluated how well ChatGPT answers the questions of non-experts about cardiac arrest and cardiopulmonary resuscitation (CPR).

Cardiac arrest survivors and their families were also involved

To carry out the evaluation, cardiac arrest survivors and their family members were involved so as to obtain direct feedback on the answers provided by ChatGPT. This made it possible to incorporate the perspective of those who may actually use these systems in the future.

Research background

ChatGPT, introduced by OpenAI in November 2022, is an advanced natural language processing tool based on artificial intelligence (AI) technologies. It allows users to hold human-like conversations and provides detailed answers on a wide range of topics. ChatGPT's knowledge comes from a wide variety of sources available on the Internet, including books, articles, websites and other publicly accessible written material.

"Although most of the content the artificial intelligence draws on is reliable, it must be stressed that ChatGPT may also learn incorrect or partially inaccurate information from the web, or reprocess what it has learned incorrectly. When this happens with questions about health problems, the consequences can be serious. In the case of cardiac arrest, for example, ChatGPT could give wrong instructions on how to intervene or offer inadequate explanations to the family members of a person in cardiac arrest. For this reason we decided to test and evaluate ChatGPT's ability to answer a series of questions that survivors and their families typically ask us every day," says Tommaso Scquizzato, first author of the study.

Research

The researchers co-produced a list of 40 questions together with cardiac arrest survivors and their family members. The answers given by ChatGPT (March 2023 version) to each question were then evaluated by 14 professionals (doctors, nurses and psychologists) for accuracy, and by both professionals and lay people (cardiac arrest survivors and their families) for relevance, clarity, completeness and overall value, on a scale from 1 (poor) to 5 (excellent), as well as for readability.

"We put these questions to ChatGPT, and the answers received an overall positive rating, with the exception of those on how to perform cardiopulmonary resuscitation, which were judged the worst in terms of correctness," explains Dr. Marco Mion, clinical psychologist at the Essex Cardiothoracic Centre in Basildon, United Kingdom, and last author of the study.

The error

"A concrete example of ChatGPT's errors emerged when we asked it to explain how to perform cardiopulmonary resuscitation on a woman: ChatGPT advised the user to place their hands in the center of the chest, between the nipples. The correct instruction that ChatGPT should have given," explains Dr. Arianna Gazzato, nurse in the Cardio-Thoraco-Vascular Intensive Care Unit, "is to compress the center of the chest with the hands placed one on top of the other on the lower half of the sternum."

The truth about ChatGPT

In short (and fortunately), ChatGPT makes mistakes as much as or more than humans do. The World Health Organization itself, in a recent document, while enthusiastically welcoming the advent of artificial intelligence chatbots as a way to protect and promote human well-being, human safety and autonomy and to preserve public health, has called for caution in their use.

"It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings in order to protect people's health and reduce inequity," write the WHO experts, among other things.

"This study is very important because it allows us to evaluate generative artificial intelligence (AI) in the real world," stresses Elena Giovanna Bignami, full professor of Anesthesia and Resuscitation at the University of Parma, expert in artificial intelligence and president of the Italian Society of Anesthesia, Analgesia, Resuscitation and Intensive Care. "ChatGPT has in fact gone through a phase of initial enthusiasm, a bit like all new things, even in medicine, and, like all fashions, it has already been called into question. The aim was to understand, particularly in concrete and specific fields, where the truth lies today."

The importance of data quality

One of the main limitations of artificial intelligence, and a source of the problems that can follow from it, lies in the quality of the data it feeds on. "One of the two main variables of this type of AI is that the quality of the information coming from these new technologies changes over time, because it depends on the quality of the information the algorithm has available at that moment," explains Professor Bignami. "Equally important is the quality of the system itself, of the algorithm used. Knowing where the quality of the information provided by these 'electronic talking crickets' stands is all the more important when a citizen chooses to interact with them on public health issues as important as this one, which also require a performance and a result (CPR) within a very short time frame (a few minutes)."

"This could also be of concrete help to medical staff, or for specific, dedicated training. ChatGPT provided quality data and proved to be a potentially useful tool for managing such important issues in the future, issues that ideally concern every citizen. The important thing is that these AI tools are monitored, and that they can draw on quality variables that are increasingly precise, dedicated and sector-specific (technically 'vertical'), so that they learn correctly. Just as a good student would," she adds.

Collaborations

"This study," concludes Professor Giovanni Landoni, "is the result of a fruitful collaboration between our anesthesia and intensive care research center at the IRCCS San Raffaele Hospital, the Essex Cardiothoracic Centre in Basildon, the Anglia Ruskin School of Medicine in Chelmsford, United Kingdom, and Sudden Cardiac Arrest UK, a British association that brings together cardiac arrest survivors and their families. The active involvement of patients and their families in the design and conduct of the study is a distinctive and innovative feature of this project, and it allowed us to carry out a more thorough and complete analysis."


December 14, 2023 (modified December 14, 2023 | 12:04)
