Ethical tensions around the use of artificial intelligence in medicine

by time news

Identifying precancerous lesions on an X-ray that escape the human eye, allowing doctors to continuously monitor patients' biological indicators at home, preventing suicide risk by analyzing exchanges on Facebook… In the field of health, and in particular medical diagnosis, artificial intelligence (AI) is opening up "major prospects". The National Consultative Ethics Committee (CCNE) and the National Pilot Committee for Digital Ethics (CNPEN) readily acknowledge these contributions in a joint opinion adopted at the end of November 2022 and made public on Tuesday, January 10.

But SIADMs (artificial intelligence systems for medical diagnosis) must remain "human decision support" without falling into a "logic of substitution", the document states. About sixty pages long, it was drawn up by a 17-member working group set up after a ministerial referral dating from the summer of 2019.

The need for human control

The co-rapporteur of the text for the CNPEN, David Gruson, says he is attached to the notion of "positive regulation". "Taking a blocking stance toward AI would pose major risks for patients, and would encourage the import of programs from countries where they are less well regulated," says this member of the Sciences Po Paris health chair. "We have chosen the path of openness, provided that it is controlled and regulated."

While the algorithm may be reassuring in its rigorous, automatic operation, it sometimes makes errors: false negatives (lesions or anomalies it misses) as well as false positives (it identifies lesions that do not actually exist). "Human control at all stages of care" is one of the key recommendations of this opinion, which contains 16 in all, along with seven points of vigilance.

Vigilance is all the more necessary because these systems currently fall outside the evaluation of the High Authority for Health (HAS). Indeed, they are not intended to be sold to patients and are therefore almost never reimbursed by Social Security; reimbursement is the condition for HAS to assess a medical device.

"The need to maintain human supervision over such digital tools has already been incorporated into several pieces of legislation," notes David Gruson, citing the French bioethics law revised in 2021, as well as the long-awaited draft European regulation expected to be adopted in 2023 (the Artificial Intelligence Act).

“We risk becoming very dependent”

"We can keep control over an AI when it does what humans already know how to do, but it's much more difficult when it performs actions that we are not capable of," notes Jean-Emmanuel Bibault, radiation oncologist and AI researcher at Inserm (1). He mentions in particular certain prediction tasks, or the extremely fine analysis of scans that would appear "normal" to doctors.

So that doctors do not become mere "AI pilots" within a few decades, Jean-Emmanuel Bibault insists on the need to keep teaching medical students what these computer systems can now accomplish. "Otherwise, we risk becoming very dependent and no longer being able, one day, to tell whether an AI starts doing just anything." Continuing teaching and research on "already established diagnostic methods" is also one of the 16 recommendations of the CCNE and CNPEN.

In a context of scarce hospital resources, the two bodies refuse to regard technology as "a way to compensate for the deficient organization" of the health system. "The obstacles to access to care cannot be removed by digital tools alone, the appropriation of which by patients is unequal," the document says. Capable of quickly accomplishing long or repetitive tasks, SIADMs could free up doctors' time to talk with their patients or handle more complex situations. But they must be seen as "complementary" responses, not "substitute" ones, to the current shortcomings of the healthcare system.

———-

For digital ethics, a “pilot committee”

In December 2019, at the request of Prime Minister Édouard Philippe, the National Consultative Ethics Committee (CCNE) created the Digital Ethics Pilot Committee (CNPEN).

Composed of about thirty members, it is chaired by Claude Kirchner, emeritus research director at Inria.

It has already issued four advisory opinions, on subjects including conversational agents and autonomous vehicles. On artificial intelligence systems for medical diagnosis (SIADMs), it worked hand in hand with the CCNE, since the subject is as much a matter of bioethics as of digital ethics.

Like the CCNE, the CNPEN can be referred matters by the President of the Republic, the presidents of the parliamentary assemblies, and members of the government. It can also take up matters on its own initiative.
