The US Army is working on an artificial intelligence to decide who receives medical help in combat

by time news

R. A.

Madrid


The Defense Advanced Research Projects Agency (DARPA), in charge of innovation projects for the US military, has announced the development of an artificial intelligence to help decide which combat-wounded soldiers should receive medical care first, and to help make other decisions “in stressful situations” for which “there is no agreed-upon right answer.” These are situations in which, moreover, human judgment can fail because of bias.

The project is called ‘In the Moment’ (ITM). According to the program’s details, substituting data and algorithms for biased human judgment in combat situations can “help save lives.”

The program, however, is in its infancy. It is expected to be progressively developed over the next three and a half years.

Once it is finished, DARPA’s plan is for ITM to help with decision-making in two specific situations: when members of small units are injured, and when an attack causes mass casualties. The AI will also be trained on the decisions of triage experts. DARPA also expects to develop algorithms that help make decisions in disaster situations, such as earthquakes, army officials told ‘The Washington Post’.

Initially, however, the goal is for the system to be able, for example, to identify all the resources available at nearby hospitals, as well as the availability of medical staff, in order to make the right decisions. “Computer algorithms can find solutions that humans can’t,” says Matt Turek, ITM program manager, speaking to US media.
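To make that idea concrete, here is a minimal, purely illustrative sketch of how a resource-aware triage heuristic might rank casualties and match them to nearby facilities with capacity. This is not DARPA’s ITM system; every field name, scoring rule, and weight below is a hypothetical assumption for illustration only.

```python
# Purely illustrative triage-prioritization sketch. NOT DARPA's ITM system;
# all fields, scales, and rules here are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Casualty:
    injury_severity: int      # 1 (minor) .. 5 (critical), hypothetical scale
    survival_chance: float    # estimated probability of survival with care
    minutes_to_hospital: int  # travel time to the nearest capable facility


@dataclass
class Hospital:
    name: str
    free_beds: int
    surgeons_on_duty: int


def priority(c: Casualty) -> float:
    """Higher score = treat sooner. A toy heuristic: favor severe injuries
    where care is likely to help and a facility is reachable quickly."""
    return c.injury_severity * c.survival_chance / max(c.minutes_to_hospital, 1)


def assign(casualties: list[Casualty],
           hospitals: list[Hospital]) -> list[tuple[Casualty, Hospital]]:
    """Greedily match the highest-priority casualties to hospitals that
    still have beds and surgeons available."""
    plan = []
    beds = {h.name: h.free_beds for h in hospitals}
    for c in sorted(casualties, key=priority, reverse=True):
        for h in hospitals:
            if beds[h.name] > 0 and h.surgeons_on_duty > 0:
                beds[h.name] -= 1
                plan.append((c, h))
                break
    return plan


if __name__ == "__main__":
    casualties = [Casualty(5, 0.6, 20), Casualty(2, 0.95, 10), Casualty(4, 0.3, 45)]
    hospitals = [Hospital("Field Hospital A", free_beds=2, surgeons_on_duty=1)]
    for c, h in assign(casualties, hospitals):
        print(f"priority={priority(c):.2f} -> {h.name}")
```

A real system would of course rely on far richer data, validated clinical models, and expert oversight; the greedy matching above simply illustrates the kind of resource-aware prioritization the article describes.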

Artificial intelligence has been gaining importance in the military world for decades. It is also one of the main concerns of experts in technological ethics, because a machine, no matter how well trained, is always susceptible to failure. This was made clear by several experts consulted by ABC a few months ago regarding the development of autonomous weapons, in which the AI is capable of attacking human targets fully independently.

“Not only is it possible for AI to fail, it is also possible to make it fail,” Juan Ignacio Rouyet, an expert in AI and ethics and professor at UNIR, explained in a conversation with this newspaper.
