A world of “killer robots” is anticipated, along with the need to regulate them

By Time News

2023-12-05 12:42:30

Armed drones – NARA.GETARCHIVE.NET

MADRID, Dec. 5 (EUROPA PRESS) –

The possibility that lethal autonomous weapons systems will soon be deployed on battlefields creates an urgent need for global action to regulate these technologies.

That is the conclusion of a new book titled “The Military AI Race: Common Good Governance in the Age of Artificial Intelligence”, written by Denise García, professor of political science and international affairs at Northeastern University, who served on the International Panel on the Regulation of Autonomous Weapons from 2017 to 2022.

As artificial intelligence advances, weapons of war are becoming increasingly capable of killing people without meaningful human oversight. This raises troubling questions about how the wars of today and tomorrow will be conducted, and about how autonomous weapons systems could weaken accountability for potential violations of international law that accompany their deployment.

In her book, Denise García condenses these grim realities and explores the challenges of “creating a global governance framework” that anticipates a world of rampant AI weapons systems in the context of deteriorating international law and norms. She highlights that military applications of AI have already been implemented in the ongoing conflicts in Europe and the Middle East; one of the best-known examples is Israel’s Iron Dome.

“The world must come together and create new global public goods, which I would say must include a framework to govern AI, but also commonly agreed-upon rules on the use of AI in the military,” says García in a statement from her university.

This expert warns that accelerating militarized AI as such is not the right approach and risks adding more volatility to an already very unstable international system. “Simply put, AI should not be trusted to make decisions about war,” she says.

Some 4,500 AI and robotics researchers have collectively said that AI should not make decisions regarding the killing of human beings, a position, García notes, that aligns with European Parliament guidelines and European Union regulation. However, US officials have pushed for a regulatory paradigm of rigorous testing and design so that humans can use AI technology “to make the decision to kill.”

“This looks good on paper, but it is very difficult to achieve in reality, since it is unlikely that algorithms can assimilate the enormous complexity of what happens in war,” says García.

AI weapons systems not only threaten to disrupt standards of accountability under international law, but also make the prosecution of war crimes much more difficult because of problems associated with attributing “combatant status” to military AI technology, says García.

“International law (and laws in general) have evolved to focus on human beings,” she says. “When you insert a robot or software into the equation, who will be responsible?”

And she continues: “The difficulties of attributing responsibility will accelerate the dehumanization of war. When humans are reduced to data, human dignity will dissipate.”

Existing military applications of AI and quasi-AI have already made waves in defense circles. One such application, according to one source, allows a single person to control multiple unmanned systems, such as a swarm of drones capable of attacking from the air or under the sea. In the Ukraine war, loitering munitions (unmanned aircraft that use sensors to identify targets, or “killer drones”) have sparked debate about exactly how much control human agents have over targeting decisions.
