Will we have “ethical” robots (even in healthcare)? A virtual research center is working on this goal – time.news

by Ruggiero Corcella

A European project, in which Italy participates, aims to create solutions that make artificial intelligence not only useful and effective but above all reliable and safe

When we talk about artificial intelligence, it becomes crucial to understand how the technological devices equipped with it will “behave”. Paolo Benanti states the terms of the question very clearly in his essay «Human in the loop. Human decisions and artificial intelligences» (Mondadori): «For AI (Artificial Intelligence in English, or IA in Italian, ed.) no form of automatic or implicit ethics is conceivable. It is unthinkable to make ethics emerge from data. This is why the problem is not to replace ethics with algorithms», writes Benanti.

«Instead we have to start developing a new chapter of ethics: algorethics. In the first instance, it is a matter of always leaving a space for man and his world of values with which to judge: this act may at times be precarious and uncertain, but it is irreplaceable and cannot be delegated to the machine», adds the author.

The project

But how is it possible to teach AI to act according to the founding principles of being human, that is, ethically? The «European Lighthouse on Secure and Safe AI» (ELSA) project tries to answer precisely this question. The initiative is funded by the EU with a three-year budget of approximately 7.5 million euros, plus another 2.5 million euros from the United Kingdom and Switzerland. Participating is a network of excellence bringing together researchers from 26 leading research institutes, led by the Cispa Helmholtz Center for Information Security in Saarbrücken (Germany), and companies that combine their expertise in AI and machine learning. «AI has the potential to vastly improve all of our lives, through better healthcare as well as entirely new mobility models. But a blessing can quickly turn into a curse if the technology does not rest on a secure footing», says Mario Fritz, professor at Cispa.

The Italian “team”

The goal of the project is to build a virtual center of excellence that promotes the development and deployment of cutting-edge artificial intelligence solutions and transforms Europe into a global beacon of trusted and secure AI. In Elsa, Italy is represented by the Universities of Modena and Reggio Emilia, Cagliari, Genoa and Milan, and by the Turin Polytechnic. «We would like to build a new generation of intelligent tools able, in some way, to optimize not only technical but also human metrics», explains Luca Oneto, who coordinates a team of researchers from the Department of Computer Science, Bioengineering, Robotics and Systems Engineering (Dibris) together with Fabio Roli and Davide Anguita, all professors of Information Processing Systems at the University of Genoa. What does this mean? «These machines must not only comply with technical operating requirements, but also preserve fundamental human rights. It is therefore necessary that these tools demonstrably possess ethical-human properties as well», underlines the expert.

Mitigate threats and damage

In short, the network of excellence should be able to detect potential threats in a timely manner and mitigate damage resulting from the use of AI. This is a fundamental task, because the areas of application of artificial intelligence solutions are extremely delicate: just think of the health sector. The three-year project will focus on developing robust technical approaches that are compatible with legal and ethical principles. «We are inspired by the legislation being developed at the European level: the GDPR and the Data Act for data protection and distribution, and the AI Act for the regulation of artificial intelligence. And this is the legal part. Then there is an ethical part, where a multidisciplinary approach is necessary».

Translation into reality

How will all this translate into practice, especially in the light of continuous technological innovation? «The methodologies being developed serve two main purposes. The first is to build a new generation of intelligent machines able to satisfy legal and ethical constraints by design, i.e. from the moment they are conceived. The second is to develop methods that allow existing tools to be endowed with these new properties», adds the expert. AI is also used in robotics, cybersecurity, and media and document security, all areas of interest for the new virtual center, which will focus on further developing machine learning methods. One example is deep learning, which currently forms the basis of most modern AI applications.

Break down barriers

At the same time, the European Elsa network aims to create the structures needed to promote the development and use of artificial intelligence technology and to break down barriers. «Obviously we will not be able to solve all the questions we have been asking ourselves within three years. What we have been trying to do for some time now is to follow the continuous development of artificial intelligence, trying to understand what the problems are in order to develop countermeasures. This is a continuous process that will last for many years, if not decades», adds Oneto.

«The difference with respect to the past is that these issues used to be addressed individually by small working groups scattered throughout Europe and the world. The idea of this laboratory, instead, is to systematize skills, make people understand the importance of these issues, and attract the attention of all stakeholders, users and those affected by artificial intelligence. So the greatest result we can achieve will be to raise awareness more effectively and, above all, to design tools that are then adopted on a large scale», he concludes.

December 5, 2022 (updated December 5, 2022, 16:48)
