Entrepreneurs and researchers call for a pause in AI development


A six-month pause in the training of AI systems more powerful than GPT-4: that is what almost 2,000 signatories of an open letter, published on Tuesday on the website of the Future of Life Institute, are demanding. The institute is a non-profit organization committed to the responsible and low-risk use of transformative technologies. Advanced artificial intelligence could herald a profound change in the history of life on Earth, the letter says, and must therefore be planned for with due care.

Sibylle Anderl

Editor in the feuilleton, responsible for the “Nature and Science” department.

The risks of propaganda and false information, the loss of jobs and a general loss of control are seen as particularly worrying. Powerful AI systems should therefore only be developed further once it is clear that these risks can be controlled. Among the signatories are well-known names such as Apple co-founder Steve Wozniak and Elon Musk, although the latter has not exactly distinguished himself as an entrepreneur with particularly high moral standards.

“Concern that we won’t be able to keep up with regulation”

Also among the signatories from Germany are Ute Schmid, professor and head of the Cognitive Systems working group at the Otto Friedrich University of Bamberg, and Silja Vöneky, head of the FRIAS Responsible AI research group at the Albert Ludwig University of Freiburg. Schmid explained her participation to the German Science Media Center (SMC) by pointing to the need to highlight the risks of using large language models and other current AI technologies. One must try, she said, to enter into a broad democratic discourse in which AI experts from research institutes and large tech companies actively participate.

Vöneky, on the other hand, a professor of international law and legal ethics, particularly emphasized the lack of a suitable legal framework: “My concern is that we won’t be able to keep up with regulation. The EU’s AI regulation is not yet in force, and it only classifies these systems as low risk, so it hardly regulates them.” The same applies to the Council of Europe’s convention on AI and human rights. There is no other binding international treaty on AI.

The EU’s AI regulation, which has been under negotiation for two years, could be passed this year at the earliest. At its core, it consists of a risk-based, three-tier regulatory framework that distinguishes between AI systems with unacceptable risk, high risk, and low risk. Chatbots like ChatGPT would fall into the last category. Even once the regulation comes into force, in two years at the earliest, nothing would change for the technologies criticized in the open letter. Vöneky criticizes this sluggishness: the regulation has so far been conceived too “statically” and cannot “react quickly enough to new risk situations caused by new technical developments”.

A temporary freeze on research could, at least in theory, give politicians and the judiciary the opportunity to catch up on what has been neglected here. “A moratorium would have the advantage that regulations could be decided proactively before research progresses further,” Thilo Hagendorff, a research group leader at the University of Stuttgart, told the SMC. At the same time, however, he views the statement critically: “Ultimately, the moratorium serves precisely those institutions whose activities it actually means to problematize. It suggests completely exaggerated capabilities of AI systems and stylizes them as more powerful tools than they actually are.”

The moratorium thus fuels misunderstandings and misperceptions about AI and thereby tends to distract from the actual problems, or even to exacerbate them. After all, exaggerated expectations and too much trust in the new, powerful language models are precisely the factors that promote the lamented loss of control, the risk of disclosing intimate information, and the failure to adequately check the answers these models provide.

In any case, it remains completely unclear how a research freeze could be monitored and enforced at all. This is already evident from the fact that the demand to pause systems more powerful than GPT-4 is not clearly defined: given the lack of transparency about the technical details and capabilities of OpenAI’s language model, it would be difficult to decide which models are affected. What is more, halting development carries risks of its own. Thilo Hagendorff illustrates this with several scenarios: “If a query to a language model can provide better answers than human experts, then this makes all knowledge work more productive. In extreme cases, it can even save lives. Language models in medicine, for example, are a great opportunity to save more lives or reduce suffering.”

Meanwhile, Italy has already created facts on the ground. The Italian data protection authority took alleged violations of data protection and youth protection rules as an occasion to demand that OpenAI stop offering the application in Italy. Nello Cristianini, professor of artificial intelligence at the University of Bath, told the British SMC that he saw this as confirmation that the open letter had made a valid point: “It is not clear how these decisions will be enforced. But the mere fact that there seems to be a mismatch between technological reality and the legal framework in Europe suggests there might be some truth to the letter signed by various AI entrepreneurs and researchers two days ago.”
