This Wednesday, a group of prominent artificial intelligence experts and industry executives, Elon Musk among them, called for a six-month pause in the development of artificial intelligence (AI) systems more powerful than GPT-4, the OpenAI model released this month, warning of “great risks to humanity.”
In the petition, posted on the futureoflife.org site, they call for a moratorium until safety systems are established, including new regulatory authorities, oversight of AI systems, techniques that help distinguish the real from the artificial, and institutions capable of coping with the “dramatic economic and political disruption (especially for democracy) that AI will cause.”
It is signed by figures who have voiced fears of an uncontrollable AI surpassing humans, including Musk, owner of Twitter and founder of SpaceX and Tesla, and historian Yuval Noah Harari.
Sam Altman, head of OpenAI, the company that designed ChatGPT, has admitted to being “a little afraid” that his creation could be used for “large-scale disinformation or cyberattacks.” Speaking recently on ABC News, Altman said that society needs time to adapt.
“Over the past few months we have seen AI labs locked in a headlong race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” say the experts.
“Should we allow machines to flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? (…) Should we risk losing control of our civilization? These decisions should not be delegated to unelected technology leaders,” they concluded.
Signatories also include Apple co-founder Steve Wozniak, members of Google’s DeepMind AI lab, Stability AI chief Emad Mostaque, American AI experts and academics, and engineering executives from OpenAI partner Microsoft.