Musk and dozens of tech leaders call for a halt to artificial intelligence development: "a danger to humanity"
The signatories of the letter said caution must be exercised in developing AI, which could represent a "profound change in the history of life on Earth," while claiming that the situation is getting out of control
Some of the biggest names in technology are calling for an immediate halt to the training of artificial intelligence systems for at least six months, citing “profound risks to society and humanity as a whole.”
Elon Musk is among dozens of senior executives in the technology world, as well as professors and researchers, who signed the letter published by the Future of Life Institute, an organization backed by Musk.
The letter came two weeks after OpenAI announced GPT-4, a powerful new version of the AI model underlying the ChatGPT chatbot.
According to the signatories, the pause should apply to artificial intelligence systems "more powerful than GPT-4," to allow time to develop and deploy a shared set of safety protocols that would make these powerful tools safe "beyond reasonable doubt".
“Advanced artificial intelligence could represent a profound change in the history of life on Earth, and it must be planned and managed carefully and with appropriate resources,” it said. “Unfortunately, this level of planning and management does not exist on the ground, although recent months have seen an out-of-control development and deployment of more powerful digital minds that no one, not even their creators, can reliably understand, predict or control.”
If there isn’t a proactive pause soon, the executives wrote, governments should step in.
The development of artificial intelligence has become a veritable arms race among technology companies racing to deploy AI tools in their products. The leading companies are OpenAI, Microsoft and Google, as well as Amazon, IBM, and others, alongside a long list of startups.
AI experts are increasingly concerned about the risks posed by this advanced technology, such as the ease of spreading misinformation and threats to personal privacy.
Just recently, OpenAI CEO Sam Altman said that he himself is "a little afraid" of the technology, since it could become powerful enough to be dangerous.
“I think people should be happy that we’re a little afraid of it,” said Altman, 37, a Jewish-American entrepreneur. “It’s going to eliminate a lot of existing jobs.”