Elon Musk, Steve Wozniak and Yuval Noah Harari call: Stop AI development

by time news


It seems that in recent months not a day goes by without news of developments in the field of AI. From ChatGPT and GPT-4 to Midjourney, Bing, Bard, DALL·E and other models and services, all are locked in an impressive arms race, but one that worries quite a few experts around the world and key figures in the technology industry. Now they are issuing a broad call to suspend development in the field.

A 6-month break – now

“Pause Giant AI Experiments: An Open Letter” is the title of a letter that has so far been signed by more than 1,125 people, including particularly notable figures such as Professor Yuval Noah Harari; Steve Wozniak, co-founder of Apple; Elon Musk; and other researchers and CEOs of AI companies (note, however, that at one point someone fraudulently added Sam Altman, CEO of OpenAI, to the list, so treat the signatures with some caution).

The open letter states that AI systems with human-competitive intelligence pose real risks to society and humanity, and cites several studies on the subject. “Unfortunately, this level of planning and management (of significant AI developments) is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control.”

The letter argues that developments of this kind, which could replace jobs, flood information channels with propaganda and, in a certain sense, outcompete humanity, should not be left to the sole discretion of technology-company CEOs whom no one elected. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Therefore, the letter calls on all AI companies and labs to pause the training of AI systems more powerful than GPT-4 for at least six months. “This pause should be public and verifiable, and include all key actors.” And since there is little chance of that happening voluntarily, the letter adds that “if such a pause cannot be enacted quickly, governments should step in.” In a nice play on words, the letter ends with a call for an “AI summer” in which we reap the rewards of these systems, engineer them so that their impact is positive, and give society a chance to adapt and enjoy that summer, rather than rushing unprepared into a “fall”.

