They ask for a pause in the development of AI more powerful than GPT-4 due to risks to society – 2024-03-12 13:20:26

by times news cr


Text: Hugo León

An open letter signed by more than 1,100 figures from business, science and technology, including experts in artificial intelligence (AI), urges a pause of at least six months in the development of systems more powerful than OpenAI's recently released GPT-4, citing potential risks to humanity and society.

To cite a few of the names behind the text, the signatories include Elon Musk, founder of SpaceX and CEO of Twitter and Tesla; Steve Wozniak, co-founder of Apple; Evan Sharp, co-founder of Pinterest; Jaan Tallinn, co-founder of Skype; as well as professors, researchers and founders of AI development groups, entrepreneurs and Meta engineers, among others.

It is striking that, so far, no one from OpenAI, the team behind the GPT-4 language model, has signed the letter. Nor has anyone from Anthropic, a company that claims to have split from OpenAI precisely to build a safer AI chatbot.

However, since this is an open letter, specialists from those two groups could still sign it at some point.

The text cites research, acknowledged by the leading AI laboratories themselves, showing that systems with intelligence competitive with humans can pose profound risks to society and represent a profound change in the history of life on Earth.

The letter argues that, given these risks, appropriate resources must be planned and managed.

“Unfortunately, this level of planning and management is not happening, even though in recent months AI labs have entered into an out-of-control race to develop and deploy increasingly powerful digital minds.”

Why pause the development of AI?

The letter points out that not even their creators can reliably understand, predict or control these AIs, and notes that such systems are becoming competitive with humans at general tasks.

The letter was issued by the non-profit Future of Life Institute and has attracted attention around the world, at a moment when millions of people across the planet are captivated by the possibilities of artificial intelligence.

But the signatories make it clear that the fears of the millions who view AI with suspicion are not mere fantasies: "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

As for the pause itself, the letter asks that it be used to develop and implement a set of shared safety protocols for the design and development of advanced AI, overseen by independent experts, guaranteeing that AI systems are safe beyond reasonable doubt.

The serious questions the letter asks

The text is pointed and somewhat forward-looking about what could happen if the development of AI is not regulated.

For example, it poses several questions, such as: "Should we let machines flood our information channels with propaganda and untruth?" and "Should we automate away all the jobs, including the fulfilling ones?"

It also asks whether "we should develop non-human minds that might eventually outnumber us, outsmart us, render us obsolete, and replace us."

Last but not least, it asks: "Should we risk loss of control of our civilization?"
