We have been terrified of artificial intelligence for thousands of years

by time news

2023-11-19 20:05:13

In the last decade, artificial intelligence (AI) has become the most talked-about technology since the spread of the Internet. Its development is fueled by huge investments from tech giants, which spend billions of dollars in the hope of getting ahead of their competitors, or rather, in their fear that someone will overtake them in a field that could completely transform our world. Trillions of dollars, not mere billions, ride on what these developments ultimately yield. In any case, progress has recently accelerated to such an extent that some experts say a general artificial intelligence, one that is "intelligent" even in the everyday sense of the word, may soon be created.

Not a new idea

In this heightened mood, it is easy to forget that AI is not a new field at all; indeed, the idea of artificially created intelligence has roots going back to antiquity and has appeared in some form throughout the entire written history of mankind. In Greek mythology, Hephaestus, the blacksmith of the gods, creates metal automatons to serve the Olympians. Jewish folklore tells of the golem, a figure kneaded out of mud which (or who) was brought to life by ancient rituals. Mary Shelley's 1818 novel Frankenstein, a cornerstone of science fiction literature, is entirely about the creation of artificial life. Fritz Lang's genre-creating 1927 film Metropolis, with its Maschinenmensch, a robot in human form sowing murderous chaos, inspired a whole series of horror and fantasy works.

Universal / Collection ChristopheL / AFP – A scene from the 1931 film Frankenstein, directed by James Whale.

Nevertheless, the actual implementation of AI belonged to the realm of science fiction until the appearance of the first digital computers after the end of World War II. The key character of the story is the brilliant British mathematician Alan Turing, best known for his role in breaking the Enigma cipher, thought to be unbreakable, with the Bletchley Park team of codebreakers.

In 1948, Turing took a job at the University of Manchester to work on Britain's first stored-program computer, the "Manchester Baby". The advent of computers heightened interest in "electronic brains", which seemed capable of impressive intellectual feats. Turing grew increasingly annoyed by dogmatic assertions that intelligent machines could not exist, and he set out to settle the debate with an article published in the journal Mind in 1950. He proposed a method, which he called the "imitation game" but which is now known as the Turing test, for deciding whether a machine can be considered intelligent: a human interviewer converses, over a text channel, with either a machine or a human, without knowing which one he is dealing with. Turing argued that

if a machine cannot be reliably distinguished from a human in such a test, then that machine should be considered intelligent.
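In modern programming terms, the setup can be sketched roughly as follows. This is a toy illustration in Python, not anything from Turing's paper; all the names, stub participants, and the single question are invented for the example.

```python
import random

def run_trials(judge_guess, machine_reply, human_reply, questions, n_trials=1000):
    """Play the imitation game n_trials times and return the judge's accuracy."""
    correct = 0
    for _ in range(n_trials):
        # The judge is randomly paired with either the machine or the human.
        respond, truth = random.choice([(machine_reply, "machine"), (human_reply, "human")])
        transcript = [(q, respond(q)) for q in questions]
        if judge_guess(transcript) == truth:
            correct += 1
    return correct / n_trials  # accuracy near 0.5 = machine is indistinguishable

# Toy stand-ins: both respondents answer identically, so the judge can only guess.
machine = lambda q: "I would rather not say."
human = lambda q: "I would rather not say."
judge = lambda transcript: random.choice(["machine", "human"])

print(run_trials(judge, machine, human, ["Are you a machine?"]))  # prints ~0.5
```

The key design point, already present in Turing's framing, is that the verdict is behavioral and statistical: nothing about the machine's inner workings is examined, only whether the judge can beat chance.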

Pictures From History / Universal Images Group / Getty Images – Alan Turing.

At the same time, across the ocean, the American scientist John McCarthy also began to explore the possibility of intelligent machines. In 1955, while raising funds for a scientific conference to be held the following year, he coined the term "artificial intelligence". McCarthy had high hopes for the event: he believed that by bringing together researchers from all the relevant disciplines, decisive progress toward artificial intelligence could be made within weeks. The conference itself did not get far, but the scientists McCarthy brought together founded a new field of science, and today's AI developers can be considered their scientific descendants.

Machine learning

At the end of the 1950s, there were only a handful of digital computers in the world. Even so, McCarthy and his colleagues had already created programs that could learn, solve problems, crack logic puzzles, and play games. Progress was expected to be rapid, especially since computers were quickly becoming cheaper and faster. But the momentum waned, and by the 1970s research funders had grown frustrated with the overly optimistic predictions about the pace of development. Grants were cut, and AI acquired a stigma. In the 1980s, waves of excited optimism swept the field again in the wake of new approaches, but development once more hit a wall, and AI researchers were again accused of inflating expectations of a breakthrough.

The situation began to change in the 21st century with a new generation of AI systems capable of deep learning and built on artificial neural networks; at the level of the underlying idea, though, there is nothing new here either. The animal and human nervous system consists of a huge mass of interconnected neurons. The human brain, for example, contains tens of billions of neurons, each with roughly 7,000 connections to others on average. Each neuron recognizes simple patterns in the signals arriving over its connections and emits electrochemical signals of its own in response. Human intelligence is somehow born out of these simple connections and interactions. In the 1940s, the American researchers Warren McCulloch and Walter Pitts recognized that such a system could be modeled with electronic circuits, and with that the research and design of artificial neural networks was born.
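To get a feel for how little machinery the original idea requires, here is a minimal sketch of a McCulloch-Pitts-style artificial neuron in Python; the weights and threshold are invented for the illustration, and real networks wire thousands of such units together.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of binary inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2, the unit behaves as a logical AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron((a, b), (1, 1), 2))
```

McCulloch and Pitts's insight was that units this simple, wired together in large enough networks, can in principle compute any logical function.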

Although scientists worked continuously on these ideas after McCulloch and Pitts, further breakthroughs were needed before such systems could be realized. One of them was delivered by Geoffrey Hinton and his colleagues in the 1980s. Their work led to a sudden resurgence of interest in the field, but the excitement quickly died down when it became clear that the computing technology of the day could not support sufficiently powerful neural networks. The new century changed the situation: today we live in an age of huge, cheap computing and data-storage capacity, which makes it possible to run the deep learning networks behind today's AI.

Neural networks also lie behind the application that has received the most attention recently: ChatGPT, which OpenAI made available in November 2022. ChatGPT and the neural networks working behind it, comprising around a trillion units, immediately gained enormous popularity, and today hundreds of millions of people use its services every day. One secret of its success may be that it feels exactly like the AI we could previously only see in cinemas.

It will not crawl out of the computer

Using ChatGPT is like having a conversation with someone who seems smart and knowledgeable. Yet what its neural networks do is actually quite simple. When we type something to it, ChatGPT simply tries to work out what text it should display next. To do this, it draws on a huge amount of data, including pretty much all the textual content ever published on the Internet. Vast neural networks and massive amounts of data allow the program to pass the Turing test for all practical purposes.
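For a sense of what "working out what text to display" means, here is a deliberately tiny sketch of next-word prediction in Python: a toy bigram model, vastly simpler than the transformer networks behind ChatGPT, but resting on the same basic idea of predicting a plausible continuation from statistics gathered over training text. The function names and the miniature corpus are invented for the example.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for every word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, word, length=10):
    """Repeatedly sample a likely next word, as a language model does."""
    out = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))  # e.g. "the cat sat on the mat and the cat slept"
```

Scaled up from word-pair counts on one sentence to deep networks trained on much of the Internet, this predict-the-continuation scheme is what produces ChatGPT's convincingly human-sounding answers.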


The success of ChatGPT has also stoked a fundamental fear: what if we create something we cannot control? It is the nightmare of Frankenstein, Metropolis and Terminator. Given ChatGPT's genuinely impressive capabilities, it is easy to believe this is a real and imminent possibility, but for all its apparent knowledge, one should not imagine much real intelligence behind ChatGPT's answers. There is no mind here: the program only displays text that seems relevant. It does not wonder why we ask it for pancake recipes or the latest results of our favorite football team; in fact, it does not think about anything. It has no beliefs or desires, and no purpose beyond displaying text selected according to certain criteria.

It is not going to crawl out of the computer and take over the world.

Of course, this does not mean that the use of AI carries no dangers. One of the most pressing right now is that ChatGPT and similar applications can be used to produce disinformation on an industrial scale, for example by those who want to influence the outcome of an election. We also do not know to what extent these systems absorb our own human biases and errors from the data they were trained on. The program does nothing more than try to work out what we would write in response to a given question, so the large-scale deployment of these technologies effectively holds a huge mirror up to humanity. And we cannot be sure we will like what we see in it.

The article was translated by Daniel Lithuania.

