What are the differences and similarities between human and artificial intelligence? And how do they work together optimally? In his book Smart, smarter, smartest, science journalist Bennie Mols describes why we should not fear AI. A prepublication.
Suppose an AI system has the sole purpose of making as many paperclips as possible. If that is its only goal, people are an obstacle: they can decide to sabotage the system, and then it would make fewer or even no paperclips. Moreover, the system could use human bodies as raw material to make even more paperclips. In short, the future this AI system aspires to is one in which humans are sacrificed for making paperclips.
The Swedish philosopher Nick Bostrom described this thought experiment in 2003. According to him, it illustrates that AI can pose an existential risk to humanity once it has become smart enough.
His thought experiment has sparked much discussion. Some consider the paperclip example completely unrealistic; others believe we should take the scenario seriously, however unlikely, because the consequences could be so great.
The American inventor and futurist Ray Kurzweil estimated in his 2005 book The Singularity is Near on the basis of the ever-increasing computing power of computers that around 2045 humans will be completely outshone by artificially intelligent machines. He calls that magical moment the Singularity. In 2014, physicist Stephen Hawking said of AI: “The development of artificial intelligence could spell the end of humanity.”
Necessary giant steps
How realistic are these kinds of ideas? From a philosophical point of view, it’s important to remember that there is no reason why AI could not one day become super-intelligent across all cognitive domains. Our brain proves that intelligence can arise from purely material information processing. But there is a big difference between theory and practice. In theory, world peace could break out tomorrow. In theory, humans could travel through the universe at 99 percent of the speed of light tomorrow. Both are equally unlikely.
Let’s see what giant steps AI would have to make before it would threaten human existence. First of all, humans must succeed in building human-like AI. That moment is still far away. There are countless cognitive skills at which humans are much better than machines and for which the field of AI does not yet have a solution. For example, AI is very bad at reasoning about cause and effect, understanding what someone is thinking and feeling, determining which information is relevant, and generalizing what it has learned in one situation to other situations. No one currently knows which route will lead to human-like AI the fastest, but let me outline one possible route, as described by cognitive scientist Gary Marcus and computer science professor Ernest Davis in their book Rebooting AI – Building AI we can trust.
They propose to first combine the two main streams in AI – machine learning and machine reasoning. That is already an incredibly difficult task, which has only recently begun. We would then have to supplement that combination with a mix of new AI tools that have yet to be developed. Like humans, machines will also need some innate skills: a basic understanding of time, space and causality. This allows them to develop an intuitive psychological and physical understanding. What’s on other people’s minds? How do objects behave under the influence of gravity?
In addition, machines must learn to reason with uncertain information. Machines must then connect all these skills to perceiving the environment, manipulating it and, of course, to language. This enables them to build cognitive models of the world, just as humans do. With such a total package, machines could learn many different skills in a flexible, human-like way.
Lots of small chances
Even if we succeeded in building human-like AI, that AI would then have to understand how it itself works, which is necessary in order to improve itself. After that, it would have to succeed in making itself super-intelligent, and also develop the will to harm people. Then the super-intelligent AI would have to secure its own energy supply. And finally, humans would have to be unable to simply pull the plug on it.
Each of these steps has only a very small chance of happening. The total chance that AI would threaten humans is the product of all those small chances, one for each step, and that product is smaller still.
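The multiplication argument can be made concrete with a small sketch. The probabilities below are made up purely for illustration; the book does not assign numbers to the steps, only the chain-of-steps reasoning itself.

```python
from math import prod

# Hypothetical, made-up probabilities for each step in the chain.
# The real values are unknown; these only illustrate the arithmetic.
step_probabilities = {
    "humans build human-like AI": 0.1,
    "AI understands its own workings": 0.1,
    "AI makes itself super-intelligent": 0.1,
    "AI develops the will to harm people": 0.1,
    "AI secures its own energy supply": 0.1,
    "humans cannot pull the plug": 0.1,
}

# If the steps are treated as independent, the chance of the whole
# chain occurring is the product of the individual chances.
total = prod(step_probabilities.values())
print(f"Total chance: {total:.0e}")
```

Even if every step were given a generous 10 percent chance, six steps in a row leave a one-in-a-million total: each extra required step shrinks the product by another factor.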
So super-intelligent AI posing an existential threat to humans is possible, but extremely unlikely. And if super-intelligent AI does emerge, we will see it coming long in advance, precisely because it still requires so many fundamental breakthroughs. We can take measures well in time: in the AI technology itself, in how we deal with it, in laws and regulations, and in international treaties (similar to the treaties that prohibit the use of biological and chemical weapons or specific technologies such as laser weapons).
The good news is that AI doesn’t have to be super intelligent, doesn’t have to look like humans, and doesn’t have to have consciousness. AI systems simply have to be smart, useful assistants that can solve certain problems better and faster than humans. They are getting better at this. We are facing major societal challenges – geopolitical tensions, climate change, energy transition, digital transition, migration, globalization – and we desperately need AI to solve them.
Bennie Mols: Smart, smarter, smartest – How artificial intelligence gives people a turbo boost. Veen Media, €9.99.
Bennie Mols (Swalmen, 2 June 1969) is a science journalist, author and speaker, specialized in artificial intelligence and robotics. He has published several popular science books (among others Hallo robot and Turings Tango) and regularly speaks on radio and TV. Mols was trained as a physicist and philosopher and obtained a PhD in physics. The book presentation of Smart, smarter, smartest is on Wednesday 31 May at 4.30 pm at Scheltema on the Rokin in Amsterdam. Entrance is free.