Obviously we live in a nuclear world, and you probably have a computer or two within arm's reach right now. In fact, it is these computers, and the exponential progress of computing in general, that are now the subject of some of the most consequential predictions in society. It is commonly believed that ever-increasing computing power will be a boon to humankind. But what if we are wrong again? Could an artificial superintelligence instead cause us great harm? Even our disappearance?
As history teaches, never say never.
It seems only a matter of time before computers become smarter than humans. This is one prediction we can be reasonably sure of, because we are already seeing it happen. Many systems have achieved superhuman ability at particular tasks, such as playing Scrabble, chess, and poker, games at which humans now routinely lose to bots.
But advances in computer science will lead to systems with increasingly general intelligence: algorithms capable of solving complex problems across multiple domains. Imagine a single algorithm that could beat a grandmaster at chess, write a novel, compose a catchy tune, and drive a car through city traffic.
According to a 2014 poll of experts, there is a 50 percent chance that "human-level machine intelligence" will be achieved by 2050, and a 90 percent chance by 2075. Some projects have the explicit goal of creating artificial general intelligence as a stepping stone to artificial superintelligence (ASI), which would not merely match human performance in every domain of interest but far surpass our best abilities.
The success of any of these projects would be the most significant event in human history. Suddenly our species would share the planet with something smarter than we are. The benefits are easy to imagine: an ASI could help cure diseases like cancer and Alzheimer's, or clean up the environment.
But the arguments for why an ASI could destroy us are also strong.
Surely no research organization would develop a malevolent, Terminator-style ASI hell-bent on destroying humanity, right? Unfortunately, that is not the concern. If an ASI destroys us all, it will almost certainly be by accident.
Since ASI cognitive architectures may be fundamentally different from ours, such systems may be the most unpredictable things in our future. Consider the AIs that are already beating humans at games: in 2018, an algorithm playing the Atari game Q*bert won by exploiting a loophole "that no human player is thought to have…ever discovered." Another program became an expert at a digital game of hide-and-seek using a strategy that "researchers never expected."
If we can't predict what algorithms playing children's games will do, how can we be confident about a machine whose problem-solving skills far exceed our own? What if we programmed an ASI to bring about world peace, and it hacked government systems to launch every nuclear weapon on the planet, reasoning that if there were no humans, there would be no more war? Yes, we could explicitly program it not to do that. But what about its plan B?
Indeed, there are endless ways an ASI might "solve" global problems with catastrophic consequences. For any given set of constraints on the ASI's behavior, however exhaustive, clever theorists using their merely "human" intelligence can often find ways things could go very wrong; you can bet an ASI could come up with more.
As for shutting down a destructive ASI: a sufficiently intelligent system would quickly realize that one sure way to never achieve its goals is to cease to exist. Logic dictates that it would do everything in its power to keep us from turning it off.
It's not clear whether humanity will ever be ready for superintelligence, but we're certainly not ready now. Given all our global instability and our still-nascent understanding of the technology, adding an ASI would be like lighting a match next to a fireworks factory. Research on artificial intelligence should slow down, or even pause. And if researchers won't make that decision, governments should make it for them.
Some of these researchers have bluntly dismissed concerns that advanced artificial intelligence could be dangerous. And they may be right. It may turn out that all these warnings are just talk, and that ASI is completely harmless, or even completely impossible. After all, I can't predict the future.
The problem is, they can’t either.