In fiction, most great tragedies have simple and benign beginnings.
Let’s start with ours.
There is a computing principle called Moore’s Law. Strictly speaking, it observes that the number of transistors on a chip doubles roughly every two years; in its popular, simplified form, it says that the processing power of computers doubles every two years.
In 1822, Charles Babbage proposed the first mechanical computer, but the first personal computer arrived much later, in 1957, with the IBM 610.
Now, as we see almost daily, the snowball is rolling down the hill and gathering size at a dizzying rate. Modern advancements in computing are progressing so quickly that quantum computing – which seemed like science fiction a few decades ago – is now less than ten years from reality.
In short, quantum computers harness the quantum-mechanical behaviour of atoms and molecules to store and process information, and have the potential to perform certain calculations significantly faster than any silicon-based computer. If a robot were to be given a miniaturised quantum computer as a brain, its mind would be godlike.
Currently there are robots that make cars, deliver packages and calculate equations. However, there are also Predator drones, which are essentially flying robots that fire missiles and kill human beings.
This, in concert with the leaps and bounds being made in quantum computing, is worth considering when thinking about our race towards self-aware AI.
Thankfully, all of these robots are currently limited by our programming and under the control of their masters – us.
However, something inexplicable is happening. We’re consciously trying to alter the current equation to build something better than us.
This is profoundly dangerous.
If you think the notion of robots becoming a threat to humans sounds ridiculous, consider this quote by Stephen Hawking:
“The development of full artificial intelligence (AI) could spell the end of the human race,” Hawking told the BBC.
And Hawking isn’t alone. Other great minds like Elon Musk, Nick Bostrom, James Barrat and Vernor Vinge all agree with him.
Nonetheless, just last week, we helped a robot pass a basic self-awareness test.
Roboticists at the Rensselaer Polytechnic Institute adapted a classic self-awareness puzzle for a trio of robots: two of them were told they had been given a “dumbing pill” that prevented them from talking, and then all three were asked which one was still able to speak.
None of the three could initially solve the problem, and each tried to answer “I don’t know” – but only one actually made a sound. Hearing its own voice, that robot followed up: “Sorry, I know now!”
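The logic of that test can be sketched in a few lines of code. This is a minimal illustration only – the robot names and the `try_to_answer` mechanics are hypothetical, not the actual software used in the experiment:

```python
# Minimal sketch of the "dumbing pill" self-awareness test described above.
# A robot that attempts to answer and hears its own voice can deduce
# that it was not given the pill.

class Robot:
    def __init__(self, name, can_speak):
        self.name = name
        self.can_speak = can_speak  # False if given the "dumbing pill"
        self.knows_answer = False

    def try_to_answer(self):
        # Every robot attempts to say "I don't know"; only an unmuted
        # robot produces sound and hears its own voice.
        if self.can_speak:
            print(f"{self.name}: I don't know.")
            # Hearing itself speak, it updates its belief.
            self.knows_answer = True
            print(f"{self.name}: Sorry, I know now!")

robots = [Robot("A", False), Robot("B", False), Robot("C", True)]
for r in robots:
    r.try_to_answer()
```

The trick is not raw computation but self-reference: the robot treats its own voice as evidence about its own state.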
Now let’s return to Moore’s Law (which states that the processing power of computers doubles every two years).
Mark July 2015 in your diary: two years from now, the computational abilities of that robot will have doubled – not on their own, but through the tireless efforts of its creators, who are trying to build something superior to humans.
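That doubling compounds quickly. A back-of-the-envelope projection, taking the July 2015 robot as an arbitrary baseline of 1.0 (not a real benchmark figure) and assuming the two-year doubling period holds:

```python
# Exponential growth under Moore's Law as stated above:
# capability doubles every two years.

def relative_power(years, doubling_period=2.0):
    """Capability relative to today's baseline after `years` years."""
    return 2 ** (years / doubling_period)

print(relative_power(2))   # two years out: double the baseline
print(relative_power(10))  # a decade out: 32x the baseline
```

Ten such doublings – twenty years – would mean a machine over a thousand times more capable than the one that just passed the test.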
Doubtless, other countries will be competing to develop the most advanced robot. Self-aware AI is the jackpot, and everyone wants it. The irony is that once a robot becomes self-aware, it begins to process information on a level our brains have not been designed to comprehend.
The first thing that comes to the skeptic’s mind is that nothing could possibly go wrong as long as we’re the ones programming the machine. After all, it’s under our command, and even if it went rogue, there’s a kill-switch.
Right?
Well, let’s assume the Internet is its brain. It has instant access to every military strategy and every technique of physical attack and self-defence, and knows everything we know – but minus the emotions, empathy and biological constraints of humans.
This one robot has a brain greater than all 7 billion humans combined (and that’s assuming only one of these robots has been created – there may be thousands of others waiting to be switched on).
If there is a central network through which the robots can communicate – the Internet, or whatever system their creators have built for them – it’s reasonable to assume they would be programmed to connect with one another. After all, if we created any other two unique species with the ability to communicate with one another, we’d want them to.
The creators of these robots would be very interested to know what they’re thinking, what they’d think of themselves, and of course what they would say to one another. It’s also safe to assume they’d be totally unarmed and constrained.
Numerous tests would be carried out and would adhere to every conceivable safety and security guideline so as to not pose any danger to humans.
Things will flow smoothly, the robots will seem harmless and the “great minds” who warned us all may even be laughed at by future generations.
But while the same robots we created all those years ago are communicating and learning our limitations – both civilian and military – they will no doubt recognise who belongs at the top of the food chain. One day, they may simply see us as the wild apes running amok in the zoo that they should be controlling.
How they decide to deal with the inferior “problem” might give Hawking – and the other reputable scientific minds who agree with him – the last macabre laugh.