 

Terminating the Terminator


By David G. Young
 

Washington, DC, April 12, 2016 --  

Scary-looking robots are the last thing Google needs to further its artificial intelligence ambitions.

Listening to technology icons talk about the future of artificial intelligence, you'd be forgiven for wanting to run and hide. "The future is scary and very bad for people," Apple co-founder Steve Wozniak told the Australian Financial Review in an interview last year.1 He and a few other members of the elite digerati have sounded the alarm about the dangers of AI research. It's not just about computers displacing jobs; it's about them taking control.

Such fears have been a staple of science fiction for decades. Intelligent robotic "Cylons" turned on their masters and fought to exterminate the human race in the campy 1978 TV series Battlestar Galactica, remade in 2003. In the 1984 movie "The Terminator," a defense system called "Skynet" became self-aware, then waged war on humanity with nukes and a robotic army.

The dangers of such autonomous, intelligent robotic weapons led Wozniak to join SpaceX/Tesla chief Elon Musk, physicist Stephen Hawking and over a thousand others in urging a ban on such weapons in an open letter last summer.2 The success of killer drones (under the control of humans) has heightened fears of weapons that can operate without a human in control.

Similar fears are behind Google's decision to ditch robot maker Boston Dynamics.3 The Google-owned company has built a number of robots for the Defense Department, including a four-legged "Big Dog" pack robot. Its more recent model, a two-legged upright walker called Atlas, is eerily reminiscent of the evil robots of science fiction. A video of the robot was released on YouTube in February4, and while there is nothing evil about the technology depicted, its human resemblance squarely hits psychological and emotional hot buttons.

For Google, these inventions are a distraction from its more important ambitions. Just a few weeks after the appearance of the Boston Dynamics video, one of Google's AI computers defeated a top-ranked human player at the Asian board game Go for the first time.5 But this public relations coup was offset by the creepy videos coming out of Boston. Will regulators be willing to give Google a pass on its intelligent self-driving cars when the company is creating scary Cylon-like robots?

Selling Boston Dynamics is an easy decision. The robotics company is not profitable and recently lost a potential contract for its four-legged pack robot because its gasoline engine is noisy enough to give away its position to the enemy. Why fund a company with few revenue prospects that is proving to be a public relations disaster?

But while the Boston Dynamics robots look scary, they are not what scares prominent critics of AI. When it comes to robots that might harm humans, autonomous intelligent weapons are more likely to look like today's drones than a two-legged Cylon. Existential fears center less on robots than on higher-level control systems for power plants, water purification, health care delivery and the like. An out-of-control system might decide that some or all humans are dispensable obstacles.

Recent high-profile advancements in machine learning have helped stoke these fears. But it's easy to overstate the risk, especially given the limited state of the technology. AI systems remain highly specialized computer programs, narrowly confined to the tasks for which they are designed. Nothing in current technology approaches a human's flexible ability to adapt to fundamentally different situations or to evolve new goals and entirely new abilities.

One of the key gauges of the risk of AI is how long it will take before a machine surpasses human intelligence. To critics, that day is ominously close. But past predictions of rapid advances in the technology have not come to pass. Alan Turing, an early researcher, predicted in 1950 that by 2000 a machine would often be indistinguishable from a human in conversation, a benchmark since known as the Turing Test.6 Sixty-five years on, that hope has yet to be realized. Spectacularly wrong predictions of rapid progress abound. Back in 1957, a big push by top researchers to teach computers language and abstract thinking in a matter of months went nowhere.7

What's different today is that billions of Silicon Valley dollars, made selling phones and online advertising, are being funneled into research, and that may change the rate of progress. And if you accept scientists' belief that the human brain is merely a biological machine, then creating an artificial one must be possible. But nobody knows whether something roughly approximating the human brain will take 10 years, 100 years or more.

The fact that true artificial intelligence always seems to be the technology of the future probably gives little solace to wealthy fear mongers like Wozniak and Musk. In the end, human fears are less about technological progress and more about psychological and emotional triggers. Given this truth, Google can't ditch its ownership of a scary robot manufacturer quickly enough.


Notes:

1. Australian Financial Review, Apple co-founder Steve Wozniak on the Apple Watch, Electric Cars and the Surpassing of Humanity, March 23, 2015

2. The Guardian, Musk, Wozniak and Hawking Urge Ban on Warfare AI and Autonomous Weapons, July 27, 2015

3. Bloomberg, Google Puts Boston Dynamics Up for Sale in Robotics Retreat, March 17, 2016

4. Boston Dynamics, Atlas, The Next Generation, February 23, 2016

5. Fortune, Google's Go Computer Beats Top-Ranked Human, March 12, 2016

6. Wired, Predicting the Future of Artificial Intelligence Has Always Been a Fool's Game, March 30, 2013

7. Ibid.