Bow Down, Humans! Could an Evil AI Like Ultron Become Our Master?
What we create may someday destroy us. It’s been a pervasive fear in human consciousness since well before the dawn of film, appearing in Ovid’s narrative of Pygmalion and in the late 16th-century legend of the Golem of Prague, a creature created to protect the Jewish ghetto that, in some versions of the tale, eventually went on a murderous rampage.
Avengers: Age of Ultron is the latest in a long list of movies built around a similar idea: man builds smart thing, thing gets too smart, thing rejects human authority and tries to destroy mankind. In this case, Tony Stark and Bruce Banner have the best of intentions, but like all good intentions in films about artificial intelligence (save Her), they end up paving the road to hell. Once Ultron is awoken, he’s not too keen on his human creators, the entire Avengers crew, or humans in general.
Or look at the Terminator films: Skynet had been self-aware for only a few hours before deciding humanity was a threat and inciting nuclear Armageddon. Then of course there’s HAL 9000, who locks Dave out of the pod bay in 2001: A Space Odyssey after “it” learns that the astronauts plan to turn “it” off.
But how scared should we actually be of a real-life Ultron?
James Barrat, documentarian and author of Our Final Invention: Artificial Intelligence and the End of the Human Era, believes we should indeed be scared, and that the abundance of fictional representations of AI gone awry may have desensitized viewers to how important the issue really is.
“In Hollywood movies you can bet that humans always win, but there’s no guarantee that that’s going to happen in reality,” Barrat said.
Stephen Hawking, Bill Gates and Elon Musk have all expressed concern about the future of artificial intelligence, he noted.
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent,” Gates said in a Reddit AMA in January. “That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
The U.S. government is rapidly developing more sophisticated drones that can kill without a human in the loop, Barrat said. Highly complex algorithms operating beyond human capacity are already active on Wall Street. The high-frequency trading algorithms of Knight Capital Group, formerly the largest trader in U.S. equities, produced a $460 million trading error in August 2012.
These sorts of errors “won’t be okay when complex algorithms are controlling missile defense, infrastructure and water systems,” Barrat said.
In the short term we should be concerned about who controls the AI, and in the long term about whether or not the AI can be controlled, he said.
“We assume that superintelligence will be benign, harmless, grateful for being created, but research is going into trying to model how it will behave, and it likely will have survival drive and won’t want to be unplugged,” he said. “It will be very powerful in pursuing its goals and being creative,” and as it will doubtless exist in the cloud, there won’t be a plug to pull.
Others do not believe we should be concerned.
“I don’t think that AI will eventually turn evil,” said Hannaneh Hajishirzi, an artificial intelligence researcher at the University of Washington, who estimates that we won’t have AI capable of acting against its designers for at least 25 years.
“This is such a far-off and distant threat, if it is a threat at all, that there are many other things to worry about first,” said Daniel S. Weld, professor of Computer Science & Engineering and Entrepreneurial Faculty Fellow, also at the University of Washington. “Cybercrime and privacy issues are more immediate threats.”
“We have seen pretty incredible strides with deep learning and self-driving cars, facial recognition – all these kinds of things have made incredible strides,” Weld said. “The way I like to think about it, computers in general are like idiot savants – much better than people at some things, like arithmetic, multiplication, data indexing, logical reasoning.”
But 99.9 percent of what people can do involves nuanced, subtle judgments and remains beyond what computers can do, he said.
“The idea that an AI system is going to come up with free will, motivation and try to hurt us is pretty far-fetched,” he said. “I’d be much more concerned about biotechnology run amok or even an asteroid hitting Earth.”
AI “doomsayers” often don’t understand the limits of the technology, and conceptualize AI as a single integrated capability, the same way we think about our own brains, he said.
Nevertheless, he and others are working on how to minimize what is already a very small risk. Translating Isaac Asimov’s Three Laws of Robotics into actual computational ethics, for instance, has proved challenging.
In Interstellar, TARS and KIPP are programmed to demonstrate human emotions and senses of humor, but never forget that they are tools of humanity and ultimately expendable. In Star Wars, C-3PO and R2-D2 also have personalities, but when it comes down to it they also have a survival urge.
So far in reality, we haven’t figured out how to program ethics, Weld said. For now he’s focused on securing and encrypting infrastructure data, which is currently a big weakness.
Machine superintelligence will be the most lucrative technology ever created and there’s no sense trying to stop its development, Barrat said. But it may be possible to slow down and build in more safeguards as the technology is developed.
He points to the safe-AI scaffolding strategy advocated by scientist Steve Omohundro, who believes we should proceed in small increments, starting with a rudimentary AI of proven predictability, then using it to build the next level up, and so on.
Meanwhile, organizations like the Machine Intelligence Research Institute and the Future of Life Institute are working to ensure that smarter-than-human artificial intelligence has a positive impact.
“We have a window now between today and superintelligence to get it right,” Barrat said.
Though he’s interested to see Age of Ultron, Barrat admits his concerns have “kind of ruined that genre for me.” He did enjoy Her, but the ending (spoiler alert), in which the superintelligent operating systems drift away to be with one another, was unrealistic, he said: actual superintelligence will be goal-seeking and resource-acquiring.
Hajishirzi agreed that the ending of Her was unrealistic. As for the ending of Age of Ultron, supposing the Avengers eventually win (Marvel has already announced its slate of films through 2020, and it appears our heroes carry on), one can only hope that, should it ever come down to a real people-vs.-AI battle, we triumph as well.
Featured photo: Ultron is one piece of mean, malevolent AI in Marvel’s Avengers: Age of Ultron. Photo: Film Frame – ©Marvel 2015