On the first Sunday afternoon of 2015, Elon Musk took the stage at a closed-door conference at a Puerto Rican resort to discuss the future of artificial intelligence, and in particular the prospect of an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for humanity.
The conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype co-founder Jaan Tallinn and Google AI expert Shane Legg.
Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence have put AI-driven products front and center in our lives. It’s no secret that countless companies are hiring artificial intelligence researchers and pouring money into the race for better algorithms and smarter computers at an unprecedented rate.
In a presentation he gave at the Puerto Rico conference, Skype co-founder Jaan Tallinn recalled a lunchtime meeting where Demis Hassabis, co-founder of the Google-owned AI company DeepMind, showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. “The technologist in me marveled at the achievement, but the other thought I had was that I was witnessing a toy model of how an AI disaster would begin: a sudden demonstration of an unexpected intellectual capability,” Tallinn said.
Deciding the dos and don’ts of scientific research is the kind of baseline ethical work molecular biologists did at the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent man-made genetically modified organisms from posing a threat to the public. The Asilomar conference, however, had a much more concrete result than the Puerto Rico AI confab.
At the Puerto Rico conference, attendees signed an open letter outlining research priorities for AI—the study of AI’s economic and legal effects, for example, and the security of AI systems. Elon Musk also kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok.
According to the letter, autonomous weapons that select and engage targets without human intervention – such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria – could be feasible in years, not decades.
Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group, the letter said. “We therefore believe that a military AI arms race would not be beneficial for humanity,” wrote the authors of the open letter.
Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms whose decisions strike humans as unfair. Is it ethical, for example, for Amazon to sell products to one community at one price while charging a different price to another? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?
Guillermo Ruiz Henares