Future of Life Institute’s recent milestones in AI safety

In January, many months of effort by FLI’s founders and volunteers finally came to fruition: the Puerto Rico conference, the open letter, and the grants program announcement took place in rapid succession. By the accounts of many attendees, the conference was a resounding success; people were impressed with the quality of the ideas presented and with how the event was organized. There were opportunities for attendees to engage with each other at different levels of structure, from talks to panels to beach breakout groups. The relaxed Caribbean atmosphere seemed to put everyone at ease, and many candid, cooperative conversations took place between attendees with rather different backgrounds and views.

It was fascinating to watch many of the AI researchers encounter various AI safety ideas for the first time. Stuart Russell’s argument that variables not accounted for by the objective function tend to be pushed to extreme values, Nick Bostrom’s presentation on takeoff speeds and singleton/multipolar scenarios, and other key ideas were received quite well. One attending researcher summed it up along these lines: “It is so easy to obsess about the next building block towards general AI that we often forget to ask ourselves a key question – what happens when we succeed?”

A week after the conference, the open letter outlining the research priorities went public. The letter and the accompanying research document were the product of many months of hard work and careful thought by Daniel Dewey, Max Tegmark, Stuart Russell, and others. It was worded in optimistic and positive terms – the most negative word in the whole thing was “pitfalls”. Nevertheless, the media’s sensationalist lens twisted the message into headlines ranging from “experts pledge to rein in AI research” to “warn of a robot uprising” and “protect mankind from machines”, invariably accompanied by a Terminator image or a Skynet reference. When the grants program was announced soon afterwards, the headlines became “Elon Musk donates…” to “keep killer robots at bay”, “keep AI from turning evil”, you name it. Those media portrayals shared a key misconception about the underlying concerns: that AI has to be “malevolent” to be dangerous. In fact, the most likely problematic scenario in our minds is a misspecified general AI system with beneficial or neutral objectives. While a few reasonable journalists actually bothered to get in touch with FLI and discuss the ideas behind our efforts, most of the media herd stampeded ahead under the alarmist Terminator banner.

The open letter represents a joint effort by the AI research community to step up, as responsible scientists, to the challenge of advancing AI safety. My main worry about this publicity angle is that it might be many people’s first major exposure to AI safety concerns – including AI researchers, who would understandably feel attacked and misunderstood given the media’s framing of their work. It would be really unfortunate to see some researchers turn away from the cause of keeping AI beneficial and safe without ever engaging with the actual concerns and arguments.

I am sometimes asked by reporters whether there has been too much emphasis on superintelligence concerns, “distracting” from more immediate AI impacts like the automation of jobs and autonomous weapons. While the media hype is certainly not helpful towards making progress on either the near-term or the long-term concerns, the question rests on a pervasive false dichotomy: both of these domains are in dire need of more extensive research. The near-term economic and legal issues are already on the horizon, while the design and forecasting of general AI is a complex interdisciplinary research challenge that will likely take decades, so it is of utmost importance to begin the work as soon as possible.

The grants program on AI safety, fueled by Elon Musk’s generous donation, is now well under way, with initial proposals due March 1. The authors of the best initial proposals will be invited to submit more detailed full proposals by May 17. I hope that our program will help kickstart the emerging subfield of AI safety, stimulate open discussion of these ideas among AI experts, and broaden the community of researchers working on these important and difficult questions. Stuart Russell put it well in his talk at the Puerto Rico conference: “Solving this problem should be an intrinsic part of the field, just as containment is a part of fusion research. It isn’t ‘Ethics of AI’, it’s common sense!”


