AI Safety Highlights from NIPS 2016


This year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.

I was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:

Inverse Reinforcement Learning

Cooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russell, Abbeel, and Dragan (main conference). This paper addresses the value alignment problem by teaching the artificial agent about the human’s reward function, using instructive demonstrations rather than optimal demonstrations like in classical IRL (e.g. showing the robot how to make coffee vs having it observe coffee being made). (3-minute video)


Generalizing Skills with Semi-Supervised Reinforcement Learning by Finn, Yu, Fu, Abbeel, and Levine (Deep RL workshop). This work addresses the scalable oversight problem by proposing the first tractable algorithm for semi-supervised RL. This allows artificial agents to robustly learn reward functions from limited human feedback. The algorithm uses an IRL-like approach to infer the reward function, using the agent’s own prior experiences in the supervised setting as an expert demonstration.

Towards Interactive Inverse Reinforcement Learning by Armstrong and Leike (Reliable ML workshop). This paper studies the incentives of an agent that is trying to learn about the reward function while simultaneously maximizing the reward. The authors discuss some ways to reduce the agent’s incentive to manipulate the reward learning process.

Should Robots Have Off Switches? by Milli, Hadfield-Menell, and Russell (Reliable ML workshop). This poster examines some adverse effects of incentivizing artificial agents to be compliant in the off-switch game (a variant of CIRL).

Safe exploration

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes by Turchetta, Berkenkamp, and Krause (main conference). This paper develops a reinforcement learning algorithm called Safe MDP that can explore an unknown environment without getting into irreversible situations, unlike classical RL approaches.

Combating Reinforcement Learning’s Sisyphean Curse with Intrinsic Fear by Lipton, Gao, Li, Chen, and Deng (Reliable ML workshop). This work addresses the ‘Sisyphean curse’ of DQN algorithms forgetting past experiences, as they become increasingly unlikely under a new policy, and therefore eventually repeating catastrophic mistakes. The paper introduces an approach called ‘intrinsic fear’, which maintains a model for how likely different states are to lead to a catastrophe within some number of steps.
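As a rough illustration of the mechanism, here is a minimal sketch of how a learned fear model could penalize a Q-learning target (the toy architectures, the name `fear_net`, and the penalty weight are my own illustrative assumptions, not the paper’s implementation):

```python
import torch
import torch.nn as nn

# Toy networks: q_net estimates action values; fear_net is a binary classifier
# estimating the probability that a state leads to a catastrophe within some
# number of steps (trained separately on states labeled as dangerous or safe).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
fear_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

gamma, fear_weight = 0.99, 1.0

def fear_shaped_target(reward, next_state, done):
    """DQN-style target where the reward is reduced by the predicted catastrophe risk."""
    with torch.no_grad():
        risk = fear_net(next_state)                                  # P(catastrophe soon)
        bootstrap = q_net(next_state).max(dim=-1, keepdim=True).values
        return reward - fear_weight * risk + gamma * (1.0 - done) * bootstrap
```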

~~~~~

Most of these papers were related to inverse reinforcement learning – while IRL is a promising approach, it would be great to see more varied safety material at the next NIPS (fingers crossed for some innovative contributions from Rocket AI!). There were some more safety papers on other topics at UAI this summer: Safely Interruptible Agents (formalizing what it means to incentivize an agent to obey shutdown signals) and A Formal Solution to the Grain of Truth Problem (providing a broad theoretical framework for multiple agents learning to predict each other in arbitrary computable games).

(Cross-posted to Approximately Correct and the FLI blog. Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.)


OpenAI unconference on machine learning

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversations but also more auditory overload at SOCML than at other conferences.

To my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.


AI safety discussion group (image courtesy of Been Kim)

One value alignment question was how to incorporate a diverse set of values that represents all of humanity in the AI’s objective function. We pointed out that there are two complementary problems: 1) getting the AI’s values to be in the small part of values-space that’s human-compatible, and 2) averaging over that space in a representative way. People generally focus on the ways in which human values differ from each other, which leads them to underestimate the difficulty of the first problem and overestimate the difficulty of the second. We also agreed on the importance of allowing for moral progress by not locking in the values of AI systems.

Nate mentioned some alternatives to goal-optimizing agents – quantilizers and approval-directed agents. We also discussed the limitations of using blacklisting/whitelisting in the AI’s objective function: blacklisting is vulnerable to unforeseen shortcuts and usually doesn’t work from a security perspective, and whitelisting hampers the system’s ability to come up with creative solutions (e.g. the controversial move 37 by AlphaGo in the second game against Lee Sedol).

Been Kim brought up the recent EU regulation on the right to explanation for algorithmic decisions. This seems easy to game due to lack of good metrics for explanations. One proposed metric was that a human would be able to predict future model outputs from the explanation. This might fail for better-than-human systems by penalizing creative solutions if applied globally, but seems promising as a local heuristic.

Ian Goodfellow mentioned the difficulties posed by adversarial examples: an imperceptible adversarial perturbation to an image can make a convolutional network misclassify it with very high confidence. There might be some kind of No Free Lunch theorem where making a system more resistant to adversarial examples would trade off with performance on non-adversarial data.
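For concreteness, here is a minimal sketch of the fast gradient sign method from Goodfellow et al., one standard way such imperceptible perturbations are constructed (the model, input tensors, and epsilon value are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: nudge each pixel slightly in the direction
    that increases the classification loss, then clip to the valid image range."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```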

We also talked about dual-use AI technologies, e.g. advances in deep reinforcement learning for robotics that could end up being used for military purposes. It was unclear whether corporations or governments can be better trusted to use these technologies ethically: corporations have a profit motive, while governments are more likely to weaponize the technology.


More detailed notes by Janos coming soon! For a detailed overview of technical AI safety research areas, I highly recommend reading Concrete Problems in AI Safety.

Cross-posted to the FLI blog.

Looking back at my grad school journey

I recently defended my PhD thesis, and a chapter of my life has now come to an end. It feels both exciting and a bit disorienting to be done with this phase of much stress and growth. My past self who started this five years ago, with a very vague idea of what she was getting into, was a rather different person from my current self.

I have developed various skills over these five years, both professionally and otherwise. I learned to read papers and explain them to others, to work on problems that take months rather than hours and be content with small bits of progress. I used to believe that I should be interested in everything, and gradually gave myself permission not to care about most topics to be able to focus on things that are actually interesting to me, developing some sense of discernment. In 2012 I was afraid to comment on the LessWrong forum because I might say something stupid and get downvoted – in 2013 I wrote my first post, and in 2014 I started this blog. I went through the Toastmasters program and learned to speak in front of groups, though I still feel nervous when speaking on technical topics, especially about my own work. I co-founded a group house and a nonprofit, both of which are still flourishing. I learned how to run events and lead organizations, starting with LessWrong meetups and the Harvard Toastmasters club, which were later displaced by running FLI.

I remember agonizing over whether I should do a PhD or not, and I wish I had instead spent more time deciding where and how to do it. I applied to a few statistics departments in the Boston area and joined the same department that Janos was in, without seriously considering computer science, even though my only research experience back in undergrad was in that field. The statistics department was full of interesting courses and brilliant people that taught me a great deal, but the cultural fit wasn’t quite right and I felt a bit out of place there. I eventually found my way to the computer science department at the end of my fourth year, but I wish I had started out there to begin with.

My research work took a rather meandering path that somehow came together in the end. My first project was part of the astrostatistics seminar, which I was not particularly motivated about, but I expected myself to be interested in everything. I never quite understood what people were talking about in the seminar or what I was supposed to be doing, and quietly dropped the project at the end of my first year when I left for my quantitative analyst internship at D. E. Shaw. The internship was my first experience in industry, where I learned factor analysis and statistical coding in Python (the final review from my manager boiled down to “great coder, research skills need work”). In my second year, my advisor offered me a project left unfinished by his previous students, which was expected to take a few months to polish up. The project was on a new method for classification and variable selection called SBFC. I dug up a bunch of issues with the existing model and code, from runtime performance to MCMC detailed balance, and ended up stuck on the project for 3 years. During that time, I dabbled with another project that sort of petered out, did a Google internship on modeling ad quality, and sank a ton of time into FLI. In the middle of my fourth year, SBFC was still my only project, and things were not looking great for graduating.

This was when I realized that the part of statistics that was interesting to me was the overlap with computer science and AI, a.k.a. machine learning. I went to the NIPS conference for the first time, and met a lot of AI researchers – I didn’t understand a lot of their work, but I liked the way they thought. I co-organized FLI’s Puerto Rico conference and met more AI people there. I finally ventured outside the stats department and started sitting in on ML lab meetings at the CS department, which mostly consisted of research updates on variational autoencoders that went right over my head. I studied a lot to fill the gaps in my ML knowledge that were not covered by my statistics background, namely neural networks and reinforcement learning (still need to read Sutton & Barto…). To my surprise, many people at the ML lab were also transplants from other departments, officially doing PhDs in math or physics.

That summer I did my second internship at Google, on sum-product network models (SPNs) for anomaly detection in the Knowledge Graph. I wondered if it would result in a paper that could be shoehorned into my thesis, and whether I could find a common thread between SPNs, SBFC and my upcoming project at the ML lab. This unifying theme turned out to be interpretability – the main selling point of SBFC, an advantage of SPNs over other similarly expressive models, and one of my CS advisor’s interests. Working on interpretability was a way to bring more of the statistical perspective into machine learning, and seemed relevant to AI safety as well. With this newfound sense of direction, in a new environment, my fifth year had as much research output as the previous three put together, and I presented two workshop posters in 2016 – on SPNs at ICLR, and on RNN interpretability at ICML.

Volunteering for FLI during grad school started out as a kind of double life, and ended up interacting with my career in interesting ways. For a while I didn’t tell anyone in my department that I co-founded a nonprofit trying to save the world from existential risk, which was often taking up more of my time than research. However, FLI’s outreach work on AI safety was also beneficial to me – as one of the AI experts on the FLI core team, I met a lot of AI researchers who I might not have connected with otherwise. When I met the DeepMind founders at the Puerto Rico conference, I would not have predicted that I’d be interviewing for their AI safety team a year later. The two streams of my interests, ML and AI safety, have finally crossed, and the double life is no more.

What lessons have I drawn from the grad school experience, and what advice could I give to others?

  • Going to conferences and socializing with other researchers was super useful and fun. I highly recommend attending NIPS and ICML even if you’re not presenting.
  • Academic departments vary widely in their requirements. For example, the statistics department expected PhD students to teach 10 sections (I got away with doing 5 sections and it was still a lot of work), while the CS department only expected 1-2 sections.
  • Internships were a great source of research experience and funding (a better use of time than teaching, in my opinion). It’s worth spending a summer interning at a good company, even if you are definitely going into academia.
  • Contrary to common experience, writer’s block was not an obstacle for me. My actual bottleneck was coding, debugging and running experiments, which was often tedious and took over half of my research time, so it’s well worth optimizing those aspects of the work.
  • The way FLI ended up contributing to my career path reminds me of a story about Steve Jobs sitting in on a calligraphy class that later turned out to be super relevant to creating snazzy fonts for Apple computers. I would recommend making time for seemingly orthogonal activities during grad school that you’re passionate about, both because they provide a stimulating break from research, and because they could become unexpectedly useful later.

Doing a PhD was pretty stressful for me, but ultimately worthwhile. A huge thank you to everyone who guided and supported me through it!

Highlights from the Deep Learning Summer School


A few weeks ago, Janos and I attended the Deep Learning Summer School at the University of Montreal. Various well-known researchers covered topics related to deep learning, from reinforcement learning to computational neuroscience (see the list of speakers with slides and videos). Here are a few ideas that I found interesting in the talks (this list is far from exhaustive):

Cross-modal learning (Antonio Torralba)

You can do transfer learning in convolutional neural nets by freezing the parameters in some layers and retraining others on a different domain for the same task (paper). For example, if you have a neural net for scene recognition trained on real images of bedrooms, you could reuse the same architecture to recognize drawings of bedrooms. The last few layers represent abstractions like “bed” or “lamp”, which apply to drawings just as well as to real images, while the first few layers represent textures, which would differ between the two data modalities of real images and drawings. More generally, the last few layers are task-dependent and modality-independent, while the first few layers are the opposite.
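A minimal PyTorch sketch of the freezing mechanic described above, applied to the cross-modal case (the toy architecture, layer split, and input size are illustrative assumptions, not the paper’s setup):

```python
import torch.nn as nn
from torch.optim import Adam

# Toy scene-recognition convnet for 3x64x64 images: early layers capture
# textures (modality-specific), later layers capture abstractions like
# "bed" or "lamp" (modality-independent, task-specific).
early_layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
later_layers = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 10))

# To move from real bedroom photos to drawings of bedrooms (same task, new
# modality), freeze the modality-independent later layers and retrain only
# the texture-specific early layers on the drawings.
for param in later_layers.parameters():
    param.requires_grad = False
optimizer = Adam(early_layers.parameters(), lr=1e-3)
```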


Importance weighted autoencoders (Ruslan Salakhutdinov)

The variational autoencoder (VAE) is a popular generative model that constructs an autoencoder out of a recognition network (encoder) and a generative network (decoder). It then trains these networks to optimize a variational approximation of the posterior distribution by maximizing a lower bound on the log likelihood. IWAE is a variation that tightens the variational lower bound by relaxing the assumptions about the form of the posterior distribution. While the VAE maximizes a lower bound based on a single sample from the recognition distribution, the IWAE lower bound uses a weighted average over several samples. Applying importance weighting over several samples avoids the failure mode where the VAE objective penalizes models that produce even a few samples through the recognition network that don’t fit the posterior from the generative network, and taking several samples allows for better approximation of the posterior and thus a tighter lower bound. (The IWAE paper also gives a more intuitive introduction to VAEs than the original paper, in my opinion.)
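In symbols (following the IWAE paper’s notation, with latent variables h and recognition distribution q(h|x)), the single-sample VAE bound and the k-sample IWAE bound are:

```latex
% Single-sample VAE bound (ELBO) vs. k-sample importance-weighted bound
\begin{align}
\mathcal{L}_{\mathrm{VAE}}(x) &= \mathbb{E}_{h \sim q(h \mid x)}
  \left[ \log \frac{p(x, h)}{q(h \mid x)} \right] \\
\mathcal{L}_{k}(x) &= \mathbb{E}_{h_1, \ldots, h_k \sim q(h \mid x)}
  \left[ \log \frac{1}{k} \sum_{i=1}^{k} \frac{p(x, h_i)}{q(h_i \mid x)} \right]
\end{align}
% These satisfy \log p(x) \ge \mathcal{L}_{k+1} \ge \mathcal{L}_{k},
% and \mathcal{L}_{1} recovers the standard VAE objective.
```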

Variations on RNNs (Yoshua Bengio)

This talk mentioned a few recurrent neural network (RNN) models that were unfamiliar to me. Variational RNNs introduce some elements of variational autoencoders into RNNs by adding latent variables (z) into the top hidden layer (paper). The RNN internal structure is entirely deterministic besides the output probability model, so it can be helpful to inject a higher-level source of noise to model highly structured data (e.g. speech). This was further extended with multiresolution RNNs, which are variational and hierarchical (paper). Another interesting approach is real-time recurrent learning, a more biologically plausible alternative to backpropagation through time, where gradients are computed in an online feedforward manner without revisiting past history backwards. The originally proposed version involves a fairly inefficient exact computation of parameter gradients, while a more efficient recent approach approximates the forward gradient instead (paper).
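A rough sketch of what one step of such a variational RNN could look like (the dimensions, the use of a GRU cell, and the simple Gaussian parameterizations are my own simplifying assumptions, not the exact model from the paper):

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    """One step of a variational RNN: a latent variable z_t is sampled at each
    time step and injected into the deterministic recurrence, providing a
    higher-level source of noise on top of the output probability model."""
    def __init__(self, x_dim=200, z_dim=16, h_dim=64):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)              # p(z_t | h_{t-1})
        self.encoder = nn.Linear(x_dim + h_dim, 2 * z_dim)    # q(z_t | x_t, h_{t-1})
        self.decoder = nn.Linear(z_dim + h_dim, x_dim)        # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)           # h_t = f(x_t, z_t, h_{t-1})

    def forward(self, x_t, h_prev):
        prior_mu, prior_logvar = self.prior(h_prev).chunk(2, dim=-1)
        mu, logvar = self.encoder(torch.cat([x_t, h_prev], dim=-1)).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
        x_recon = self.decoder(torch.cat([z_t, h_prev], dim=-1))
        h_t = self.rnn(torch.cat([x_t, z_t], dim=-1), h_prev)
        # A training loop would add a KL term between the encoder and prior distributions.
        return x_recon, h_t, (mu, logvar, prior_mu, prior_logvar)
```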

Some other talks I really liked but ran out of steam to write about: Joelle Pineau’s intro to reinforcement learning, Pieter Abbeel on deep reinforcement learning, Shakir Mohamed on deep generative models, Surya Ganguli on neuroscience and deep learning.

Clopen AI: Openness in different aspects of AI development

There has been a lot of discussion about the appropriate level of openness in AI research in the past year – the OpenAI announcement, the blog post Should AI Be Open?, a response to the latter, and Nick Bostrom’s thorough paper Strategic Implications of Openness in AI development.

There is disagreement on this question within the AI safety community as well as outside it. Many people are justifiably afraid of concentrating power to create AGI and determine its values in the hands of one company or organization. Many others are concerned about the information hazards of open-sourcing AGI and the resulting potential for misuse. In this post, I argue that some sort of compromise between openness and secrecy will be necessary, as both extremes of complete secrecy and complete openness seem really bad. The good news is that there isn’t a single axis of openness vs secrecy – we can make separate judgment calls for different aspects of AGI development, and develop a set of guidelines.

Information about AI development can be roughly divided into two categories – technical and strategic. Technical information includes research papers, data, source code (for the algorithm, objective function), etc. Strategic information includes goals, forecasts and timelines, the composition of ethics boards, etc. Bostrom argues that openness about strategic information is likely beneficial in terms of both short- and long-term impact, while openness about technical information is good in the short term but can be bad in the long term by intensifying a competitive race to develop AGI. We need to further consider the tradeoffs of releasing different kinds of technical information.

Sharing papers and data is both more essential for the research process and less potentially dangerous than sharing code, since it is hard to reconstruct the code from that information alone. For example, it can be difficult to reproduce the results of a neural network algorithm based on the research paper, given the difficulty of tuning the hyperparameters and differences between computational architectures.

Releasing all the code required to run an AGI into the world, especially before it’s been extensively debugged, tested, and safeguarded against bad actors, would be extremely unsafe. Anyone with enough computational power could run the code, and it would be difficult to shut down the program or prevent it from copying itself all over the Internet.

However, releasing none of the source code is also a bad idea. It would currently be impractical, given the strong incentives for AI researchers to share at least part of the code for recognition and replicability. It would also be suboptimal, since sharing some parts of the code is likely to contribute to safety. For example, it would make sense to open-source the objective function code without the optimization code, which would reveal what is being optimized for but not how. This could make it possible to verify whether the objective is sufficiently representative of society’s values – the part of the system that would be the most understandable and important to the public anyway.

It is rather difficult to verify to what extent a company or organization is sharing its technical information on AI development, or to enforce either complete openness or complete secrecy. On the other hand, there is not much downside to specifying guidelines for what is expected to be shared and what isn’t. Developing a joint set of openness guidelines for the short and long term would be a worthwhile endeavor for the leading AI companies today.

(Cross-posted to the FLI blog and Approximately Correct. Thanks to Jelena Luketina and Janos Kramar for their detailed feedback on this post!)

New AI safety research agenda from Google Brain

Google Brain just released an inspiring research agenda, Concrete Problems in AI Safety, co-authored by researchers from OpenAI, Berkeley and Stanford. This document is a milestone in setting concrete research objectives for keeping reinforcement learning agents and other AI systems robust and beneficial. The problems studied are relevant both to near-term and long-term AI safety, from cleaning robots to higher-stakes applications. The paper takes an empirical focus on avoiding accidents as modern machine learning systems become more and more autonomous and powerful.

Reinforcement learning is currently the most promising framework for building artificial agents – it is thus especially important to develop safety guidelines for this subfield of AI. The research agenda describes a comprehensive (though likely non-exhaustive) set of safety problems, corresponding to where things can go wrong when building AI systems:

  • Mis-specification of the objective function by the human designer. Two common pitfalls when designing objective functions are negative side-effects and reward hacking (also known as wireheading), which are likely to happen by default unless we figure out how to guard against them. One of the key challenges is specifying what it means for an agent to have a low impact on the environment while achieving its objectives effectively.

  • Extrapolation from limited information about the objective function. Even with a correct objective function, human supervision is likely to be costly, which calls for scalable oversight of the artificial agent.

  • Extrapolation from limited training data or using an inadequate model. We need to develop safe exploration strategies that avoid irreversibly bad outcomes, and build models that are robust to distributional shift – able to fail gracefully in situations that are far outside the training data distribution.

The AI research community is increasingly focusing on AI safety in recent years, and Google Brain’s agenda is part of this trend. It follows on the heels of the Safely Interruptible Agents paper from Google DeepMind and the Future of Humanity Institute, which investigates how to avoid unintended consequences from interrupting or shutting down reinforcement learning agents. We at FLI are super excited that industry research labs at Google and OpenAI are spearheading and fostering collaboration on AI safety research, and look forward to the outcomes of this work.

(Cross-posted from the FLI blog.)

Using humility to counteract shame

“Pride is not the opposite of shame, but its source. True humility is the only antidote to shame.”

Uncle Iroh, “Avatar: The Last Airbender”

 

Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent – anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions – e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.

What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, “to be humble is to take specific actions in anticipation of your own errors”, which seems more like a possible consequence of being humble than a definition.

Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school.

While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset? One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don’t actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.

(Cross-posted to LessWrong. Thanks to Janos Kramar for his feedback on this post.)