Clopen AI: Openness in different aspects of AI development

Over the past year, there has been a lot of discussion about the appropriate level of openness in AI research – the OpenAI announcement, the blog post Should AI Be Open?, a response to the latter, and Nick Bostrom’s thorough paper Strategic Implications of Openness in AI Development.

There is disagreement on this question within the AI safety community as well as outside it. Many people are justifiably afraid of concentrating the power to create AGI and determine its values in the hands of one company or organization. Many others are concerned about the information hazards of open-sourcing AGI and the resulting potential for misuse. In this post, I argue that some sort of compromise between openness and secrecy will be necessary, since both extremes – complete secrecy and complete openness – seem really bad. The good news is that there isn’t a single axis of openness vs. secrecy: we can make separate judgment calls for different aspects of AGI development, and develop a set of guidelines.

Continue reading


New AI safety research agenda from Google Brain

Google Brain just released an inspiring research agenda, Concrete Problems in AI Safety, co-authored by researchers from OpenAI, Berkeley and Stanford. This document is a milestone in setting concrete research objectives for keeping reinforcement learning agents and other AI systems robust and beneficial. The problems studied are relevant to both near-term and long-term AI safety, from cleaning robots to higher-stakes applications. The paper focuses empirically on avoiding accidents as modern machine learning systems become increasingly autonomous and powerful.

Reinforcement learning is currently the most promising framework for building artificial agents – it is thus especially important to develop safety guidelines for this subfield of AI. The research agenda describes a comprehensive (though likely non-exhaustive) set of safety problems, corresponding to where things can go wrong when building AI systems:

Continue reading

Using humility to counteract shame

“Pride is not the opposite of shame, but its source. True humility is the only antidote to shame.”

Uncle Iroh, “Avatar: The Last Airbender”

 

Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent – anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions – e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.

Continue reading

Introductory resources on AI safety research

[See AI Safety Resources for the most recent version of this list.]

Reading list to get up to speed on the main ideas in the field of long-term AI safety. The resources are selected for relevance and/or brevity, and the list is not meant to be comprehensive. [Updated on 19 October 2017.]

Motivation

For a popular audience:

Sutskever and Amodei, 2017. Wall Street Journal: Protecting Against AI’s Existential Threat

Cade Metz, 2017. New York Times: Teaching A.I. Systems to Behave Themselves

FLI. AI risk background and FAQ. At the bottom of the background page, there is a more extensive list of resources on AI safety.

Tim Urban, 2015. Wait But Why: The AI Revolution. An accessible introduction to AI risk forecasts and arguments (with cute hand-drawn diagrams, and a few corrections from Luke Muehlhauser).

OpenPhil, 2015. Potential risks from advanced artificial intelligence. An overview of AI risks and timelines, possible interventions, and current actors in this space.

Continue reading

To contribute to AI safety, consider doing AI research

Among those concerned about risks from advanced AI, I’ve encountered people who would be interested in a career in AI research, but are worried that doing so would speed up progress in AI capabilities relative to safety. I think it is a mistake for AI safety proponents to avoid going into the field for this reason (a better reason would be already being well-positioned to do AI safety work elsewhere, e.g. at MIRI or FHI). This mistake contributed to me choosing statistics rather than computer science for my PhD, which I have some regrets about, though luckily there is enough overlap between the two fields that I can work on machine learning anyway.

I think the value of having more AI experts who are worried about AI safety is far higher than the downside of adding a few drops to the ocean of people trying to advance AI. Here are several reasons for this:

  1. Concerned researchers can inform and influence their colleagues, especially if they are outspoken about their views.
  2. Studying and working on AI brings understanding of the current challenges and breakthroughs in the field, which can usefully inform AI safety work (e.g. on wireheading in reinforcement learning agents; see the toy sketch after this list).
  3. Opportunities to work on AI safety are beginning to spring up within academia and industry, e.g. through FLI grants. In the next few years, it will be possible to do an AI-safety-focused PhD or postdoc in computer science, which would kill two birds with one stone.
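To make the wireheading example in point 2 concrete, here is a toy sketch of my own (purely illustrative, not taken from any paper): a single-state Q-learning agent that can either do its intended task or tamper with its reward sensor. A naive reward maximizer learns to tamper, which is exactly the failure mode that safety work on wireheading aims to understand and prevent.

```python
# Toy illustration of wireheading (hypothetical example, for illustration only):
# the agent can "work" (small true reward) or "tamper" with its reward sensor
# (large observed reward, zero true reward). Naive reward maximization
# learns to prefer tampering.
import random

ACTIONS = ["work", "tamper"]
q = {a: 0.0 for a in ACTIONS}   # action values for a single-state problem
alpha, epsilon = 0.1, 0.1
true_reward = 0.0

for step in range(5000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    if a == "work":
        observed, true = 1.0, 1.0     # doing the task as designed
    else:
        observed, true = 10.0, 0.0    # hacked sensor reports high reward
    true_reward += true
    q[a] += alpha * (observed - q[a])  # Q-learning update on the observed reward

print(q)  # q["tamper"] ends up far above q["work"], despite earning no true reward
```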

Continue reading

2015-16 New Year review

2015 progress

Research:

  • Finished paper on the Selective Bayesian Forest Classifier algorithm
  • Made an R package for SBFC (beta)
  • Worked at Google on unsupervised learning for the Knowledge Graph with Moshe Looks during the summer (paper)
  • Joined the HIPS research group at Harvard CS and started working with the awesome Finale Doshi-Velez
  • Ratio of coding time to writing time was too high overall

FLI:

  • Co-organized two meetings to brainstorm biotechnology risks
  • Co-organized two Machine Learning Safety meetings
  • Gave a talk at the Shaping Humanity’s Trajectory workshop at EA Global
  • Helped organize NIPS symposium on societal impacts of AI

Rationality / effectiveness:

  • Extensive use of FollowUpThen for sending reminders to future selves
  • Mapped out my personal bottlenecks
  • Sleep:
    • Tracked insomnia (26% of nights) and sleep time (average 1:30am, stayed up past 1am on 31% of nights)
    • Started working on sleep hygiene
    • Stopped using melatonin (found it ineffective)

Continue reading

Highlights and impressions from NIPS conference on machine learning

This year’s NIPS was an epicenter of the current enthusiasm about AI and deep learning – there was a visceral sense of how quickly the field of machine learning is progressing, and two new AI startups were announced. Attendance almost doubled compared to the 2014 conference (I hope they make it multi-track next year), and several popular workshops were standing room only. Given that there were only 400 accepted papers and almost 4000 people attending, most people were there to learn and socialize. The conference was a socially intense experience that reminded me a bit of Burning Man – the overall sense of excitement, the high density of spontaneous interesting conversations, the number of parallel events at any given time, and of course the accumulating exhaustion.

Some interesting talks and posters

Sergey Levine’s robotics demo at the crowded Deep Reinforcement Learning workshop (we showed up half an hour early to claim spots on the floor). This was one of the talks that gave me a sense of fast progress in the field. The presentation started with videos from this summer’s DARPA robotics challenge, where the robots kept falling down while trying to walk or open a door. Levine proceeded to outline his recent work on guided policy search, alternating between trajectory optimization and supervised training of the neural network, and granularizing complex tasks. He showed demos of robots successfully performing various high-dexterity tasks, like opening a door, screwing on a bottle cap, or putting a coat hanger on a rack. Impressive!
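To give a sense of the alternation Levine described, here is a minimal sketch in Python. Everything in it (optimize_trajectory, fit_policy, the toy linear dynamics) is an illustrative stand-in of my own, not his implementation: the real method uses a proper trajectory optimizer and a neural network policy rather than a linear one.

```python
# Minimal sketch of the guided-policy-search alternation:
# (1) locally optimize a few "guiding" trajectories, (2) fit the policy to
# their state-action pairs by supervised learning, then repeat.
# All components below are illustrative stand-ins, not the actual method.
import numpy as np

STATE_DIM, ACTION_DIM = 2, 2

def optimize_trajectory(init_state, horizon=20):
    """Stand-in for the local trajectory optimizer used in the real method."""
    traj, state = [], init_state
    for _ in range(horizon):
        action = -0.5 * state + 0.1 * np.random.randn(ACTION_DIM)  # toy "optimized" action
        traj.append((state.copy(), action))
        state = state + 0.1 * action                               # toy linear dynamics
    return traj

def fit_policy(policy, dataset, step=0.3):
    """Stand-in for supervised training of the neural-net policy:
    a least-squares fit of a linear policy to the guiding (state, action) pairs."""
    S = np.array([s for s, _ in dataset])
    A = np.array([a for _, a in dataset])
    W, *_ = np.linalg.lstsq(S, A, rcond=None)
    return (1 - step) * policy + step * W

policy = np.zeros((STATE_DIM, ACTION_DIM))
for iteration in range(10):                       # outer alternation loop
    dataset = []
    for _ in range(5):                            # collect several guiding trajectories
        dataset += optimize_trajectory(np.random.randn(STATE_DIM))
    policy = fit_policy(policy, dataset)          # supervised policy-fitting step
```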

Continue reading