Category Archives: machine learning

Portfolio approach to AI safety research

Long-term AI safety is an inherently speculative research area, aiming to ensure the safety of advanced future systems despite uncertainty about their design, algorithms, or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.

In this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. (In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.)

Assumptions about similarity between current and future AI systems

If a future general AI system has a similar algorithm to a present-day system, then there are likely to be some safety problems in common (though more severe in generally capable systems). Insights and solutions for those problems are likely to transfer to some degree from current systems to future ones. For example, if a general AI system is based on reinforcement learning, we can expect it to game its reward function in even more clever and unexpected ways than present-day reinforcement learning agents do. Those who hold the similarity assumption often expect most of the remaining breakthroughs on the path to general AI to be compositional rather than completely novel, enhancing and combining existing components in new and better-implemented ways (many current machine learning advances, such as AlphaGo, are examples of this).

Note that assuming similarity between current and future systems is not exactly the same as assuming that studying current systems is relevant to ensuring the safety of future systems, since we might still learn generalizable things by testing safety properties of current systems even if they are different from future systems.

Assuming similarity suggests a focus on empirical research based on testing the safety properties of current systems, while not making this assumption encourages more focus on theoretical research based on deriving safety properties from first principles, or on figuring out what kinds of alternative designs would lead to safe systems. For example, safety researchers in industry tend to assume more similarity between current and future systems than researchers at MIRI.

Here is my tentative impression of where different safety research groups are on this axis. This is a very approximate summary, since views often vary quite a bit within the same research group (e.g. FHI is particularly diverse in this regard).

[Figure: where different safety research groups fall on the similarity axis]

On the high-similarity side of the axis, we can explore the safety properties of different architectural / algorithmic approaches to AI, e.g. on-policy vs off-policy or model-free vs model-based reinforcement learning algorithms. It might be good to have someone working on safety issues for less commonly used agent algorithms, e.g. evolution strategies.

Assumptions about promising approaches to safety problems

Level of abstraction. What level of abstraction is most appropriate for tackling a particular problem? For example, approaches to the value learning problem range from explicitly specifying ethical constraints to capability amplification and indirect normativity, with cooperative inverse reinforcement learning somewhere in between. These assumptions could be combined by applying different levels of abstraction to different parts of the problem. For example, it might make sense to explicitly specify some human preferences that seem obvious and stable over time (e.g. “breathable air”), and use the more abstract approaches to impart the most controversial, unstable and vague concepts (e.g. “fairness” or “harm”). Overlap between the more and less abstract specifications can create helpful redundancy (e.g. air pollution as a form of harm + a direct specification of breathable air).

For many other safety problems, the abstraction axis is not as widely explored as for value learning. For example, most of the approaches to avoiding negative side effects proposed in Concrete Problems (e.g. impact regularizers and empowerment) are on a medium level of abstraction, while it also seems important to address the problem on a more abstract level by formalizing what we mean by side effects (which would help figure out what we should actually be regularizing, etc). On the other hand, almost all current approaches to wireheading / reward hacking are quite abstract, and the problem would benefit from more empirical work.
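
As a toy illustration of what a medium-abstraction approach looks like in practice, here is a minimal sketch of an impact-regularized reward (not the formulation from any particular paper); the state_distance measure, the baseline state, and the beta coefficient are all hypothetical placeholders.

```python
import numpy as np

def state_distance(state, baseline_state):
    """Hypothetical measure of how far the agent has pushed the world
    away from a baseline (e.g. the state reached by doing nothing)."""
    return np.linalg.norm(np.asarray(state) - np.asarray(baseline_state))

def regularized_reward(task_reward, state, baseline_state, beta=0.1):
    """Task reward minus a penalty proportional to the agent's impact."""
    return task_reward - beta * state_distance(state, baseline_state)

# Example: the agent gets 1.0 task reward but has perturbed one state
# variable relative to the "do nothing" baseline.
print(regularized_reward(1.0, state=[3.0, 0.5], baseline_state=[3.0, 0.0]))
```

The open question the paragraph points at is exactly what state_distance and the baseline should be – formalizing those is what a more abstract treatment of side effects would provide.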

Explicit specification vs learning from data. Whether a safety problem is better addressed by directly defining a concept (e.g. the Low Impact AI paper formalizes the impact of an AI system by breaking down the world into ~20 billion variables) or learning the concept from human feedback (e.g. the Deep Reinforcement Learning from Human Preferences paper teaches AI systems complex objectives that are difficult to specify directly, like doing a backflip). I think it’s important to address safety problems from both of these angles, since the direct approach is unlikely to work on its own, but can give some idea of the idealized form of the objective that we are trying to approximate by learning from data.
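
For concreteness, here is a rough sketch of the learning-from-data side, in the spirit of learning a reward model from pairwise human preferences; the network architecture, segment length, and data below are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical reward model: maps an 8-dimensional observation to a scalar reward.
reward_model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(segment_a, segment_b, human_prefers_a):
    """Bradley-Terry style loss: the probability that the human prefers
    segment A is a softmax over the two segments' summed predicted rewards."""
    r_a = reward_model(segment_a).sum()
    r_b = reward_model(segment_b).sum()
    logits = torch.stack([r_a, r_b]).unsqueeze(0)          # shape (1, 2)
    target = torch.tensor([0 if human_prefers_a else 1])   # index of preferred segment
    return nn.functional.cross_entropy(logits, target)

# One update from a single (made-up) comparison of two 10-step segments.
seg_a, seg_b = torch.randn(10, 8), torch.randn(10, 8)
loss = preference_loss(seg_a, seg_b, human_prefers_a=True)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A direct specification would instead replace the learned reward_model with a hand-written function – exactly the trade-off discussed above.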

Modularity of AI design. What level of modularity makes it easier to ensure safety? Designs range from end-to-end systems to ones composed of many separately trained parts that are responsible for specific abilities and tasks. Safety approaches for the modular case can limit the capabilities of individual parts of the system, and use some parts to enforce checks and balances on other parts. MIRI’s foundations approach focuses on a unified agent, while the safety properties on the high-modularity side have mostly been explored by Eric Drexler (more recent work is not public but available upon request). It would be good to see more people work on the high-modularity assumption.

Takeaways

To summarize, here are some relatively neglected assumptions:

  • Medium similarity in algorithms / architectures
  • Less popular agent algorithms
  • Modular general AI systems
  • More / less abstract approaches to different safety problems (more for side effects, less for wireheading, etc)
  • More direct / data-based approaches to different safety problems

From a portfolio approach perspective, a particular research avenue is worthwhile if it helps to cover the space of possible reasonable assumptions. For example, while MIRI’s research is somewhat controversial, it relies on a unique combination of assumptions that other groups are not exploring, and is thus quite useful in terms of covering the space of possible assumptions.

I think the FLI grant program contributed to diversifying the safety research portfolio by encouraging researchers with different backgrounds to enter the field. It would be good for grantmakers in AI safety to continue to optimize for this in the future (e.g. one interesting idea is using a lottery after filtering for quality of proposals).

When working on AI safety, we need to hedge our bets and look out for unknown unknowns – it’s too important to put all our eggs in one basket.

(Cross-posted to the FLI blog and Approximately Correct. Thanks to Janos Kramar, Jan Leike and Shahar Avin for their feedback on this post. Thanks to Jaan Tallinn and others for inspiring discussions.)

Highlights from the ICLR conference: food, ships, and ML security

It’s been an eventful few days at ICLR in the coastal town of Toulon in Southern France, after a pleasant train ride from London with a stopover in Paris for some sightseeing. There was more food than is usually provided at conferences, and I ended up almost entirely subsisting on tasty appetizers. The parties were memorable this year, including one in a vineyard and one in a naval museum. The overall theme of the conference setting could be summarized as “finger food and ships”.

There were a lot of interesting papers this year, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)

On the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as preventing them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.
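
As background for readers who haven't seen how such perturbations are constructed, here is a minimal sketch of the fast gradient sign method, one of the simplest attacks (not the specific method used in the papers above); `model` is assumed to be any differentiable PyTorch image classifier.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Fast gradient sign method: shift each pixel by +/- epsilon in the
    direction that increases the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with any classifier `model`:
# adv = fgsm_perturb(model, batch_of_images, true_labels)
# model(adv).argmax(dim=1)   # often disagrees with true_labels
```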

In more encouraging news, adversarial examples are not entirely transferable between different models. For targeted examples, which aim to be misclassified as a specific class, the target class is not preserved when transferring to a different model. For example, if an image of a school bus is classified as a crocodile by the original model, it has at most 4% probability of being seen as a crocodile by another model. The paper introduces an ensemble method for developing adversarial examples whose targets do transfer, but this seems to only work well if the ensemble includes a model with a similar architecture to the new model.

On the defense side, there were some new methods for detecting adversarial examples. One method augments neural nets with a detector subnetwork, which works quite well and generalizes to new adversaries (if they are similar to or weaker than the adversary used for training). Another approach analyzes adversarial images using PCA, and finds that they are similar to normal images in the first few thousand principal components, but have a lot more variance in later components. Note that the reverse is not the case – adding arbitrary variation in trailing components does not necessarily encourage misclassification.
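
Here is a rough sketch of how such a PCA-based check could look in code; the stand-in data, component cutoff, and threshold are placeholders rather than the paper's actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit PCA on a matrix of flattened clean images (stand-in random data here).
clean_images = np.random.rand(1000, 32 * 32 * 3)
pca = PCA().fit(clean_images)

def trailing_component_variance(image, num_leading=500):
    """Variance of an image's coefficients in the trailing principal components,
    which tends to be inflated for adversarial inputs."""
    coeffs = pca.transform(image.reshape(1, -1))[0]
    return np.var(coeffs[num_leading:])

def looks_adversarial(image, threshold):
    # The threshold would be calibrated on held-out clean images.
    return trailing_component_variance(image) > threshold
```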

There has also been progress in scaling adversarial training to larger models and data sets, which also found that higher-capacity models are more resistant to adversarial examples than lower-capacity models. My overall impression is that adversarial attacks are still ahead of adversarial defenses, but the defense side is starting to catch up.
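
A minimal sketch of what an adversarial training step might look like, reusing the imports and the fgsm_perturb helper from the attack sketch above (the clean/adversarial batch mix and epsilon are arbitrary choices here):

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs,
    so the model learns from its own adversarial examples."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)  # from the sketch above
    inputs = torch.cat([images, adv_images])
    targets = torch.cat([labels, labels])
    loss = nn.functional.cross_entropy(model(inputs), targets)
    optimizer.zero_grad()  # clears the stale gradients left by fgsm_perturb
    loss.backward()
    optimizer.step()
    return loss.item()
```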

(Cross-posted to the FLI blog and Approximately Correct. Thanks to Janos Kramar for his feedback on this post.)

2016-17 New Year review

2016 progress

Research / career:

  • Got a job at DeepMind as a research scientist in AI safety.
  • Presented MiniSPN paper at ICLR workshop.
  • Finished RNN interpretability paper and presented at ICML and NIPS workshops.
  • Attended the Deep Learning Summer School.
  • Finished and defended PhD thesis.
  • Moved to London and started working at DeepMind.

FLI:

  • Talk and panel (moderator) at Effective Altruism Global X Boston
  • Talk and panel at the Governance of Emerging Technologies conference at ASU
  • Talk and panel at Brain Bar Budapest
  • AI safety session at OpenAI unconference
  • Talk and panel at Effective Altruism Global X Oxford
  • Talk and panel at Cambridge Catastrophic Risk Conference run by CSER

Rationality / effectiveness:

  • Went to a 5-day Zentensive meditation retreat with Janos, in between grad school and moving to London. This was very helpful for practicing connecting with my direct emotional experience, and a good way to reset during a life transition.
  • Stopped using 42goals (too glitchy) and started recording data in a Google form emailed to myself daily. Now I am actually entering accurate data every day instead of doing it retroactively whenever I remember. I tried a number of goal tracking apps, but all of them seemed too inflexible (I was surprised not to find anything that provides correlation charts between different goals, e.g. meditation vs. hours of sleep).

Random cool things:

  • Hiked in the Andes to an altitude of 17,000 feet.
  • Visited the Grand Canyon.
  • New countries visited: UK, Bolivia, Spain.
  • Started a group house in London (moving there in a few weeks).
  • Started contributing to the new blog Approximately Correct on societal impacts of machine learning.

2016 prediction outcomes

Resolutions:

  1. Finish PhD thesis (70%) – done
  2. Write at least 12 blog posts (40%) – 9
  3. Meditate at least 200 days (50%) – 245
  4. Exercise at least 200 days (50%) – 282
  5. Do at least 5 pullups in a row (40%) – still only 2-3
  6. Record at least 50 new thoughts (50%) – 29
  7. Stay up past 1:30am at most 20% of the nights (40%) – 26.8%
  8. Do at least 10 pomodoros per week on average (50%) – 13

Predictions:

  1. At least one paper accepted for publication (70%) – two papers accepted to workshops
  2. I will get at least one fellowship (40%)
  3. Insomnia at most 20% of nights (20%) – 18.3%
  4. FLI will co-organize at least 3 AI safety workshops (50%) – AAAI, ICML, NIPS

Calibration:

  • Low predictions (20-40%): 1/5 = 20% (overconfident)
  • Medium predictions (50-70%): 6/7 = 85% (underconfident)
  • It’s interesting that my 40% predictions were all wrong, and my 50% predictions were almost all correct. I seem to be translating system 1 labels of ‘not that likely’ and ‘reasonably likely’ to 40% and 50% respectively, while they should translate to something more like 25% and 70%. After the overconfident predictions last year, I tried to tone down the predictions for this year, but the lower ones didn’t get toned down enough.
  • I seem to be more accurate on predictions than resolutions, probably due to wishful thinking. Experimenting with no resolutions for next year.

2017 predictions

  1. Our AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (70%).
  2. I will write at least 9 blog posts (50%).
  3. I will meditate at least 250 days (45%).
  4. I will exercise at least 250 days (55%).
  5. I will visit at least 2 new countries (80%).
  6. I will attend Burning Man (85%).

AI Safety Highlights from NIPS 2016

This year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.

I was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:

Inverse Reinforcement Learning

Cooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russell, Abbeel, and Dragan (main conference). This paper addresses the value alignment problem by teaching the artificial agent about the human’s reward function, using instructive demonstrations rather than optimal demonstrations like in classical IRL (e.g. showing the robot how to make coffee vs having it observe coffee being made). (3-minute video)

Generalizing Skills with Semi-Supervised Reinforcement Learning by Finn, Yu, Fu, Abbeel, and Levine (Deep RL workshop). This work addresses the scalable oversight problem by proposing the first tractable algorithm for semi-supervised RL. This allows artificial agents to robustly learn reward functions from limited human feedback. The algorithm uses an IRL-like approach to infer the reward function, using the agent’s own prior experiences in the supervised setting as an expert demonstration.

Towards Interactive Inverse Reinforcement Learning by Armstrong and Leike (Reliable ML workshop). This paper studies the incentives of an agent that is trying to learn about the reward function while simultaneously maximizing the reward. The authors discuss some ways to reduce the agent’s incentive to manipulate the reward learning process.

Should Robots Have Off Switches? by Milli, Hadfield-Menell, and Russell (Reliable ML workshop). This poster examines some adverse effects of incentivizing artificial agents to be compliant in the off-switch game (a variant of CIRL).

Safe exploration

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes by Turchetta, Berkenkamp, and Krause (main conference). This paper develops a reinforcement learning algorithm called Safe MDP that can explore an unknown environment without getting into irreversible situations, unlike classical RL approaches.

Combating Reinforcement Learning’s Sisyphean Curse with Intrinsic Fear by Lipton, Gao, Li, Chen, and Deng (Reliable ML workshop). This work addresses the ‘Sisyphean curse’ of DQN algorithms forgetting past experiences, as they become increasingly unlikely under a new policy, and therefore eventually repeating catastrophic mistakes. The paper introduces an approach called ‘intrinsic fear’, which maintains a model for how likely different states are to lead to a catastrophe within some number of steps.
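
My rough reading of the approach, as a sketch (the network sizes, the catastrophe-classifier training data, and the fear coefficient are placeholders, not the paper's settings):

```python
import torch
import torch.nn as nn

# Hypothetical networks: a Q-network and a "fear" classifier that predicts the
# probability that a state leads to catastrophe within some number of steps.
q_network = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
fear_model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

def fear_shaped_target(reward, next_state, gamma=0.99, fear_coeff=1.0):
    """DQN-style bootstrap target with the predicted catastrophe probability
    of the next state subtracted as an 'intrinsic fear' penalty."""
    with torch.no_grad():
        max_next_q = q_network(next_state).max()
        fear_penalty = fear_coeff * fear_model(next_state).squeeze()
    return reward + gamma * max_next_q - fear_penalty

# The fear_model itself would be trained as a classifier on states observed
# shortly before catastrophes (positives) versus other states (negatives).
```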

~~~~~

Most of these papers were related to inverse reinforcement learning – while IRL is a promising approach, it would be great to see more varied safety material at the next NIPS (fingers crossed for some innovative contributions from Rocket AI!). There were some more safety papers on other topics at UAI this summer: Safely Interruptible Agents (formalizing what it means to incentivize an agent to obey shutdown signals) and A Formal Solution to the Grain of Truth Problem (providing a broad theoretical framework for multiple agents learning to predict each other in arbitrary computable games).

(Cross-posted to Approximately Correct and the FLI blog. Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.)

OpenAI unconference on machine learning

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversations but also more auditory overload at SOCML than at other conferences.

To my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.

AI safety discussion group (image courtesy of Been Kim)

One value alignment question was how to incorporate a diverse set of values that represents all of humanity in the AI’s objective function. We pointed out that there are two complementary problems: 1) getting the AI’s values to be in the small part of values-space that’s human-compatible, and 2) averaging over that space in a representative way. People generally focus on the ways in which human values differ from each other, which leads them to underestimate the difficulty of the first problem and overestimate the difficulty of the second. We also agreed on the importance of allowing for moral progress by not locking in the values of AI systems.

Nate mentioned some alternatives to goal-optimizing agents – quantilizers and approval-directed agents. We also discussed the limitations of using blacklisting/whitelisting in the AI’s objective function: blacklisting is vulnerable to unforeseen shortcuts and usually doesn’t work from a security perspective, and whitelisting hampers the system’s ability to come up with creative solutions (e.g. the controversial move 37 by AlphaGo in the second game against Lee Sedol).

Been Kim brought up the recent EU regulation on the right to explanation for algorithmic decisions. This seems easy to game due to lack of good metrics for explanations. One proposed metric was that a human would be able to predict future model outputs from the explanation. This might fail for better-than-human systems by penalizing creative solutions if applied globally, but seems promising as a local heuristic.

Ian Goodfellow mentioned the difficulties posed by adversarial examples: an imperceptible adversarial perturbation to an image can make a convolutional network misclassify it with very high confidence. There might be some kind of No Free Lunch theorem where making a system more resistant to adversarial examples would trade off with performance on non-adversarial data.

We also talked about dual-use AI technologies, e.g. advances in deep reinforcement learning for robotics that could end up being used for military purposes. It was unclear whether corporations or governments are more trustworthy with using these technologies ethically: corporations have a profit motive, while governments are more likely to weaponize the technology.

More detailed notes by Janos coming soon! For a detailed overview of technical AI safety research areas, I highly recommend reading Concrete Problems in AI Safety.

Cross-posted to the FLI blog.

Looking back at my grad school journey

I recently defended my PhD thesis, and a chapter of my life has now come to an end. It feels both exciting and a bit disorienting to be done with this phase of much stress and growth. My past self who started this five years ago, with a very vague idea of what she was getting into, was a rather different person from my current self.

I have developed various skills over these five years, both professionally and otherwise. I learned to read papers and explain them to others, to work on problems that take months rather than hours and be content with small bits of progress. I used to believe that I should be interested in everything, and gradually gave myself permission not to care about most topics to be able to focus on things that are actually interesting to me, developing some sense of discernment. In 2012 I was afraid to comment on the LessWrong forum because I might say something stupid and get downvoted – in 2013 I wrote my first post, and in 2014 I started this blog. I went through the Toastmasters program and learned to speak in front of groups, though I still feel nervous when speaking on technical topics, especially about my own work. I co-founded a group house and a nonprofit, both of which are still flourishing. I learned how to run events and lead organizations, starting with LessWrong meetups and the Harvard Toastmasters club, which were later displaced by running FLI.

I remember agonizing over whether I should do a PhD or not, and I wish I had instead spent more time deciding where and how to do it. I applied to a few statistics departments in the Boston area and joined the same department that Janos was in, without seriously considering computer science, even though my only research experience back in undergrad was in that field. The statistics department was full of interesting courses and brilliant people that taught me a great deal, but the cultural fit wasn’t quite right and I felt a bit out of place there. I eventually found my way to the computer science department at the end of my fourth year, but I wish I had started out there to begin with.

My research work took a rather meandering path that somehow came together in the end. My first project was part of the astrostatistics seminar, which I was not particularly motivated about, but I expected myself to be interested in everything. I never quite understood what people were talking about in the seminar or what I was supposed to be doing, and quietly dropped the project at the end of my first year when leaving for my quantitative analyst internship at D. E. Shaw. The internship was my first experience in industry, where I learned factor analysis and statistical coding in Python (the final review from my manager boiled down to “great coder, research skills need work”). In my second year, my advisor offered me a project left unfinished by his previous students, which would take a few months to polish up. The project was on a new method for classification and variable selection called SBFC. I dug up a bunch of issues with the existing model and code, from runtime performance to MCMC detailed balance, and ended up stuck on the project for 3 years. During that time, I dabbled with another project that sort of petered out, did a Google internship on modeling ad quality, and sank a ton of time into FLI. In the middle of my fourth year, SBFC was still my only project, and things were not looking great for graduating.

This was when I realized that the part of statistics that was interesting to me was the overlap with computer science and AI, a.k.a. machine learning. I went to the NIPS conference for the first time, and met a lot of AI researchers – I didn’t understand a lot of their work, but I liked the way they thought. I co-organized FLI’s Puerto Rico conference and met more AI people there. I finally ventured outside the stats department and started sitting in on ML lab meetings at the CS department, which mostly consisted of research updates on variational autoencoders that went right over my head. I studied a lot to fill the gaps in my ML knowledge that were not covered by my statistics background, namely neural networks and reinforcement learning (still need to read Sutton & Barto…). To my surprise, many people at the ML lab were also transplants from other departments, officially doing PhDs in math or physics.

That summer I did my second internship at Google, on sum-product network models (SPNs) for anomaly detection in the Knowledge Graph. I wondered if it would result in a paper that could be shoehorned into my thesis, and whether I could find a common thread between SPNs, SBFC and my upcoming project at the ML lab. This unifying theme turned out to be interpretability – the main selling point of SBFC, an advantage of SPNs over other similarly expressive models, and one of my CS advisor’s interests. Working on interpretability was a way to bring more of the statistical perspective into machine learning, and seemed relevant to AI safety as well. With this newfound sense of direction, in a new environment, my fifth year had as much research output as the previous three put together, and I presented two workshop posters in 2016 – on SPNs at ICLR, and on RNN interpretability at ICML.

Volunteering for FLI during grad school started out as a kind of double life, and ended up interacting with my career in interesting ways. For a while I didn’t tell anyone in my department that I co-founded a nonprofit trying to save the world from existential risk, which was often taking up more of my time than research. However, FLI’s outreach work on AI safety was also beneficial to me – as one of the AI experts on the FLI core team, I met a lot of AI researchers who I may not have connected with otherwise. When I met the DeepMind founders at the Puerto Rico conference, I would not have predicted that I’d be interviewing for their AI safety team a year later. The two streams of my interests, ML and AI safety, have finally crossed, and the double life is no more.

What lessons have I drawn from the grad school experience, and what advice could I give to others?

  • Going to conferences and socializing with other researchers was super useful and fun. I highly recommend attending NIPS and ICML even if you’re not presenting.
  • Academic departments vary widely in their requirements. For example, the statistics department expected PhD students to teach 10 sections (I got away with doing 5 sections and it was still a lot of work), while the CS department only expected 1-2 sections.
  • Internships were a great source of research experience and funding (a better use of time than teaching, in my opinion). It’s worth spending a summer interning at a good company, even if you are definitely going into academia.
  • Contrary to common experience, writer’s block was not an obstacle for me. My actual bottleneck was coding, debugging and running experiments, which was often tedious and took over half of my research time, so it’s well worth optimizing those aspects of the work.
  • The way FLI ended up contributing to my career path reminds me of a story about Steve Jobs sitting in on a calligraphy class that later turned out to be super relevant to creating snazzy fonts for Apple computers. I would recommend making time for seemingly orthogonal activities during grad school that you’re passionate about, both because they provide a stimulating break from research, and because they could become unexpectedly useful later.

Doing a PhD was pretty stressful for me, but ultimately worthwhile. A huge thank you to everyone who guided and supported me through it!

Highlights from the Deep Learning Summer School

A few weeks ago, Janos and I attended the Deep Learning Summer School at the University of Montreal. Various well-known researchers covered topics related to deep learning, from reinforcement learning to computational neuroscience (see the list of speakers with slides and videos). Here are a few ideas that I found interesting in the talks (this list is far from exhaustive):

Cross-modal learning (Antonio Torralba)

You can do transfer learning in convolutional neural nets by freezing the parameters in some layers and retraining others on a different domain for the same task (paper). For example, if you have a neural net for scene recognition trained on real images of bedrooms, you could reuse the same architecture to recognize drawings of bedrooms. The last few layers represent abstractions like “bed” or “lamp”, which apply to drawings just as well as to real images, while the first few layers represent textures, which would differ between the two data modalities of real images and drawings. More generally, the last few layers are task-dependent and modality-independent, while the first few layers are the opposite.
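
Here is a minimal PyTorch sketch of that freezing pattern, using an off-the-shelf ResNet as a stand-in for the scene-recognition network in the paper; which blocks count as "texture-level" versus "concept-level" is an illustrative choice, not the paper's exact split.

```python
import torch
from torchvision import models

# A scene classifier trained on real photos, to be adapted to line drawings of
# the same scene categories: retrain the early, texture-level layers and freeze
# the later, modality-independent layers ("bed", "lamp", ...).
model = models.resnet18(pretrained=True)

for name, param in model.named_parameters():
    # Retrain only the first convolutional blocks; freeze everything else.
    param.requires_grad = name.startswith(("conv1", "bn1", "layer1", "layer2"))

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
# ...then train on (drawing, scene_label) pairs as usual.
```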

Importance weighted autoencoders (Ruslan Salakhutdinov)

The variational autoencoder (VAE) is a popular generative model that constructs an autoencoder out of a recognition network (encoder) and a generative network (decoder). It then trains these networks to optimize a variational approximation of the posterior distribution by maximizing a lower bound on the log likelihood. IWAE is a variation that tightens the variational lower bound by relaxing the assumptions about the form of the posterior distribution. While the VAE maximizes a lower bound based on a single sample from the recognition distribution, the IWAE lower bound uses a weighted average over several samples. Applying importance weighting over several samples avoids the failure mode where the VAE objective penalizes models that produce even a few samples through the recognition network that don’t fit the posterior from the generative network, and taking several samples allows for a better approximation of the posterior and thus a tighter lower bound. (The IWAE paper also gives a more intuitive introduction to VAEs than the original paper, in my opinion.)
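
To make the difference concrete, here is a small sketch of the IWAE bound computation, assuming the log-densities log p(x, z_i) and log q(z_i | x) have already been evaluated for k samples per data point (the tensor shapes are my own convention):

```python
import math
import torch

def iwae_bound(log_p_xz, log_q_zx):
    """Importance weighted lower bound on log p(x).

    log_p_xz: tensor of shape (k, batch) with log p(x, z_i) for k posterior samples
    log_q_zx: tensor of the same shape with log q(z_i | x)
    """
    k = log_p_xz.shape[0]
    log_weights = log_p_xz - log_q_zx   # log importance weights
    # log of the average importance weight, computed stably with logsumexp
    return torch.logsumexp(log_weights, dim=0) - math.log(k)

# With k = 1 this reduces to the standard VAE objective (the ELBO);
# averaging over more samples gives a tighter bound on log p(x).
```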

Variations on RNNs (Yoshua Bengio)

This talk mentioned a few recurrent neural network (RNN) models that were unfamiliar to me. Variational RNNs introduce some elements of variational autoencoders into RNNs by adding latent variables (z) into the top hidden layer (paper). The RNN internal structure is entirely deterministic besides the output probability model, so it can be helpful to inject a higher-level source of noise to model highly structured data (e.g. speech). This was further extended with multiresolution RNNs, which are variational and hierarchical (paper). Another interesting model is real-time recurrent learning, a more biologically plausible alternative to backpropagation through time, where gradients are computed in an online feedforward manner without revisiting past history backwards. The originally proposed version involves a fairly inefficient exact computation of parameter gradients, while a more efficient recent approach approximates the forward gradient instead (paper).

Some other talks I really liked but ran out of steam to write about: Joelle Pineau’s intro to reinforcement learning, Pieter Abbeel on deep reinforcement learning, Shakir Mohamed on deep generative models, Surya Ganguli on neuroscience and deep learning.