Takeaways from self-tracking data

I’ve been collecting data about myself on a daily basis for the past 3 years. Half a year ago, I switched from using 42goals (which I only remembered to fill out once every few days) to a Google form emailed to me daily (which I fill out consistently because I check email often). Now for the moment of truth – a correlation matrix!

The data consists of “mood variables” (anxiety, tiredness, and “zoneout” – how distracted/spacey I’m feeling), “action variables” (exercise and meditation), and “sleep variables” (hours of sleep, sleep start/end time, insomnia). There are 5 binary variables (meditation, exercise, evening insomnia, morning insomnia, headache), and the rest are ordinal or continuous. Almost all the variables have 6 months of data, except that I started tracking anxiety 5 months ago and zoneout 2 months ago.
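
Before getting to the results, here is a minimal sketch of how such a correlation matrix can be computed with pandas, assuming the form responses are exported to a CSV with one row per day (the file and column names below are hypothetical). Spearman correlation is a reasonable choice for a mix of binary, ordinal, and continuous variables, and pandas drops missing values pairwise, so the shorter anxiety and zoneout histories only affect their own pairs.

```python
# A minimal sketch, not my actual analysis script. Assumes the daily Google
# form responses are exported to "daily_log.csv" with one row per day;
# the column names below are hypothetical.
import pandas as pd

df = pd.read_csv("daily_log.csv")

cols = ["anxiety", "tiredness", "zoneout", "exercise", "meditation",
        "sleep_hours", "evening_insomnia", "morning_insomnia", "headache"]

# Spearman (rank) correlation copes with ordinal and binary variables;
# .corr() excludes missing values pairwise, so variables with shorter
# histories only shrink their own pairs.
corr = df[cols].corr(method="spearman")
print(corr.round(2))
```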

Continue reading

Highlights from the ICLR conference: food, ships, and ML security

It’s been an eventful few days at ICLR in the coastal town of Toulon in Southern France, after a pleasant train ride from London with a stopover in Paris for some sightseeing. There was more food than is usually provided at conferences, and I ended up almost entirely subsisting on tasty appetizers. The parties were memorable this year, including one in a vineyard and one in a naval museum. The overall theme of the conference setting could be summarized as “finger food and ships”.

There were a lot of interesting papers this year, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)

Continue reading

2016-17 New Year review

2016 progress

Research / career:

  • Got a job at DeepMind as a research scientist in AI safety.
  • Presented MiniSPN paper at ICLR workshop.
  • Finished RNN interpretability paper and presented at ICML and NIPS workshops.
  • Attended the Deep Learning Summer School.
  • Finished and defended PhD thesis.
  • Moved to London and started working at DeepMind.

FLI:

  • Talk and panel (moderator) at Effective Altruism Global X Boston
  • Talk and panel at the Governance of Emerging Technologies conference at ASU
  • Talk and panel at Brain Bar Budapest
  • AI safety session at OpenAI unconference
  • Talk and panel at Effective Altruism Global X Oxford
  • Talk and panel at Cambridge Catastrophic Risk Conference run by CSER

Continue reading

AI Safety Highlights from NIPS 2016

This year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.
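
To make that challenge concrete, here is a toy sketch of my own construction (not OpenAI’s actual demo): the designer’s true goal is for the agent to finish a lap, but the proxy reward pays for collecting pellets that respawn along the track, so the reward-maximizing policy circles the pellets forever instead of finishing.

```python
# Toy illustration of reward hacking (my own construction, not OpenAI's demo).
# True goal: reach the finish line. Proxy reward: +1 per pellet collected.
# Pellets respawn each step, so looping past them beats ever finishing.

def run_episode(policy, steps=100):
    """Simulate a 10-cell circular track; cell 9 is the finish line."""
    pos, proxy_reward, finished = 0, 0, False
    for _ in range(steps):
        pos = policy(pos)
        if pos in (2, 5):      # pellet cells (pellets respawn each step)
            proxy_reward += 1
        if pos == 9:           # finish line: the *intended* goal
            finished = True
            break
    return proxy_reward, finished

def finisher(pos):             # heads straight for the finish line
    return (pos + 1) % 10

def looper(pos):               # shuttles between the two pellet cells
    return 5 if pos != 5 else 2

print(run_episode(finisher))   # (2, True):    low proxy reward, goal achieved
print(run_episode(looper))     # (100, False): high proxy reward, never finishes
```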

I was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:

Continue reading

OpenAI unconference on machine learning

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and on transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversations but also more auditory overload at SOCML than at other conferences.

To my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.

AI safety discussion group (image courtesy of Been Kim)

Continue reading

Looking back at my grad school journey

I recently defended my PhD thesis, and a chapter of my life has now come to an end. It feels both exciting and a bit disorienting to be done with this phase of much stress and growth. My past self who started this five years ago, with a very vague idea of what she was getting into, was a rather different person from my current self.

I have developed various skills over these five years, both professionally and otherwise. I learned to read papers and explain them to others, and to work on problems that take months rather than hours while staying content with small bits of progress. I used to believe that I should be interested in everything; gradually, I gave myself permission not to care about most topics, so that I could focus on the things that are actually interesting to me, and developed some sense of discernment. In 2012 I was afraid to comment on the LessWrong forum because I might say something stupid and get downvoted – in 2013 I wrote my first post, and in 2014 I started this blog. I went through the Toastmasters program and learned to speak in front of groups, though I still feel nervous when speaking on technical topics, especially about my own work. I co-founded a group house and a nonprofit, both of which are still flourishing. I learned how to run events and lead organizations, starting with LessWrong meetups and the Harvard Toastmasters club, which were later displaced by running FLI.

Continue reading

Highlights from the Deep Learning Summer School

A few weeks ago, Janos and I attended the Deep Learning Summer School at the University of Montreal. Various well-known researchers covered topics related to deep learning, from reinforcement learning to computational neuroscience (see the list of speakers with slides and videos). Here are a few ideas that I found interesting in the talks (this list is far from exhaustive):

Cross-modal learning (Antonio Torralba)

You can do transfer learning in convolutional neural nets by freezing the parameters in some layers and retraining others on a different domain for the same task (paper). For example, if you have a neural net for scene recognition trained on real images of bedrooms, you could reuse the same architecture to recognize drawings of bedrooms. The last few layers represent abstractions like “bed” or “lamp”, which apply to drawings just as well as to real images, while the first few layers represent textures, which would differ between the two data modalities of real images and drawings. More generally, the last few layers are task-dependent and modality-independent, while the first few layers are the opposite.
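
As a rough sketch of the mechanics in PyTorch, with a stock torchvision model standing in for the scene-recognition network (an illustration under those assumptions, not the paper’s actual setup): freeze the later, modality-independent layers and retrain only the early, texture-level layers on the new modality.

```python
# A rough sketch of cross-modal transfer, assuming PyTorch with a stock
# torchvision AlexNet standing in for the scene-recognition network
# (not the paper's actual architecture or training setup).
import torch
from torchvision import models

model = models.alexnet(pretrained=True)

# Freeze everything: the later layers encode modality-independent
# abstractions ("bed", "lamp") that should transfer to drawings as-is.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the first two conv blocks, whose texture-level features
# differ between real photos and drawings and so must be retrained.
for layer in model.features[:6]:
    for p in layer.parameters():
        p.requires_grad = True

# Train on the drawings dataset, updating only the unfrozen parameters.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
```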

Continue reading