About me
I am a research scientist at DeepMind working on AGI safety, investigating how advanced AI systems could acquire goals and how to ensure those goals are human-compatible.

I co-founded the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future.

My PhD in statistics and machine learning at Harvard focused on building interpretable models.

(The views expressed on this website are my own and do not represent DeepMind, Google, or the Future of Life Institute.)

In my spare time, I enjoy yoga, dancing, aerial silks, rock climbing, hiking, and meditating.

Find me on Twitter, Google Scholar, GitHub, LinkedIn, and the Alignment Forum.