About me


I am a senior research scientist at DeepMind focusing on AI alignment: ensuring that advanced AI systems do what we want them to do and don’t knowingly act against our interests. I have worked on goal misgeneralization, specification gaming, reward tampering, and measuring side effects.

I co-founded the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future.

I completed my PhD in statistics and machine learning at Harvard, where I focused on building interpretable models.

(The views expressed on this website are my own and do not represent DeepMind, Google, or the Future of Life Institute.)

In my spare time, I enjoy yoga, dancing, rock climbing, hiking, and meditating.

Find me on Twitter, Google Scholar, GitHub, LinkedIn, and the Alignment Forum.