Victoria Krakovna


Talks

Paradigms of AI alignment: components and enablers (Cohere video, OxAI video, blog post).

Some high-level thoughts on the DeepMind alignment team’s strategy. SERI MATS seminar, February 2023.

The Inside View podcast on Paradigms of AI alignment, AGI ruin arguments, and the Sharp Left Turn threat model. January 2023.

DeepMind podcast episode 4: AI, Robot. August 2019.

Specification, robustness and assurance problems in AI safety. IJCAI AISafety workshop, August 2019.

Discussion on the machine learning approach to AI safety (blog post). Effective Altruism Global London, October 2018.

Research problems in AI safety (video). Oxford AI Society lecture, October 2018.

FLI podcast: Navigating AI Safety – From Malicious Use to Accidents. March 2018.

Interpretability for AI safety (video). Interpretable ML symposium, NeurIPS, December 2017.

Careers in technical AI safety. Effective Altruism Global London, November 2017.

AI Safety: what, why and how? AI and Society conference, October 2017.

Highlights from Asilomar workshop on beneficial AI (video). Beneficial AI conference, January 2017.

AI safety: past, present, and future (video). Cambridge Conference on Catastrophic Risk, December 2016.

The story of FLI. Effective Altruism Global X Oxford, November 2016.

AI risk: why and how? Governance of Emerging Technologies Conference, May 2016.

Introduction to global catastrophic risks. Effective Altruism Global X Boston, April 2016.
