Category Archives: opinion

Retrospective on my posts on AI threat models

Last year, a major focus of my research was developing a better understanding of threat models for AI risk. This post looks back at some posts on threat models I (co)wrote in 2022, based on my reviews of those posts for the LessWrong 2022 Review.

I ran a survey on DeepMind alignment team opinions on the list of arguments for AGI ruin. I expect the overall agreement distribution probably still holds for the current GDM alignment team (or may have shifted somewhat in the direction of disagreement), though I haven’t rerun the survey so I don’t really know. Looking back at the “possible implications for our work” section, we are working on basically all of these things. 

Thoughts on some of the cruxes in the post based on developments in 2023:

  • Is global cooperation sufficiently difficult that AGI would need to deploy new powerful technology to make it work? – There has been a lot of progress on AGI governance and broad acknowledgment of the risks this year, so I feel somewhat more optimistic about global cooperation than a year ago.
  • Will we know how capable our models are? – The field has made some progress on designing concrete capability evaluations – how well they measure the properties we are interested in remains to be seen.
  • Will systems acquire the capability to be useful for alignment / cooperation before or after the capability to perform advanced deception? – At least so far, deception and manipulation capabilities seem to be lagging a bit behind usefulness for alignment (e.g. model-written evals / critiques, weak-to-strong generalization), but this could change in the future. 
  • Is consequentialism a powerful attractor? How hard will it be to avoid arbitrarily consequentialist systems? – Current SOTA LLMs seem surprisingly non-consequentialist for their level of capability. I still expect LLMs to be one of the safest paths to AGI in terms of avoiding arbitrarily consequentialist systems. 

In Clarifying AI X-risk, we presented a categorization of threat models and our consensus threat model, which posits some combination of specification gaming and goal misgeneralization leading to misaligned power-seeking, or “SG+GMG→MAPS”. I still endorse this categorization of threat models and the consensus threat model. I often refer people to this post and use the “SG + GMG → MAPS” framing in my alignment overview talks. I remain uncertain about the likelihood of the deceptive alignment part of the threat model (in particular the requisite level of goal-directedness) arising in the LLM paradigm, relative to other mechanisms for AI risk. 

Source: Clarifying AI X-risk (Kenton et al., 2022)

In terms of adding new threat models to the categorization, the main one that comes to mind is Deep Deceptiveness, which I would summarize as “non-deceptiveness is anti-natural / hard to disentangle from general capabilities”. I would probably put this under “SG → MAPS”, assuming an irreducible kind of specification gaming where it’s very difficult (or impossible) to distinguish deceptiveness from non-deceptiveness (including through feedback on the model’s reasoning process). Though it could also be GMG where the “non-deceptiveness” concept is incoherent and thus very difficult to generalize well. 

Refining the Sharp Left Turn was an attempt to understand this threat model better (or at all) and make it a bit more concrete. I still endorse the breakdown of claims in this post.

The post could be improved by explicitly relating the claims to the “consensus” threat model summarized in Clarifying AI X-risk. Overall, SLT seems like a special case of this threat model, which makes a subset of the SLT claims: 

  • It makes Claim 1 (capabilities generalize far) and Claim 3 (humans fail to intervene), but not Claims 1a/b (simultaneous / discontinuous generalization) or Claim 2 (alignment techniques stop working).
  • A weaker version of Claim 2 (alignment techniques failing to apply to more powerful systems in some way) is probably needed for deceptive alignment to arise, e.g. if our interpretability techniques fail to detect deceptive reasoning. However, I expect that most ways this could happen would not be due to the alignment techniques being fundamentally inadequate for the capability transition to more powerful systems (the strong version of Claim 2 used in SLT).

When discussing AI risks, talk about capabilities, not intelligence

Public discussions about catastrophic risks from general AI systems are often derailed by using the word “intelligence”. People often have different definitions of intelligence, or associate it with concepts like consciousness that are not relevant to AI risks, or dismiss the risks because intelligence is not well-defined. I would advocate for using the term “capabilities” or “competence” instead of “intelligence” when discussing catastrophic risks from AI, because this is what the concerns are really about. For example, instead of “superintelligence” we can refer to “super-competence” or “superhuman capabilities”. 

When we talk about general AI systems posing catastrophic risks, the concern is about losing control of highly capable AI systems. Definitions of general AI that are commonly used by people working to address these risks are about general capabilities of the AI systems: 

  • PASTA definition: “AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement”. 
  • Legg-Hutter definition: “An agent’s ability to achieve goals in a wide range of environments”.

We expect that AI systems that satisfy these definitions would have general capabilities including long-term planning, modeling the world, scientific research, manipulation, deception, etc. While these capabilities can be attained separately, we expect that their development is correlated, e.g. all of them likely increase with scale. 

There are various issues with the word “intelligence” that make it less suitable than “capabilities” for discussing risks from general AI systems:

  • Anthropomorphism: people often specifically associate “intelligence” with being human, being conscious, being alive, or having human-like emotions (none of which are relevant to or a prerequisite for risks posed by general AI systems). 
  • Associations with harmful beliefs and ideologies.
  • Moving goalposts: impressive achievements in AI are often dismissed as not indicating “true intelligence” or “real understanding” (e.g. see the “stochastic parrots” argument). Catastrophic risk concerns are based on what the AI system can do, not whether it has “real understanding” of language or the world.
  • Stronger associations with less risky capabilities: people are more likely to associate “intelligence” with being really good at math than being really good at politics, while the latter may be more representative of capabilities that make general AI systems pose a risk (e.g. manipulation and deception capabilities that could enable the system to overpower humans).
  • High level of abstraction: “intelligence” can take on the quality of a mythical ideal that can’t be met by an actual AI system, while “competence” is more conducive to being specific about the capability level in question.

It’s worth noting that I am not suggesting we always avoid the term “intelligence” when discussing advanced AI systems. Those who are trying to build advanced AI systems often want to capture different aspects of intelligence or endow the system with real understanding of the world, and it’s useful to investigate and discuss to what extent an AI system has (or could have) these properties. I am specifically advocating for avoiding the term “intelligence” when discussing catastrophic risks, because AI systems can pose these risks without possessing real understanding or some particular aspects of intelligence.

The basic argument for catastrophic risk from general AI has two parts: 1) the world is on track to develop generally capable AI systems in the next few decades, and 2) generally capable AI systems are likely to outcompete or overpower humans. Both of these arguments are easier to discuss and operationalize by referring to capabilities rather than intelligence: 

  • For #1, we can see a trend of increasingly general capabilities, e.g. from GPT-2 to GPT-4. Scaling laws for model performance as compute, data and model size increase suggest that this trend is likely to continue. Whether this trend reflects an increase in “intelligence” is an interesting question to investigate, but in the context of discussing risks, it can be a distraction from considering the implications of rapidly increasing capabilities of foundation models.
  • For #2, we can expect that more generally capable entities are likely to dominate over less generally capable ones. There are various historical examples of this, e.g. humans causing other species to go extinct. While there are various ways in which other animals may be more “intelligent” than humans, the deciding factor was that humans had more general capabilities like language and developing technology, which allowed them to control and shape the environment. The best threat models for catastrophic AI risk focus on how the general capabilities of advanced AI systems could allow them to overpower humans. 

As the capabilities of AI systems continue to advance, it’s important to be able to clearly consider their implications and possible risks. “Intelligence” is an ambiguous term with unhelpful connotations that often seems to derail these discussions. Next time you find yourself in a conversation about risks from general AI where people are talking past each other, consider replacing the word “intelligent” with “capable” – in my experience, this can make the discussion more clear, specific and productive.

(Thanks to Janos Kramar for helpful feedback on this post.)

Near-term motivation for AI alignment

AI alignment work is usually considered “longtermist”, i.e. aimed at preserving humanity’s long-term potential. This was the primary motivation for this work when the alignment field got started around 20 years ago, at a time when general AI seemed far away or impossible to most people in AI. However, given the current rate of progress towards advanced AI capabilities, there is an increasingly relevant near-term motivation to think about alignment, even if you mostly or only care about people alive today. This is most of my personal motivation for working on alignment.

I would not be surprised if general AI is reached in the next few decades, similarly to the latest AI expert survey’s median of 2059 for human-level AI (as estimated by authors at top ML conferences) and the Metaculus median of 2039. The Precipice gives a 10% probability of human extinction this century due to AI, i.e. within the lifetime of children alive today (and I would expect most of this probability to be concentrated in the next few decades, i.e. within our lifetimes). I used to refer to AI alignment work as “long-term AI safety” but this term seems misleading now, since alignment would be more accurately described as “medium-term safety”.

While AI alignment has historically been associated with longtermism, there is a downside of referring to longtermist arguments for alignment concerns. Sometimes people seem to conclude that they don’t need to worry about alignment if they don’t care much about the long-term future. For example, one commonly cited argument for trying to reduce existential risk from AI is that “even if it’s unlikely and far away, it’s so important that we should worry about it anyway”. People understandably interpret this as Pascal’s mugging and bounce off. This kind of argument for alignment concerns is not very relevant these days, because existential risk from AI is not that unlikely (10% this century is actually a lot, and may be a conservative estimate) and general AI is not that far away (an average of 36 years in the AI expert survey). 

Similarly, when considering specific paths to catastrophic risk from AI, a typical longtermist scenario involves an advanced AI system inventing molecular nanotechnology, which understandably sounds implausible to most people. I think a more likely path to catastrophic risk would involve general AI precipitating other catastrophic risks like pandemics (e.g. by doing biotechnology research) or taking over the global economy. If you’d like to learn about the most pertinent arguments for alignment concerns and plausible paths for AI to gain an advantage over humanity, check out Holden Karnofsky’s Most Important Century blog post series. 

In terms of my own motivation, honestly I don’t care that much about humanity colonizing the stars, reducing astronomical waste, or large numbers of future people existing. These outcomes would be very cool but optional in my view. Of course I would like humanity to have a good long-term future, but I mostly care about people alive today. My main motivation for working on alignment is that I would like my loved ones and everyone else on the planet to have a future.

Sometimes people worry about a tradeoff between alignment concerns and other aspects of AI safety, such as ethics and fairness, but I still think this tradeoff is pretty weak. There are also many common interests between alignment and ethics that would be great for these communities to coordinate on. This includes developing industry-wide safety standards and AI governance mechanisms, setting up model evaluations for safety, and slow and cautious deployment of advanced AI systems. Ultimately all these safety problems need to be solved to ensure that general AI systems have a positive impact on the world. I think the distribution of effort between AI capabilities and safety will need to shift more towards safety as more advanced AI systems are developed. 

In conclusion, you don’t have to be a longtermist to care about AI alignment. I think the possible impacts on people alive today are significant enough to think about this problem, and the next decade is going to be a critical time for steering advanced AI technology towards safety. If you’d like to contribute to alignment research, here is a list of research agendas in this space and a good course to get up to speed on the fundamentals of AI alignment (more resources here).

Refining the Sharp Left Turn threat model

(Coauthored with others on the alignment team and cross-posted from the alignment forum: part 1, part 2)

A sharp left turn (SLT) is a possible rapid increase in AI system capabilities (such as planning and world modeling) that could result in alignment methods no longer working. This post aims to make the sharp left turn scenario more concrete. We will discuss our understanding of the claims made in this threat model, propose some mechanisms for how a sharp left turn could occur, and consider how alignment techniques could manage a sharp left turn or fail to do so.

Claims of the threat model

What are the main claims of the “sharp left turn” threat model?

Claim 1. Capabilities will generalize far (i.e., to many domains)

There is an AI system that:

  • Performs well: it can accomplish impressive feats, or achieve high scores on valuable metrics.
  • Generalizes, i.e., performs well in new domains, which were not optimized for during training, with no domain-specific tuning.

Generalization is a key component of this threat model because we’re not going to directly train an AI system for the task of disempowering humanity, so for the system to be good at this task, the capabilities it develops during training need to be more broadly applicable. 

Some optional sub-claims can be made that increase the risk level of the threat model:

Claim 1a [Optional]: Capabilities (in different “domains”) will all generalize at the same time

Claim 1b [Optional]: Capabilities will generalize far in a discrete phase transition (rather than continuously) 

Claim 2. Alignment techniques that worked previously will fail during this transition

  • Qualitatively different alignment techniques are needed: the ways the existing techniques work apply to earlier versions of the AI technology, but not to the new version, because the new version gets its capabilities through something new or jumps to a qualitatively higher capability level (even if through “scaling” the same mechanisms).

Claim 3: Humans can’t intervene to prevent or align this transition 

  • Path 1: humans don’t notice because it’s too fast (or they aren’t paying attention)
  • Path 2: humans notice but are unable to make alignment progress in time
  • Some combination of these paths, as long as the end result is that the system is insufficiently aligned
Continue reading

Possible takeaways from the coronavirus pandemic for slow AI takeoff

(Cross-posted to LessWrong. Summarized in Alignment Newsletter #104. Thanks to Janos Kramar for helpful feedback on this post.)

As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI’s covid-19 page that includes expert interviews on the implications of the pandemic for other types of risks. 

A key element in AI risk scenarios is the speed of takeoff – whether advanced AI is developed gradually or suddenly. Paul Christiano’s post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: “There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.” It argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world’s institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome. 
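
Since the definition is quantitative, it can be checked mechanically against any hypothesized trajectory of world output. Below is a minimal Python sketch of one way to operationalize it; the function, the sampling step, and the example trajectories are my own illustration (not from Christiano’s post), and I’m assuming the reading that a complete 4-year doubling interval must end by the start of the first 1-year doubling interval.

```python
import numpy as np

def is_slow_takeoff(gdp, dt=0.25):
    """Check Christiano's slow-takeoff condition on a world-output series.

    gdp: world output sampled every `dt` years.
    Interpretation (an assumption on my part): slow takeoff means some
    complete 4-year doubling interval ends no later than the start of the
    first 1-year doubling interval.
    """
    steps_1y = int(round(1 / dt))
    steps_4y = int(round(4 / dt))

    # Start index of the first 1-year interval over which output doubles.
    first_fast = next(
        (i for i in range(len(gdp) - steps_1y) if gdp[i + steps_1y] >= 2 * gdp[i]),
        None,
    )
    if first_fast is None:
        return True  # output never doubles within a single year

    # Look for a 4-year doubling interval that finishes by that point.
    return any(
        gdp[i + steps_4y] >= 2 * gdp[i]
        for i in range(max(0, first_fast - steps_4y + 1))
    )

years = np.arange(0, 200, 0.25)
smooth = np.exp(0.03 * years + 0.002 * years**2)  # gradually accelerating growth
jump = np.exp(0.03 * years)
jump[years >= 30] *= np.exp(years[years >= 30] - 30)  # sudden growth spurt at year 30

print(is_slow_takeoff(smooth))  # True: a complete 4-year doubling comes first
print(is_slow_takeoff(jump))    # False: no 4-year doubling completes before the 1-year one
```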

Continue reading

Retrospective on the specification gaming examples list

My post about the specification gaming list was recently nominated for the LessWrong 2018 Review (sort of like a test of time award), which prompted me to write a retrospective (cross-posted here). 

I’ve been pleasantly surprised by how much this resource has caught on in terms of people using it and referring to it (definitely more than I expected when I made it). There were 30 examples on the list when it was posted in April 2018, and 20 new examples have been contributed through the form since then. I think the list has several properties that contributed to wide adoption: it’s fun, standardized, up-to-date, comprehensive, and collaborative.

Some of the appeal is that it’s fun to read about AI cheating at tasks in unexpected ways (I’ve seen a lot of people post on Twitter about their favorite examples from the list). The standardized spreadsheet format seems easier to refer to as well. I think the crowdsourcing aspect is also helpful – this helps keep it current and comprehensive, and people can feel some ownership of the list since they can personally contribute to it. My overall takeaway from this is that safety outreach tools are more likely to be impactful if they are fun and easy for people to engage with.

This list had a surprising amount of impact relative to how little work it took me to put it together and maintain it. The hard work of finding and summarizing the examples was done by the people putting together the lists that the master list draws on (Gwern, Lehman, Olsson, Irpan, and others), as well as the people who submit examples through the form. What I do is put them together in a common format and clarify and/or shorten some of the summaries. I also curate the examples to determine whether they fit the definition of specification gaming (as opposed to simply a surprising behavior or solution). Overall, I’ve probably spent around 10 hours so far on creating and maintaining the list, which is not very much. This makes me wonder if there is other low hanging fruit in the safety resources space that we haven’t picked yet. 

I have been using it both as an outreach tool and a research tool. On the outreach side, the resource has been helpful for making the argument that safety problems are hard and need general solutions, by making it salient just how many ways things could go wrong. When presented with an individual example of specification gaming, people often have a default reaction of “well, you can just close the loophole like this”. It’s easier to see that this approach does not scale when presented with 50 examples of gaming behaviors. Any given loophole can seem obvious in hindsight, but 50 loopholes are much less so. I’ve found this useful for communicating a sense of the difficulty and importance of Goodhart’s Law.

On the research side, the examples have been helpful for trying to clarify the distinction between reward gaming and tampering problems. Reward gaming happens when the reward function is designed incorrectly (so the agent is gaming the design specification), while reward tampering happens when the reward function is implemented incorrectly or embedded in the environment (and so can be thought of as gaming the implementation specification). The boat race example is reward gaming, since the score function was defined incorrectly, while the Qbert agent finding a bug that makes the platforms blink and gives the agent millions of points is reward tampering. We don’t currently have any real examples of the agent gaining control of the reward channel (probably because the action spaces of present-day agents are too limited), which seems qualitatively different from the numerous examples of agents exploiting implementation bugs. 
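
To make this distinction concrete, here is a toy sketch loosely modeled on those two examples; the GameState object and the function names are hypothetical illustrations of mine, not code from the list or the underlying environments.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    score_counter: int  # value stored by the game engine (can be affected by bugs)

def designed_reward(checkpoints_hit: int) -> float:
    # Design specification: the designer chose "checkpoints hit" as a proxy for
    # "finish the race". An agent that circles back to re-hit the same
    # checkpoints forever maximizes this proxy without ever finishing the lap,
    # i.e. it games the design specification (reward gaming).
    return float(checkpoints_hit)

def implemented_reward(state: GameState) -> float:
    # Implementation specification: the reward is read off a score counter that
    # lives inside the environment. If a bug lets the agent make that counter
    # jump by millions of points, exploiting the bug games the implementation
    # specification (reward tampering), even though the design ("reward equals
    # points earned") was fine.
    return float(state.score_counter)
```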

I’m curious what people find the list useful for – as a safety outreach tool, a research tool or intuition pump, or something else? I’d also be interested in suggestions for improving the list (formatting, categorizing, etc). Thanks to everyone who has contributed to the resource so far!

Classifying specification problems as variants of Goodhart’s Law

(Coauthored with Ramana Kumar and cross-posted from the Alignment Forum. Summarized in Alignment Newsletter #76.)

There are a few different classifications of safety problems, including the Specification, Robustness and Assurance (SRA) taxonomy and the Goodhart’s Law taxonomy. In SRA, the specification category is about defining the purpose of the system, i.e. specifying its incentives.  Since incentive problems can be seen as manifestations of Goodhart’s Law, we explore how the specification category of the SRA taxonomy maps to the Goodhart taxonomy. The mapping is an attempt to integrate different breakdowns of the safety problem space into a coherent whole. We hope that a consistent classification of current safety problems will help develop solutions that are effective for entire classes of problems, including future problems that have not yet been identified.

The SRA taxonomy defines three different types of specifications of the agent’s objective: ideal (a perfect description of the wishes of the human designer), design (the stated objective of the agent) and revealed (the objective recovered from the agent’s behavior). It then divides specification problems into design problems (e.g. side effects) that correspond to a difference between the ideal and design specifications, and emergent problems (e.g. tampering) that correspond to a difference between the design and revealed specifications.

In the Goodhart taxonomy, there is a variable U* representing the true objective, and a variable U representing the proxy for the objective (e.g. a reward function). The taxonomy identifies four types of Goodhart effects: regressional (maximizing U also selects for the difference between U and U*), extremal (maximizing U takes the agent outside the region where U and U* are correlated), causal (the agent intervenes to maximize U in a way that does not affect U*), and adversarial (the agent has a different goal W and exploits the proxy U to maximize W).
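
To make the regressional case concrete, here is a small simulation of my own (not from the taxonomy post) in which the proxy U is the true objective U* plus independent noise, and picking the option with the highest proxy value systematically selects for the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u_star = rng.normal(size=n)  # true objective U* for each candidate option
noise = rng.normal(size=n)   # error in the proxy
u = u_star + noise           # proxy U, e.g. a reward function

best = int(np.argmax(u))     # optimize the proxy hard
print("proxy value of the chosen option:", u[best])
print("true value of the chosen option: ", u_star[best])
# Regressional Goodhart: the option with the highest proxy value typically has
# a true value well below its proxy value, because maximizing U also selects
# for the difference between U and U* (here, the noise term).
```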

We think there is a correspondence between these taxonomies: design problems are regressional and extremal Goodhart effects, while emergent problems are causal Goodhart effects. The rest of this post will explain and refine this correspondence.

Continue reading

Discussion on the machine learning approach to AI safety

At this year’s EA Global London conference, Jan Leike and I ran a discussion session on the machine learning approach to AI safety. We explored some of the assumptions and considerations that come up as we reflect on different research agendas. Slides for the discussion can be found here.

The discussion focused on two topics. The first topic examined assumptions made by the ML safety approach as a whole, based on the blog post Conceptual issues in AI safety: the paradigmatic gap. The second topic zoomed into specification problems, which both of us work on, and compared our approaches to these problems.

Continue reading

Is there a tradeoff between immediate and longer-term AI safety efforts?

Something I often hear in the machine learning community and media articles is “Worries about superintelligence are a distraction from the *real* problem X that we are facing today with AI” (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?

We can make this question more specific: what resources might these two types of efforts be competing for?

Continue reading

Portfolio approach to AI safety research

Long-term AI safety is an inherently speculative research area, aiming to ensure safety of advanced future systems despite uncertainty about their design or algorithms or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.

In this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. (In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.)

Continue reading