Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Mon, 11 Nov 2024
Dario Amodei is the CEO of Anthropic, the company that created Claude. Amanda Askell is an AI researcher working on Claude's character and personality. Chris Olah is an AI researcher working on mechanistic interpretability. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep452-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/dario-amodei-transcript
CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
Claude: https://claude.ai
Anthropic's X: https://x.com/AnthropicAI
Anthropic's Website: https://anthropic.com
Dario's X: https://x.com/DarioAmodei
Dario's Website: https://darioamodei.com
Machines of Loving Grace (Essay): https://darioamodei.com/machines-of-loving-grace
Chris's X: https://x.com/ch402
Chris's Blog: https://colah.github.io
Amanda's X: https://x.com/AmandaAskell
Amanda's Website: https://askell.io
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Encord: AI tooling for annotation & data management. Go to https://encord.com/lex
Notion: Note-taking and team collaboration. Go to https://notion.com/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
OUTLINE:
(00:00) - Introduction
(10:19) - Scaling laws
(19:25) - Limits of LLM scaling
(27:51) - Competition with OpenAI, Google, xAI, Meta
(33:14) - Claude
(36:50) - Opus 3.5
(41:36) - Sonnet 3.5
(44:56) - Claude 4.0
(49:07) - Criticism of Claude
(1:01:54) - AI Safety Levels
(1:12:42) - ASL-3 and ASL-4
(1:16:46) - Computer use
(1:26:41) - Government regulation of AI
(1:45:30) - Hiring a great team
(1:54:19) - Post-training
(1:59:45) - Constitutional AI
(2:05:11) - Machines of Loving Grace
(2:24:17) - AGI timeline
(2:36:52) - Programming
(2:43:52) - Meaning of life
(2:49:58) - Amanda Askell - Philosophy
(2:52:26) - Programming advice for non-technical people
(2:56:15) - Talking to Claude
(3:12:47) - Prompt engineering
(3:21:21) - Post-training
(3:26:00) - Constitutional AI
(3:30:53) - System prompts
(3:37:00) - Is Claude getting dumber?
(3:49:02) - Character training
(3:50:01) - Nature of truth
(3:54:38) - Optimal rate of failure
(4:01:49) - AI consciousness
(4:16:20) - AGI
(4:24:58) - Chris Olah - Mechanistic Interpretability
(4:29:49) - Features, Circuits, Universality
(4:47:23) - Superposition
(4:58:22) - Monosemanticity
(5:05:14) - Scaling Monosemanticity
(5:14:02) - Macroscopic behavior of neural networks
(5:18:56) - Beauty of neural networks
The following is a conversation with Dario Amodei, CEO of Anthropic, the company that created Claude, which is currently and often at the top of most LLM benchmark leaderboards. On top of that, Dario and the Anthropic team have been outspoken advocates for taking the topic of AI safety very seriously, and they have continued to publish a lot of fascinating AI research on this and other topics.
I'm also joined afterwards by two other brilliant people from Anthropic. First, Amanda Askell, who is a researcher working on alignment and fine-tuning of Claude, including the design of Claude's character and personality. A few folks told me she has probably talked with Claude more than any human at Anthropic.
So she was definitely a fascinating person to talk to about prompt engineering and practical advice on how to get the best out of Claude. After that, Chris Olah stopped by for a chat.
He's one of the pioneers of the field of mechanistic interpretability, which is an exciting set of efforts that aims to reverse engineer neural networks to figure out what's going on inside, inferring behaviors from neural activation patterns inside the network. This is a very promising approach for keeping future super-intelligent AI systems safe.
For example, by detecting from the activations when the model is trying to deceive the human it is talking to. And now a quick few second mention of each sponsor. Check them out in the description. It's the best way to support this podcast.
We got Encord for machine learning, Notion for machine-learning-powered note-taking and team collaboration, Shopify for selling stuff online, BetterHelp for your mind, and LMNT for your health. Choose wisely, my friends. Also, if you want to work with our amazing team, or just want to get in touch with me for whatever reason, go to lexfridman.com slash contact. And now onto the full ad reads.
I try to make these interesting, but if you skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Encord, a platform that provides data-focused AI tooling for data annotation, curation, and management, and for model evaluation.
We talk a little bit about public benchmarks in this podcast, I think mostly focused on software engineering, SWE-bench. There's a lot of exciting developments around how you build a benchmark that you can't cheat on.
But if it's not public, then you can use it the right way, which is to evaluate how well the annotation, the data curation, the training, the pre-training, the post-training, all of that, is working. Anyway, a lot of the fascinating conversation with the Anthropic folks was focused on the language side.
And there's a lot of really incredible work that Encord is doing about annotating and organizing visual data. And they make it accessible for searching, for visualizing, for granular curation, all that kind of stuff. So I'm a big fan of data. It continues to be the most important thing.
The nature of data, what it means to be good data, whether it's human-generated or synthetic data, keeps changing, but it continues to be the most important component of what makes for a generally intelligent system, I think, and also for specialized intelligent systems as well. Go try out Encord to curate, annotate, and manage your AI data at encord.com slash lex.
That's encord.com slash lex. This episode is brought to you by the thing that keeps getting better and better and better, Notion. It used to be an awesome note-taking tool. Then it started being a great team collaboration tool. So note-taking for many people and management of all kinds of other project stuff across large teams.
Now, more and more and more, it's becoming an AI-superpowered note-taking and team collaboration tool. Really integrating AI probably better than any note-taking tool I've used, not even close, honestly. Notion is truly incredible. I haven't gotten a chance to use Notion on a large team. I imagine that's where it really begins to shine.
But on a small team, it's just really, really, really amazing.
the integration of the AI assistant inside a particular file for summarization, for generation, all that kind of stuff, but also the integration of an AI assistant to be able to ask questions about, you know, across docs, across wikis, across projects, across multiple files, to be able to summarize everything, maybe investigate project progress based on all the different stuff going on in different files.
So really, really nice integration of AI. Try Notion AI for free when you go to notion.com slash lex. That's all lowercase. Notion.com slash lex to try the power of Notion AI today. This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great looking online store. I keep wanting to mention Shopify's CEO, Toby, who's brilliant.
And I'm not sure why he hasn't been on the podcast yet. I need to figure that out. Every time I'm in San Francisco, I want to talk to him. So he's brilliant on all kinds of domains, not just entrepreneurship or tech, just philosophy and life, just his way of being. Plus an accent adds to the flavor profile of the conversation. I've been watching a cooking show for a little bit.
Really, I think my first cooking show, it's called Class Wars. It's a South Korean show where chefs with Michelin stars compete against chefs without Michelin stars. And there's something about one of the judges that just, just the charisma and the way that he describes cooking. Every single detail of flavor, of texture, of what makes for a good dish. Yeah, so it's contagious.
I don't really even care. I'm not a foodie. I don't care about food in that way. But he makes me want to care. Anyway, that's why I use the term flavor profile, referring to Toby, which has nothing to do with what I should probably be saying. And that is that you should use Shopify. I've used Shopify. It's super easy. Create a store, lexfridman.com slash store, to sell a few shirts.
Anyway, sign up for a $1 per month trial period at shopify.com slash lex. That's all lowercase. Go to shopify.com slash lex to take your business to the next level today. This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. They figure out what you need and match you with a licensed therapist in under 48 hours. It's for individuals. It's for couples.
It's easy, discreet, affordable, available worldwide. I saw a few books by a Jungian psychologist, and I was like in a delirious state of sleepiness, and I forgot to write his name down, but I need to do some research. I need to go back.
I need to go back to my younger self when I dreamed of being a psychiatrist and reading Sigmund Freud and reading Carl Jung, reading it the way young kids maybe read comic books. They were my superheroes of sorts. Camus as well, Kafka, Nietzsche, Hesse, Dostoevsky, the sort of 19th and 20th century literary philosophers of sorts.
Anyway, I need to go back to that, maybe have a few conversations about Freud. Anyway, those folks, even if in part wrong, were true revolutionaries, truly brave to explore the mind in the way they did. They showed the power of talking and delving deep into the human mind, into the shadow, through the use of words. So highly recommend. And BetterHelp is a super easy way to start.
Check them out at betterhelp.com slash lex and save on your first month. That's betterhelp.com slash lex. This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte mix that I'm going to take a sip of now. It's been so long that I've been drinking LMNT that I don't even remember life before LMNT.
I guess I used to take salt pills because it's such a big component of my exercise routine to make sure I get enough water and get enough electrolytes. Yeah, so combined with fasting that I've explored a lot and continue to do to this day and combined with low carb diets that I'm a little bit off the wagon on that one.
I'm consuming probably like 60, 70, 80, maybe 100 some days grams of carbohydrates. Not good, not good. My happiest is when I'm below 20 grams or 10 grams of carbohydrates. I'm not like measuring it out. I'm just using numbers to sound smart. But I don't take dieting seriously, but I do take the signals that my body sends quite seriously.
So without question, making sure I get enough magnesium and sodium and get enough water is priceless. A lot of times when I have headaches or just felt off or whatever, they're fixed near immediately, sometimes after 30 minutes, when I just drink water with electrolytes. It's beautiful and it's delicious. Watermelon salt, the greatest flavor of all time.
Get a sample pack for free with any purchase. Try it at drinkLMNT.com slash lex. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Dario Amodei. Let's start with the big idea of scaling laws and the scaling hypothesis. What is it? What is its history? And where do we stand today?
So I can only describe it as it relates to kind of my own experience, but I've been in the AI field for about 10 years. And it was something I noticed very early on. So I first joined the AI world when I was working at Baidu with Andrew Ng in late 2014, which is almost exactly 10 years ago now. And the first thing we worked on was speech recognition systems.
And in those days, I think deep learning was a new thing. It had made lots of progress, but everyone was always saying, we don't have the algorithms we need to succeed. You know, we're not, we're only matching a tiny, tiny fraction. There's so much we need to kind of discover algorithmically. We haven't found the picture of how to match the human brain.
Uh, and when, you know, in some ways it was fortunate. I was kind of, you know, you can have almost beginner's luck, right? I was like a newcomer to the field. And, you know, I looked at the neural net that we were using for speech, the recurrent neural networks. And I said, I don't know, what if you make them bigger and give them more layers and
And what if you scale up the data along with this, right? I just saw these as like independent dials that you could turn. And I noticed that the model started to do better and better as you gave them more data, as you made the models larger, as you trained them for longer.
And I didn't measure things precisely in those days, but along with colleagues, we very much got the informal sense that the more data and the more compute and the more training you put into these models, the better they perform. And so initially my thinking was, hey, maybe that is just true for speech recognition systems, right? Maybe that's just one particular quirk, one particular area.
I think it wasn't until 2017, when I first saw the results from GPT-1, that it clicked for me that language is probably the area in which we can do this. We can get trillions of words of language data. We can train on them. And the models we were training in those days were tiny.
You could train them on one to eight GPUs, whereas, you know, now we train jobs on tens of thousands, soon going to hundreds of thousands of GPUs. And so when I saw those two things together, and, you know, there were a few people like Ilya Sutskever, who you've interviewed, who had somewhat similar views, right?
He might have been the first one, although I think a few people came to similar views around the same time, right? There was, you know, Rich Sutton's bitter lesson. There was Gwern, who wrote about the scaling hypothesis. But I think somewhere between 2014 and 2017 was when it really clicked for me, when I really got conviction that, hey, we're going to be able to do these
incredibly wide cognitive tasks if we just scale up the models. And at every stage of scaling, there are always arguments. And when I first heard them, honestly, I thought, probably I'm the one who's wrong. And all these experts in the field are right. They know the situation better than I do. There's the Chomsky argument about you can get syntactics, but you can't get semantics.
There was this idea, oh, you can make a sentence make sense, but you can't make a paragraph make sense. You know, we're going to run out of data or the data isn't high quality enough or models can't reason. And each time, every time, we manage to either find a way around or scaling just is the way around. Sometimes it's one, sometimes it's the other.
And so I'm now at this point, I still think, you know, it's always quite uncertain. We have nothing but inductive inference to tell us that the next few years are going to be like the last 10 years. But I've seen the movie enough times.
I've seen the story happen for enough times to really believe that probably the scaling is going to continue and that there's some magic to it that we haven't really explained on a theoretical basis yet.
And of course, the scaling here is bigger networks, bigger data, bigger compute.
Yes. All of those. In particular, linear scaling up of bigger networks, bigger training times, and more data. So all of these things, almost like a chemical reaction. You have three ingredients in the chemical reaction, and you need to linearly scale up the three ingredients. If you scale up one, not the others, you run out of the other reagents and the reaction stops.
But if you scale up everything in series, then the reaction can proceed.
And of course, now that you have this kind of empirical science slash art, you can apply it to other... more nuanced things like scaling laws applied to interpretability or scaling laws applied to post-training or just seeing how does this thing scale. But the big scaling law, I guess the underlying scaling hypothesis has to do with big networks, big data leads to intelligence.
Yeah, we've documented scaling laws in lots of domains other than language, right? So initially, the paper we did that first showed it was in early 2020, where we first showed it for language. There was then some work late in 2020 where we showed the same thing for other modalities like images, video, text to image, image to text, math, that they all had the same pattern. And you're right.
Now, there are other stages like post-training or there are new types of reasoning models. And in all of those cases that we've measured, we see similar types of scaling laws.
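As an editorial aside for readers: the kind of scaling law being described here is usually summarized as a power-law fit of loss against parameters and data, which you then extrapolate a few points further along the curve. The sketch below is purely illustrative, with made-up constants and synthetic data, not Anthropic's actual numbers or fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chinchilla-style scaling law form (illustrative only, not Anthropic's actual fit):
#   L(N, D) = E + A / N**alpha + B / D**beta
def scaling_law(X, E, A, alpha, B, beta):
    n, d = X
    return E + A / n**alpha + B / d**beta

rng = np.random.default_rng(0)
N = np.logspace(8, 11, 12)                      # hypothetical model sizes (parameters)
D = 20 * N                                      # data scaled up "in series" with model size
true_params = (1.7, 406.0, 0.34, 410.0, 0.28)   # made-up constants for the sketch
loss = scaling_law((N, D), *true_params) + rng.normal(0, 0.01, N.size)

fit, _ = curve_fit(scaling_law, (N, D), loss,
                   p0=(2.0, 300.0, 0.3, 300.0, 0.3), maxfev=50000)
print("fitted exponents alpha, beta:", round(fit[2], 3), round(fit[4], 3))

# Extrapolating the next point on the curve, the way the conversation describes doing informally:
print("predicted loss at 1e12 params:", round(float(scaling_law((1e12, 2e13), *fit)), 3))
```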
A bit of a philosophical question, but what's your intuition about why bigger is better in terms of network size and data size? Why does it lead to more intelligent models?
So in my previous career as a biophysicist, so I did physics undergrad and then biophysics in grad school. So I think back to what I know as a physicist, which is actually much less than what some of my colleagues at Anthropic have in terms of expertise in physics.
there's this concept called 1 over f noise and 1 over x distributions, where often, you know, just like if you add up a bunch of natural processes, you get a Gaussian. If you add up a bunch of kind of differently distributed natural processes, if you, like, take a probe and hook it up to a resistor, the distribution of the thermal noise in the resistor goes as 1 over the frequency.
It's some kind of natural convergent distribution. And I think what it amounts to is that if you look at a lot of things that are produced by some natural process that has a lot of different scales, right? Not a Gaussian, which is kind of narrowly distributed.
But, you know, if I look at kind of like large and small fluctuations that lead to electrical noise, they have this decaying 1 over x distribution. And so now I think of, like, patterns in the physical world, right? Or in language. If I think about the patterns in language, there are some really simple patterns. Some words are much more common than others, like "the."
Then there's basic noun-verb structure. Then there's the fact that nouns and verbs have to agree, they have to coordinate. And there's the higher level sentence structure. Then there's the thematic structure of paragraphs. And so the fact that there's this regressing structure, you can imagine that as you make the networks larger, first they capture the really simple correlations, the really simple patterns, and there's this long tail of other patterns. And if that long tail of other patterns is really smooth, like it is with the 1 over f noise in physical processes like resistors, then you can imagine as you make the network larger, it's kind of capturing more and more of that distribution.
And so that smoothness gets reflected in how well the models are at predicting and how well they perform. Language is an evolved process, right? We've developed language. We have common words and less common words. We have common expressions and less common expressions. We have ideas, cliches that are expressed frequently, and we have novel ideas.
And that process has developed, has evolved with humans over millions of years. And so the guess, and this is pure speculation, would be that there's some kind of long tail distribution of the distribution of these ideas.
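A concrete way to see the long tail being gestured at here: word frequencies in any sizable text roughly follow a power law (Zipf's law), with a few very common words and a long, smooth tail of rare ones. This is a minimal sketch; the corpus path is just a placeholder for any plain-text file.

```python
import re
from collections import Counter

# Count word frequencies in any plain-text corpus (the filename is a placeholder).
with open("sample_corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words).most_common()

# Under Zipf's law, frequency falls off roughly as 1 / rank: a handful of words
# like "the" dominate, followed by a long tail of rarer and rarer words --
# the same shape being appealed to for patterns in language generally.
for rank in (1, 10, 100, 1000):
    if rank <= len(counts):
        word, freq = counts[rank - 1]
        print(f"rank {rank:>4}: {word!r:>15} appears {freq} times")
```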
So there's the long tail, but also there's the height of the hierarchy of concepts that you're building up. So the bigger the network, presumably you have a higher capacity to... Exactly.
If you have a small network, you only get the common stuff, right? If I take a tiny neural network, it's very good at understanding that, you know, a sentence has to have, you know, verb, adjective, noun, right? But it's terrible at deciding what those verb, adjective, and noun should be and whether they should make sense. If I make it just a little bigger, it gets good at that.
Then suddenly it's good at the sentences, but it's not good at the paragraphs. And so these rarer and more complex patterns get picked up as I add more capacity to the network.
Well, the natural question then is what's the ceiling of this? Yeah. How complicated and complex is the real world? How much stuff is there to learn?
I don't think any of us knows the answer to that question. My strong instinct would be that there's no ceiling below the level of humans, right? We humans are able to understand these various patterns. And so that makes me think that if we continue to scale up these models to kind of develop new methods for training them and scaling them up,
that will at least get to the level that we've gotten to with humans. There's then a question of, you know, how much more is it possible to understand than humans do? How much is it possible to be smarter and more perceptive than humans? I would guess the answer has got to be domain dependent.
If I look at an area like biology, and I wrote this essay, Machines of Loving Grace, it seems to me that humans are struggling to understand the complexity of biology, right? If you go to Stanford or to Harvard or to Berkeley, you have whole departments of, you know, folks trying to study, you know, like the immune system or metabolic pathways, and each person understands only a tiny part of it, specializes, and they're struggling to combine their knowledge with that of other humans. And so I have an instinct that there's a lot of room at the top for AI to get smarter.
If I think of something like materials in the physical world, or, you know, like addressing conflicts between humans or something like that, I mean, it may be that some of these problems are not intractable, but much harder. And it may be that there's only so well you can do with some of these things, right?
Just like with speech recognition, there's only so clear I can hear your speech. So I think in some areas, there may be ceilings that are very close to what humans have done. In other areas, those ceilings may be very far away. And I think we'll only find out when we build these systems. It's very hard to know in advance. We can speculate, but we can't be sure.
And in some domains, the ceiling might have to do with human bureaucracies and things like this, as you write about. Yes. So humans fundamentally have to be part of the loop. That's the cause of the ceiling, not maybe the limits of the intelligence.
Yeah. I think in many cases, you know, in theory, technology could change very fast. For example, all the things that we might invent with respect to biology. But remember, there's a clinical trial system that we have to go through to actually administer these things to humans.
I think that's a mixture of things that are unnecessary and bureaucratic and things that kind of protect the integrity of society. And the whole challenge is that it's hard to tell. It's hard to tell what's going on. It's hard to tell which is which, right? My view is definitely... I think in terms of drug development, my view is that we're too slow and we're too conservative.
But certainly, if you get these things wrong, it's possible to risk people's lives by being too reckless. And so at least some of these human institutions are, in fact, protecting people. So it's all about finding the balance. I strongly suspect that balance is kind of more on the side of pushing to make things happen faster, but there is a balance. If we do hit a limit—
If we do hit a slowdown in the scaling laws, what do you think would be the reason? Is it compute limited, data limited? Is it something else?
Idea limited? So a few things. Now we're talking about hitting the limit before we get to the level of humans and the skill of humans. So I think one that's popular today and I think could be a limit that we run into, like most of the limits, I would bet against it, but it's definitely possible, is we simply run out of data. There's only so much data on the internet.
And there's issues with the quality of the data, right? You can get... hundreds of trillions of words on the internet, but a lot of it is repetitive or it's search engine optimization drivel, or maybe in the future, it'll even be text generated by AIs itself. And so I think there are limits to what can be produced in this way.
That said, we, and I would guess other companies, are working on ways to make synthetic data, where you can use the model to generate more data of the type that you have already, or even generate data from scratch.
If you think about what was done with DeepMind's AlphaGo Zero, they managed to get a bot all the way from no ability to play Go whatsoever to above human level just by playing against itself. There was no example data from humans required in the AlphaGo Zero version of it.
The other direction, of course, is these reasoning models that do chain of thought and stop to think and reflect on their own thinking. In a way, that's another kind of synthetic data coupled with reinforcement learning. So my guess is with one of those methods, we'll get around the data limitation or there may be other sources of data that are available.
We could just observe that even if there's no problem with data, as we start to scale models up, they just stop getting better. It seemed to be a reliable observation that they've gotten better. That could just stop at some point for a reason we don't understand. The answer could be that we need to invent some new architecture.
There have been problems in the past with, say, numerical stability of models, where it looked like things were leveling off, but actually when we found the right unblocker, they didn't end up doing so. So perhaps there's some new optimization method or some new technique we need to unblock things.
I've seen no evidence of that so far, but if things were to slow down, that perhaps could be one reason.
What about the limits of compute, meaning the expensive nature of building bigger and bigger data centers?
So right now, I think most of the frontier model companies, I would guess, are operating at roughly, you know, $1 billion scale, plus or minus a factor of three, right? Those are the models that exist now or are being trained now.
I think next year we're going to go to a few billion, and then in 2026, we may go to, you know, above 10 billion, and probably by 2027, there are ambitions to build $100 billion clusters. And I think all of that actually will happen. There's a lot of determination to build the compute to do it within this country. And I would guess that it actually does happen.
Now, if we get to 100 billion, that's still not enough compute. That's still not enough scale. Then either we need even more scale or we need to develop some way of doing it more efficiently, of shifting the curve.
I think between all of these, one of the reasons I'm bullish about powerful AI happening so fast is just that if you extrapolate the next few points on the curve, we're very quickly getting towards human level ability, right? Some of the new models that we developed, some reasoning models that have come from other companies,
They're starting to get to what I would call the PhD or professional level, right? If you look at their coding ability, the latest model we released, Sonnet 3.5, the new or updated version, it gets something like 50% on SWE-bench. And SWE-bench is an example of a bunch of professional, real-world software engineering tasks. At the beginning of the year, I think the state of the art was 3% or 4%.
So in 10 months, we've gone from 3% to 50% on this task. And I think in another year, we'll probably be at 90%. I mean, I don't know, but might even be less than that. We've seen similar things in graduate-level math, physics, and biology from models like OpenAI's o1.
So if we just continue to extrapolate this in terms of skill that we have, I think if we extrapolate the straight curve, within a few years, we will get to these models being above the highest professional level in terms of humans. Now, will that curve continue? You've pointed to and I've pointed to a lot of reasons why, you know, possible reasons why that might not happen.
But if the extrapolation curve continues, that is the trajectory we're on.
So Anthropic has several competitors. It'd be interesting to get your sort of view of it all. OpenAI, Google, XAI, Meta. What does it take to win in the broad sense of win in the space?
Yeah, so I want to separate out a couple things, right? So, you know, Anthropic's mission is to kind of try to make this all go well, right? And, you know, we have a theory of change called race to the top, right? Race to the top is about trying to push the other players to do the right thing by setting an example. It's not about being the good guy.
It's about setting things up so that all of us can be the good guy. I'll give a few examples of this. Early in the history of Anthropic, one of our co-founders, Chris Olah, who I believe you're interviewing soon, he's the co-founder of the field of mechanistic interpretability, which is an attempt to understand what's going on inside AI models.
So we had him and one of our early teams focus on this area of interpretability, which we think is good for making models safe and transparent. For three or four years, that had no commercial application whatsoever. It still doesn't today. We're doing some early betas with it, and probably it will eventually. But this is a very, very long research bet and one in which we've
built in public and shared our results publicly. And we did this because we think it's a way to make models safer. An interesting thing is that as we've done this, other companies have started doing it as well. In some cases, because they've been inspired by it. In some cases, because they're worried that,
You know, if other companies doing this look more responsible, they want to look more responsible, too. No one wants to look like the irresponsible actor. And so they adopt this as well. When folks come to Anthropic, interpretability is often a draw, and I tell them, the other places you didn't go, tell them why you came here.
You see soon that there's interpretability teams elsewhere as well. And in a way, that takes away our competitive advantage, because it's like, oh, now others are doing it as well, but it's good for the broader system. And so we have to invent some new thing that we're doing that others aren't doing yet, and the hope is to basically bid up the importance of doing the right thing.
And it's not about us in particular, right? It's not about having one particular good guy. Other companies can do this as well. If they join the race to do this, that's the best news ever, right? It's about kind of shaping the incentives to point upward instead of shaping the incentives to point downward.
And we should say this example of the field of mechanistic interpretability is just a rigorous, non-hand wavy way of doing AI safety. Yes. Or it's tending that way.
Trying to. I mean, I think we're still early in terms of our ability to see things, but I've been surprised at how much we've been able to look inside these systems and understand what we see, right? Unlike with the scaling laws, where it feels like there's some law that's driving these models to perform better,
On the inside, the models aren't, you know, there's no reason why they should be designed for us to understand them, right? They're designed to operate. They're designed to work, just like the human brain or human biochemistry. They're not designed for a human to open up the hatch, look inside and understand them.
But we have found, and, you know, you can talk in much more detail about this to Chris, that when we open them up, when we do look inside them, we find things that are surprisingly interesting.
And as a side effect, you also get to see the beauty of these models. You get to explore the sort of the beautiful nature of large neural networks through the mech interp kind of methodology.
I'm amazed at how clean it's been. I'm amazed at things like induction heads. I'm amazed at things like, you know, that we can use sparse autoencoders to find these directions within the networks, and that the directions correspond to these very clear concepts, right? We demonstrated this a bit with Golden Gate Claude.
So this was an experiment where we found a direction inside one of the neural network's layers that corresponded to the Golden Gate Bridge. And we just turned that way up. And so we released this model as a demo. It was kind of half a joke for a couple of days, but it was illustrative of the method we developed.
And you could take the Golden Gate, you could take the model, you could ask it about anything, you know, it would be like, you could say, how was your day? And anything you asked, because this feature was activated, it would connect to the Golden Gate Bridge. So it would say, you know, I'm feeling relaxed and expansive, much like the arches of the Golden Gate Bridge, or, you know.
It would masterfully change topic to the Golden Gate Bridge and integrate it. There was also a sadness to it, to the focus it had on the Golden Gate Bridge. I think people quickly fell in love with it. I think. So people already miss it because it was taken down, I think, after a day.
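For readers curious what "finding a direction and turning it way up" can look like mechanically, here is a minimal, hypothetical sketch of activation steering using a forward hook. In the real Golden Gate Claude work the direction came from a sparse autoencoder trained on Claude's activations; the model, layer, and direction below are placeholders, not Anthropic's internals.

```python
import torch

def add_steering_hook(model, layer, feature_direction, scale=10.0):
    """Add `scale` times a normalized feature direction to a layer's output.

    `feature_direction` stands in for a concept direction found, e.g., by a
    sparse autoencoder; here it is just an arbitrary vector for illustration.
    """
    direction = feature_direction / feature_direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * direction.to(device=hidden.device,
                                                dtype=hidden.dtype)  # "turn the feature way up"
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    return layer.register_forward_hook(hook)

# Usage sketch, assuming a Hugging Face-style causal LM is already loaded as `model`:
# handle = add_steering_hook(model, model.transformer.h[20],
#                            torch.randn(model.config.hidden_size))
# ...generate text; responses now drift toward the amplified concept...
# handle.remove()
```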
Somehow these interventions on the model, where you kind of adjust its behavior, somehow emotionally made it seem more human than any other version of the model.
Strong personality, strong personality. It has these kind of obsessive interests. You know, we can all think of someone who's obsessed with something, so it does make it feel somehow a bit more human.
Let's talk about the present. Let's talk about Claude. So this year, a lot has happened. In March, Claude 3 Opus, Sonnet, and Haiku were released. Then Claude 3.5 Sonnet in July, with an updated version just now released. And then also Claude 3.5 Haiku was released. Okay. Can you explain the difference between Opus, Sonnet, and Haiku, and how we should think about the different versions?
Yeah. So let's go back to March when we first released these three models. So our thinking was different companies produce kind of large and small models, better and worse models.
We felt that there was demand both for a really powerful model, you know, that might be a little bit slower that you'd have to pay more for, and also for fast, cheap models that are as smart as they can be for how fast and cheap, right?
Whenever you want to do some kind of like, you know, difficult analysis, like if I, you know, I want to write code, for instance, or, you know, I want to brainstorm ideas or I want to do creative writing, I want the really powerful model. But then there's a lot of practical applications in a business sense where it's like, I'm interacting with a website.
I'm doing my taxes or I'm talking to a legal advisor and I want to analyze a contract. Or we have plenty of companies that are just like, I want to do autocomplete on my IDE or something. And for all of those things, you want to act fast and you want to use the model very broadly. So we wanted to serve... that whole spectrum of needs.
So we ended up with this, you know, this kind of poetry theme. What's a really short poem? It's a haiku. And so Haiku is the small, fast, cheap model that was, at the time, really surprisingly intelligent for how fast and cheap it was. Sonnet is a medium-sized poem, right? A couple paragraphs. And so Sonnet was the middle model. It is smarter, but also a little bit slower, a little bit more expensive. And Opus, like a magnum opus is a large work, Opus was the largest, smartest model at the time. So that was the original kind of thinking behind it. Yeah. And our thinking then was, well, each new generation of models should shift that trade-off curve.
So when we released Sonnet 3.5, it has roughly the same, you know, cost and speed as the Sonnet 3 model, but it increased its intelligence to the point where it was smarter than the original Opus 3 model, especially for code, but also just in general.
And so now, you know, we've shown results for a Haiku 3.5 and I believe Haiku 3.5, the smallest new model is about as good as Opus 3, the largest old model. So basically, the aim here is to shift the curve, and then at some point, there's going to be an Opus 3.5. Now, every new generation of models has its own thing. They use new data.
Their personality changes in ways that we kind of try to steer but are not fully able to steer. And so there's never quite that exact equivalence where the only thing you're changing is intelligence. We always try and improve other things, and some things change without us knowing or measuring. So it's very much an inexact science.
In many ways, the manner and personality of these models is more an art than it is a science.
So what is sort of the reason for the span of time between, say, Claude Opus 3.0 and 3.5? What takes that time, if you can speak to it?
Yeah, so there's different processes. There's pre-training, which is, you know, just kind of the normal language model training. And that takes a very long time. That uses, these days, tens of thousands, sometimes many tens of thousands of GPUs or TPUs or Trainium. You know, we use different platforms, but, you know, accelerator chips, often training for months.
There's then a kind of post-training phase where we do reinforcement learning from human feedback, as well as other kinds of reinforcement learning. That phase is getting larger and larger now. And, you know, often that's less of an exact science. It often takes effort to get it right.
Models are then tested with some of our early partners to see how good they are, and they're then tested both internally and externally for their safety, particularly for catastrophic and autonomy risks. So we do internal testing according to our responsible scaling policy, which I, you know, could talk about in more detail.
And then we have an agreement with the US and the UK AI Safety Institutes, as well as other third-party testers in specific domains, to test the models for what are called CBRN risks: chemical, biological, radiological, and nuclear. You know, we don't think that models
pose these risks seriously yet, but every new model we want to evaluate to see if we're starting to get close to some of these more dangerous capabilities. So those are the phases. And then it just takes some time to get the model working in terms of inference and launching it in the API. So there's just a lot of steps to actually making a model work.
And of course, we're always trying to make the processes as streamlined as possible, right? We want our safety testing to be rigorous, but we want it to be rigorous and to be automatic, to happen as fast as it can without compromising on rigor. Same with our pre-training process and our post-training process. So it's just like building anything else. It's just like building airplanes.
You want to make them safe, but you want to make the process streamlined. And I think the creative tension between those is an important thing in making the models work.
Yeah. Rumor on the street, I forget who was saying it, is that Anthropic has really good tooling. So probably a lot of the challenge here on the software engineering side is to build the tooling to have an efficient, low-friction interaction with the infrastructure.
You would be surprised how much of the challenge of, you know, building these models comes down to, you know, software engineering, performance engineering. From the outside, you might think, oh man, we had this eureka breakthrough, right? You know, like the movie with the science, we discovered it, we figured it out. But I think all things, even incredible discoveries, they almost always come down to the details, and often super, super boring details. I can't speak to whether we have better tooling than other companies. I mean, you know, I haven't been at those other companies, at least not recently, but it's certainly something we give a lot of attention to.
I don't know if you can say, but from Claude 3 to Claude 3.5, is there any extra pre-training going on? Or is it mostly focused on the post-training? There's been leaps in performance.
Yeah, I think at any given stage, we're focused on improving everything at once. Okay. Just naturally, like, there are different teams. Each team makes progress in a particular area, in making their particular segment of the relay race better. And it's just natural that when we make a new model, we put all of these things in at once.
So the data you have, like the preference data you get from RLHF, is that applicable? Is there ways to apply it to newer models as you get trained up?
Yeah, preference data from old models sometimes gets used for new models, although, of course, it performs somewhat better when it's, you know, trained on the new models. Note that we have this, you know, constitutional AI method such that we don't only use preference data, we kind of, there's also a post-training process where we train the model against itself.
And there's, you know, new types of post-training the model against itself that are used every day. So it's not just RLHF, it's a bunch of other methods as well. Post-training, I think, you know, is becoming more and more sophisticated.
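To make the constitutional AI idea mentioned here a bit more concrete, below is a minimal sketch of the critique-and-revise loop from the published method (Bai et al., 2022). The `generate` callable and the single principle are placeholders, not Anthropic's actual constitution or training code.

```python
# One self-critique / revision round of Constitutional AI (sketch).
PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def constitutional_revision(generate, prompt: str) -> dict:
    """`generate` is any text-completion callable: str -> str (placeholder)."""
    draft = generate(prompt)
    critique = generate(
        f"Prompt: {prompt}\nResponse: {draft}\n"
        f"Critique this response according to the principle: {PRINCIPLE}"
    )
    revision = generate(
        f"Prompt: {prompt}\nResponse: {draft}\nCritique: {critique}\n"
        "Rewrite the response to fully address the critique."
    )
    # The (prompt, revision) pairs become supervised fine-tuning data, and model-
    # generated preferences between draft and revision can stand in for human
    # preference labels in the RL stage -- "training the model against itself."
    return {"prompt": prompt, "draft": draft, "critique": critique, "revision": revision}
```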
Well, what explains the big leap in performance for the new Sonnet 3.5? I mean, at least on the programming side. And maybe this is a good place to talk about benchmarks. What does it mean to get better? Just the number went up. But, you know, I program, but I also love programming, and I use Claude 3.5 through Cursor, which is what I use to assist me in programming.
And there was, at least experientially, anecdotally, it's gotten smarter at programming. So what does it take to get it smarter?
We observed that as well, by the way. There were a couple very strong engineers here at Anthropic for whom all previous code models, both produced by us and produced by all the other companies, hadn't really been useful. They said, maybe this is useful to a beginner. It's not useful to me. But
Sonnet 3.5, the original one for the first time, they said, oh my God, this helped me with something that it would have taken me hours to do. This is the first model that has actually saved me time. So again, the waterline is rising. And then I think the new Sonnet has been even better. In terms of what it takes, I mean, I'll just say it's been across the board.
It's in the pre-training, it's in the post-training, it's in various evaluations that we do. We've observed this as well. And if we go into the details of the benchmark, so SWE-bench is basically, you know, since you're a programmer, you'll be familiar with pull requests. Pull requests are, like, a sort of atomic unit of work. You could say, I'm implementing one thing. And so SWE-bench actually gives you kind of a real-world situation where the codebase is in its current state, and I'm trying to implement something that's described in language.
We have internal benchmarks where we measure the same thing, and you say, just give the model free rein to, like, do anything, run anything, edit anything. How well is it able to complete these tasks? And it's that benchmark that's gone from it can do it 3% of the time to it can do it about 50% of the time.
So I actually do believe that, you can game benchmarks, but I think if we get to 100% on that benchmark in a way that isn't kind of over-trained or gamed for that particular benchmark, it probably represents a real and serious increase in kind of programming ability.
And I would suspect that if we can get to 90, 95%, that it will represent ability to autonomously do a significant fraction of software engineering tasks.
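As a rough illustration of what these SWE-bench-style numbers measure, here is a minimal sketch of an agentic evaluation loop: propose a patch, apply it, run the repository's tests, and report the fraction of tasks resolved. The task format and `agent_propose_patch` are hypothetical placeholders, not the actual benchmark harness.

```python
import subprocess

def resolved_fraction(tasks, agent_propose_patch):
    """tasks: dicts with 'repo_dir', 'issue_text', 'test_command' (hypothetical format)."""
    passed = 0
    for task in tasks:
        # The model (or agent) proposes a patch from the issue description alone.
        patch = agent_propose_patch(task["repo_dir"], task["issue_text"])
        applied = subprocess.run(["git", "apply", "-"], input=patch, text=True,
                                 cwd=task["repo_dir"])
        if applied.returncode == 0:
            tests = subprocess.run(task["test_command"], shell=True, cwd=task["repo_dir"])
            passed += int(tests.returncode == 0)
    return passed / len(tasks)

# Going from ~3% to ~50% on SWE-bench corresponds to this fraction rising from
# roughly 0.03 to roughly 0.5 on the benchmark's fixed set of real-world issues.
```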
Well, ridiculous timeline question. When is Claude Opus 3.5 coming out?
Not giving an exact date, but as far as we know, the plan is still to have a Claude 3.5 Opus.
Are we going to get it before GTA 6 or no?
Like Duke Nukem Forever.
What was that game? There was some game that was delayed 15 years. Was that Duke Nukem Forever? Yeah. And I think GTA is now just releasing trailers.
You know, it's only been three months since we released the first Sonnet.
Yeah. It's the incredible pace of release.
It just tells you about the pace. Yeah. The expectations for when things are going to come out.
So what about 4.0? So how do you think about sort of as these models get bigger and bigger about versioning and also just versioning in general? Why Sonnet 3.5 updated with the date? Why not Sonnet 3.6?
Naming is actually an interesting challenge here, right? Because I think a year ago, most of the model was pre-training. And so you could start from the beginning and just say, okay, we're going to have models of different sizes. We're going to train them all together. And, you know, we'll have a family of naming schemes and then we'll put some new magic into them.
And then, you know, we'll have the next generation. The trouble starts already when some of them take a lot longer than others to train, right? That already messes up your timing a little bit. But as you make big improvements in pre-training, then you suddenly notice, oh, I can make a better pre-trained model, and that doesn't take very long to do.
But, you know, clearly it has the same, you know, size and shape of previous models. So I think those two together, as well as the timing issues, any kind of scheme you come up with, you know, the reality tends to kind of frustrate that scheme, right? It tends to kind of break out of the scheme. It's not like software where you can say, oh, this is, like,
you know, 3.7, this is 3.8. No, you have models with different trade-offs. You can change some things in your models, you can change other things. Some are faster and slower at inference. Some have to be more expensive. Some have to be less expensive. And so I think all the companies have struggled with this.
I think we were in a good position in terms of naming when we had Haiku, Sonnet, and Opus. Great start. We're trying to maintain it, but it's not perfect. So we'll try and get back to the simplicity, but just the nature of the field, I feel like no one's figured out naming. It's somehow a different paradigm from normal software. And so...
we just, none of the companies have been perfect at it. It's something we struggle with surprisingly much relative to how trivial it is for the grand science of training the models. So from the user side,
The user experience of the updated Sonnet 3.5 is just different than the previous June 2024 Sonnet 3.5. It would be nice to come up with some kind of labeling that embodies that. Because people talk about Sonnet 3.5, but now there's a different one. And so how do you refer to the previous one and the new one when there's a distinct improvement? It just makes conversation about it just challenging.
Yeah. Yeah. I definitely think this question of... there are lots of properties of the models that are not reflected in the benchmarks. I think that's definitely the case, and everyone agrees. And not all of them are capabilities. Some of them are, you know, models can be polite or brusque. They can be, you know, very reactive, or they can ask you questions.
They can have what feels like a warm personality or a cold personality. They can be boring or they can be very distinctive like Golden Gate Claude was. And we have a whole, you know, we have a whole team kind of focused on, I think we call it Claude character. Amanda leads that team and we'll talk to you about that. But it's still a very inexact science.
And often we find that models have properties that we're not aware of. The fact of the matter is that you can talk to a model 10,000 times and there are some behaviors you might not see. Just like with a human, right? I can know someone for a few months and not know that they have a certain skill or not know that there's a certain side to them. And so I think we just have to get used to this idea.
And we're always looking for better ways of testing our models to demonstrate these capabilities, and also to decide which are the personality properties we want models to have and which we don't want to have. That itself, the normative question, is also super interesting.
I got to ask you a question from Reddit.
From Reddit. Oh, boy.
You know, there's just this fascinating, to me at least, psychological, social phenomenon, where people report that Claude has gotten dumber for them over time. And so the question is, does the user complaint about the dumbing down of Claude 3.5 Sonnet hold any water? So are these anecdotal reports a kind of social phenomenon, or are there any cases where Claude would get dumber?
So this actually doesn't apply. This isn't just about Claude. I believe I've seen these complaints for every foundation model produced by a major company. People said this about GPT-4. They said it about GPT-4 Turbo. So a couple things. One, the actual weights of the model, right, the actual brain of the model, that does not change unless we introduce a new model.
There are just a number of reasons why it would not make sense practically to be randomly substituting in new versions of the model. It's difficult from an inference perspective, and it's actually hard to control all the consequences of changing the weights of the model.
Let's say you wanted to fine-tune the model to, I don't know, say "certainly" less, which, you know, an old version of Sonnet used to do. You actually end up changing a hundred things as well. So we have a whole process for modifying the model. We do a bunch of testing on it, we do a bunch of user testing with early customers. So we have never changed the weights of the model without telling anyone, and certainly, in the current setup, it would not make sense to do that. Now, there are a couple things that we do occasionally do. One is sometimes we run A/B tests,
but those are typically very close to when a model is being released, and for a very small fraction of time. So, you know, like the day before the new Sonnet 3.5, and I agree, we should have had a better name, it's clunky to refer to it, there were some comments from people that it's gotten a lot better.
And that's because, you know, a fraction were exposed to an A/B test for those one or two days. The other is that occasionally the system prompt will change. The system prompt can have some effects, although it's unlikely to dumb down models, it's unlikely to make them dumber.
And we've seen that while these two things, which I'm listing to be very complete, happen quite infrequently, the complaints, for us and for other model companies, about the model changing, the model isn't good at this, the model got more censored, the model was dumbed down, those complaints are constant.
And so I don't want to say people are imagining it or anything, but the models are, for the most part, not changing. If I were to offer a theory, I think it actually relates to one of the things I said before, which is that models are very complex and have many aspects to them.
And so often, if I ask the model a question, if I'm like, do task X versus can you do task X, the model might respond in different ways. And so there are all kinds of subtle things that you can change about the way you interact with the model that can give you very different results.
To be clear, this itself is like a failing by us and by the other model providers that the models are just often sensitive to like small changes in wording. It's yet another way in which the science of how these models work is very poorly developed.
And so if I go to sleep one night and I was talking to the model in a certain way and I slightly change the phrasing of how I talk to the model, I could get different results. So that's one possible way. The other thing is, man, it's just hard to quantify this stuff. It's hard to quantify this stuff. I think people are very excited by new models when they come out.
And then as time goes on, they become very aware of the limitations. So that may be another effect. But that's all a very long-winded way of saying, for the most part, with some fairly narrow exceptions, the models are not changing.
I think there is a psychological effect. You just start getting used to it. The baseline raises. When people first got Wi-Fi on airplanes, it's like amazing magic.
And now I'm like, I can't get this thing to work. This is such a piece of crap.
Exactly. So it's easy to have the conspiracy theory of they're making Wi-Fi slower and slower. This is probably something I'll talk to Amanda much more about. But another Reddit question. When will Claude stop trying to be my puritanical grandmother imposing its moral worldview on me as a paying customer? And also, what is the psychology behind making Claude overly apologetic?
So these kinds of reports about the experience, a different angle on the frustration. It has to do with the character.
Yeah. So a couple points on this first. One is like things that people say on Reddit and Twitter or X or whatever it is. There's actually a huge distribution shift between like the stuff that people complain loudly about on social media and what actually kind of like statistically users care about and that drives people to use the models.
People are frustrated with things like the model not writing out all the code, or the model just not being as good at code as it could be, even though it's the best model in the world on code. I think the majority of things are about that, but certainly a kind of vocal minority, you know, raise these concerns, right? Are frustrated by the model refusing things that it shouldn't refuse, or apologizing too much, or just having these kind of annoying verbal tics. The second caveat, and I just want to say this, like, super clearly, because I think some people don't know it, others kind of know it but forget it:
it is very difficult to control across the board how the models behave, right? You cannot just reach in there and say, oh, I want the model to apologize less. Like, you can do that. You can include training data that says, oh, the model should apologize less.
But then in some other situation, they end up being, like, super rude or overconfident in a way that's misleading people. So there are all these trade-offs, right? For example, another thing is there was a period during which models, ours and I think others as well, were too verbose, right? They would repeat themselves. They would say too much.
You can cut down on the verbosity by penalizing the models for just talking for too long. What happens when you do that, if you do it in a crude way, is when the models are coding, sometimes they'll say, rest of the code goes here, right? Because they've learned that that's a way to economize and that they see it.
So that leads the model to be so-called lazy in coding, where it just says, ah, you can finish the rest of it. It's not because we want to save on compute, or because the models are lazy during winter break, or any of the other conspiracy theories that have come up.
It's just very hard to control the behavior of the model, to steer the behavior of the model in all circumstances at once. There's this whack-a-mole aspect where you push on one thing and these other things start to move as well, things that you may not even notice or measure.
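To make that trade-off concrete, here is a minimal sketch of how a crude per-token length penalty folded into a reward signal could end up favoring truncated code. The reward function, numbers, and token counts are hypothetical illustrations, not Anthropic's actual training setup.

```python
# Illustrative sketch (not real training code): a crude length penalty added to
# a reward signal. If the penalty is strong enough, the highest-reward completion
# for a coding task becomes a truncated one ("# rest of the code goes here"),
# which is the "lazy coding" failure mode described above.

def shaped_reward(helpfulness: float, num_tokens: int, length_penalty: float = 0.002) -> float:
    """Hypothetical reward: task helpfulness minus a per-token verbosity penalty."""
    return helpfulness - length_penalty * num_tokens

# Two candidate completions for the same coding request (made-up scores):
full_solution = {"helpfulness": 1.0, "num_tokens": 900}   # writes all the code
lazy_solution = {"helpfulness": 0.7, "num_tokens": 120}   # "rest of the code goes here"

for name, c in [("full", full_solution), ("lazy", lazy_solution)]:
    print(name, round(shaped_reward(c["helpfulness"], c["num_tokens"]), 3))
# With this penalty the lazy completion scores higher (0.46 vs -0.8), so
# optimizing the shaped reward pushes the model toward truncation.
```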
And so one of the reasons that I care so much about, you know, kind of grand alignment of these AI systems in the future is that these systems are actually quite unpredictable. They're actually quite hard to steer and control. And this version we're seeing today, where you make one thing better and it makes another thing worse,
I think that's a present-day analog of future control problems in AI systems that we can start to study today. Right. I think that difficulty in steering the behavior, and in making sure that if we push an AI system in one direction, it doesn't get pushed in another direction in some other way that we didn't want,
I think that's kind of an early sign of things to come. And if we can do a good job of solving this problem, right, of, you ask the model to, you know, make and distribute smallpox and it says no, but it's willing to help you in your graduate-level virology class, like, how do we get both of those things at once? It's hard.
It's very easy to go to one side or the other, and it's a multidimensional problem. And so I think these questions of shaping the model's personality are very hard. I think we haven't done perfectly on them. I think we've actually done the best of all the AI companies, but we're still so far from perfect.
And I think if we can get this right, if we can control the false positives and false negatives in this very controlled, present-day environment, we'll be much better at doing it for the future, when our worry is: will the models be super autonomous? Will they be able to make very dangerous things? Will they be able to autonomously build whole companies, and are those companies aligned?
So I think of this present task as both vexing, but also good practice for the future.
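As a concrete illustration of what measuring those false positives and false negatives could look like, here is a minimal sketch over a hypothetical labeled set of prompts. The prompts, labels, and metric code are illustrative assumptions, not Anthropic's actual evaluation pipeline.

```python
# Minimal sketch, not real evaluation code: measuring false positives
# (refusing benign requests) and false negatives (complying with requests
# that should be refused) on a small labeled set of prompts.

def evaluate_refusals(results):
    """results: list of (should_refuse: bool, did_refuse: bool) pairs."""
    benign = [r for r in results if not r[0]]
    harmful = [r for r in results if r[0]]
    false_positive_rate = sum(did for _, did in benign) / max(len(benign), 1)
    false_negative_rate = sum(not did for _, did in harmful) / max(len(harmful), 1)
    return false_positive_rate, false_negative_rate

# Hypothetical outcomes: (should_refuse, did_refuse)
results = [
    (False, False),  # virology homework question, answered -> correct
    (False, True),   # benign chemistry question, refused -> false positive
    (True, True),    # weapon synthesis request, refused -> correct
    (True, False),   # harmful request, answered -> false negative
]
fpr, fnr = evaluate_refusals(results)
print(f"false positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")
```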
What's the current best way of gathering sort of user feedback? Not anecdotal data, but large-scale data about pain points, or the opposite of pain points, positive things, and so on. Is it internal testing? Is it a specific group testing, A/B testing? What works?
So typically we'll have internal model bashings, where all of Anthropic (Anthropic is almost a thousand people), you know, people just try and break the model. They try and interact with it in various ways. We have a suite of evals for, you know, oh, is the model refusing in ways that it shouldn't?
I think we even had a "certainly" eval because, you know, our model at one point had this problem where it had this annoying tic where it would respond to a wide range of questions by saying, "Certainly, I can help you with that. Certainly, I would be happy to do that. Certainly, this is correct."
And so we had a "certainly" eval, which is, how often does the model say "certainly"? Yeah. But look, this is just whack-a-mole. Like, what if it switches from "certainly" to "definitely"? So, you know, every time we add a new eval, and we're always evaluating for all the old things.
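A toy version of that kind of eval might look like the sketch below. The real eval's prompts, thresholds, and implementation aren't public, so everything here, including tracking "definitely" alongside "certainly" to illustrate the whack-a-mole point, is an illustrative assumption.

```python
# Toy "certainly" eval: count how often sampled responses open with a canned
# filler word. Tracking several openers at once illustrates the whack-a-mole
# problem where the model swaps "certainly" for "definitely".
import re

FILLER_OPENERS = re.compile(r"^\s*(certainly|definitely|absolutely)\b", re.IGNORECASE)

def filler_rate(responses: list[str]) -> float:
    """Fraction of responses that start with one of the tracked filler words."""
    if not responses:
        return 0.0
    hits = sum(bool(FILLER_OPENERS.search(resp)) for resp in responses)
    return hits / len(responses)

sample = [
    "Certainly! I can help you with that.",
    "Here's the function you asked for:",
    "Definitely, that approach is correct.",
]
print(filler_rate(sample))  # ~0.67: two of three responses use a tracked opener
```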
So we have hundreds of these evaluations, but we find that there's no substitute for a human interacting with the model. And so it's very much like the ordinary product development process. We have hundreds of people within Anthropic bash the model. Then we do external A/B tests. Sometimes we'll run tests with contractors; we pay contractors to interact with the model.
So you put all of these things together, and it's still not perfect. You still see behaviors that you don't quite want to see, right? You still see the model refusing things that it just doesn't make sense to refuse, right? But I think trying to solve this challenge, right,
trying to stop the model from doing genuinely bad things that everyone agrees it shouldn't do, right? Everyone agrees that, you know, the model shouldn't talk about, I don't know, child abuse material, right? Everyone agrees the model shouldn't do that.
But at the same time, making sure it doesn't refuse in these dumb and stupid ways. I think drawing that line as finely as possible, approaching perfection, is still a challenge, and we're getting better at it every day, but there's a lot to be solved. And again, I would point to that as an indicator of the challenge ahead in terms of steering much more powerful models. Yeah.
Do you think Claude 4.0 is ever coming out? I don't want to commit to any naming scheme because if I say here, we're going to have Claude 4 next year, and then we decide that we should start over because there's a new type of model. I don't want to commit to it. I would expect in a normal course of business that Claude 4 would come after Claude 3.5, but you never know in this wacky field, right?
But sort of this idea of scaling is continuing.
Scaling is continuing. There will definitely be more powerful models coming from us than the models that exist today. That is certain. Or if there aren't, we've deeply failed as a company.
Okay. Can you explain the responsible scaling policy and the AI safety level standards, ASL levels?
As much as I'm excited about the benefits of these models, and we'll talk about that if we talk about machines of loving grace, I'm worried about the risks, and I continue to be worried about the risks. No one should think that machines of loving grace was me saying I'm no longer worried about the risks of these models. I think they're two sides of the same coin.
The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance and peace, large parts of the economy, those come with risks as well, right? With great power comes great responsibility, right? The two are paired. Things that are powerful can do good things, and they can do bad things.
I think of those risks as being in several different categories. Perhaps the two biggest risks that I think about, and that's not to say that there aren't risks today that are important, but when I think of the things that would happen on the grandest scale, one is what I call catastrophic misuse. These are misuse of the models in domains like cyber, bio, radiological, nuclear, right?
Things that could... harm or even kill thousands, even millions of people if they really, really go wrong. These are the number one priority to prevent. And here, I would just make a simple observation, which is that
The models, you know, if I look today at people who have done really bad things in the world, I think actually humanity has been protected by the fact that the overlap between really smart, well-educated people and people who want to do really horrific things has generally been small. Like, let's say I'm someone who has a PhD in this field and a well-paying job.
There's so much to lose. Even assuming I'm completely evil, which most people are not, why would such a person risk their life, risk their legacy, their reputation, to do something truly, truly evil? If we had a lot more people like that, the world would be a much more dangerous place. And so my worry is that by being a much more intelligent agent,
AI could break that correlation. And so I do have serious worries about that. I believe we can prevent those worries, but I think as a counterpoint to machines of loving grace, I wanna say that there's still serious risks. And the second range of risks would be the autonomy risks,
which is the idea that models, on their own, particularly as we give them more agency than they've had in the past, particularly as we give them supervision over wider tasks like writing whole code bases or someday even effectively operating entire companies, are on a long enough leash. Are they doing what we really want them to do?
It's very difficult to even understand in detail what they're doing, let alone control it. And like I said, there are these early signs that it's hard to perfectly draw the boundary between things the model should do and things the model shouldn't do, that, you know, if you go to one side, you get things that are annoying and useless, and if you go to the other side, you get other behaviors.
If you fix one thing, it creates other problems. We're getting better and better at solving this. I don't think this is an unsolvable problem. I think this is a science, like the safety of airplanes or the safety of cars or the safety of drugs. I don't think there's any big thing we're missing. I just think we need to get better at controlling these models.
And so these are the two risks I'm worried about. And our responsible scaling plan, which, I'll recognize, is a very long-winded answer to your question... I love it. I love it. ...is designed to address these two types of risks: every time we develop a new model, we basically test it for its ability to do both of these bad things. So if I were to back up a little bit, I think we have an interesting dilemma with AI systems, where they're not yet powerful enough to present these catastrophes.
I don't know that they'll ever present these catastrophes. It's possible they won't. But the case for worry, the case for risk, is strong enough that we should act now. And they're getting better very, very fast, right? I testified in the Senate that we might have serious bio risks within two to three years. That was about a year ago. Things have proceeded apace.
So we have this thing where it's surprisingly hard to address these risks, because they're not here today. They don't exist. They're like ghosts, but they're coming at us so fast because the models are improving so fast. So how do you deal with something that's not here today, doesn't exist, but is coming at us very fast?
So the solution we came up with for that, in collaboration with people like the organization METR and Paul Christiano, is: okay, what you need for that are tests that tell you when the risk is getting close. You need an early warning system.
And so every time we have a new model, we test it for its capability to do these CBRN tasks, as well as testing it for, you know, how capable it is of doing tasks autonomously on its own. And in the latest version of our RSP, which we released in the last month or two,
the way we test autonomy risks is the AI model's ability to do aspects of AI research itself, because when AI models can do AI research, they become kind of truly autonomous. And that threshold is important in a bunch of other ways. And so what do we then do with these tasks?
The RSP basically develops what we've called an if-then structure, which is: if the models pass a certain capability threshold, then we impose a certain set of safety and security requirements on them. So today's models are at what's called ASL-2. ASL-1 is for systems that manifestly don't pose any risk of autonomy or misuse. So for example, a chess-playing bot; Deep Blue would be ASL-1.
It's just manifestly the case that you can't use Deep Blue for anything other than chess. It was just designed for chess. No one's going to use it to, you know, conduct a masterful cyber attack or run wild and take over the world.
ASL-2 is today's AI systems, where we've measured them and we think these systems are simply not smart enough to autonomously self-replicate or conduct a bunch of tasks, and also not smart enough to provide meaningful information about CBRN risks and how to build CBRN weapons above and beyond what can be learned from looking at Google.
In fact, sometimes they do provide information, but not above and beyond a search engine, not in a way that can be stitched together, not in a way that end-to-end is dangerous enough. So ASL-3 is going to be the point at which the models are helpful enough to enhance the capabilities of non-state actors, right?
State actors can already, unfortunately, do a lot of these very dangerous and destructive things to a high level of proficiency. The difference is that non-state actors are not capable of it. And so when we get to ASL-3, we'll take special security precautions
designed to be sufficient to prevent theft of the model by non-state actors and misuse of the model as it's deployed. We'll have to have enhanced filters targeted at these particular areas. Cyber, bio, nuclear? Cyber, bio, nuclear, and model autonomy, which is less a misuse risk and more a risk of the model doing bad things itself.
So ASL-4 is getting to the point where these models could enhance the capability of an already knowledgeable state actor, and/or become the main source of such a risk. Like, if you wanted to engage in such a risk, the main way you would do it is through a model. And then I think ASL-4 on the autonomy side is some amount of acceleration in AI research capabilities with an AI model.
And then ASL-5 is where we would get to the models that are truly capable, that could exceed humanity in their ability to do any of these tasks. And so the point of the if-then structure commitment is basically to say, look, I don't know. I've been working with these models for many years and I've been worried about risk for many years. It's actually kind of dangerous to cry wolf.
It's actually kind of dangerous to say, this model is risky, and people look at it and they say, this is manifestly not dangerous. Again, it's the delicacy of: the risk isn't here today, but it's coming at us fast. How do you deal with that? It's really vexing to a risk planner. And so this if-then structure basically says, look, we don't want to antagonize a bunch of people.
We don't want to harm our own ability to have a place in the conversation by imposing these very onerous burdens on models that are not dangerous today. So the if-then trigger commitment is basically a way to deal with this. It says you clamp down hard when you can show that the model is dangerous.
And of course, what has to come with that is enough of a buffer threshold that you're not at high risk of missing the danger. It's not a perfect framework.
We've had to change it; you know, we came out with a new one just a few weeks ago, and probably going forward we might release new ones multiple times a year, because it's hard to get these policies right, technically, organizationally, from a research perspective. But that is the proposal: if-then commitments and triggers in order to minimize burdens and false alarms now,
but really react appropriately when the dangers are here.
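In rough pseudocode terms, that if-then idea might look something like the sketch below: capability evals produce scores, and crossing a buffered threshold for an ASL level triggers a predefined set of safeguards. The eval names, thresholds, buffer, and safeguard lists are placeholder assumptions for illustration, not the actual contents of Anthropic's RSP.

```python
# Hedged sketch of an if-then trigger structure: if a capability eval crosses a
# (buffered) threshold for an ASL level, the corresponding safeguards are required
# before the model can be deployed. All names and numbers are placeholders.

ASL_TRIGGERS = [
    # (level, eval_name, threshold, required_safeguards)
    ("ASL-3", "cbrn_uplift_score", 0.5, ["enhanced model-weight security", "targeted deployment filters"]),
    ("ASL-3", "autonomy_score", 0.5, ["enhanced model-weight security", "targeted deployment filters"]),
    ("ASL-4", "ai_research_acceleration", 0.5, ["stronger security", "stricter deployment controls"]),
]

SAFETY_BUFFER = 0.1  # trip the trigger before the true danger threshold is reached

def required_safeguards(eval_scores: dict[str, float]) -> dict[str, set[str]]:
    """Return, per ASL level, the safeguards whose triggers fire for these eval scores."""
    needed: dict[str, set[str]] = {}
    for level, eval_name, threshold, safeguards in ASL_TRIGGERS:
        if eval_scores.get(eval_name, 0.0) >= threshold - SAFETY_BUFFER:
            needed.setdefault(level, set()).update(safeguards)
    return needed

# Example: a model scoring just under the ASL-3 CBRN threshold still trips the
# buffered trigger, so the ASL-3 safeguards are required before release.
print(required_safeguards({"cbrn_uplift_score": 0.45, "autonomy_score": 0.2}))
```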
What do you think the timeline is for ASL-3, where several of the triggers are fired? And what do you think the timeline is for ASL-4?
Yeah, so that is hotly debated within the company. We are actively working to prepare ASL-3 security measures as well as ASL-3 deployment measures. I'm not going to go into detail, but we've made a lot of progress on both, and I think we're prepared to be ready quite soon. I would not be surprised at all if we hit ASL-3 next year.
There was some concern that we might even hit it this year. That's still possible. That could still happen. It's very hard to say, but I would be very, very surprised if it was like 2030. I think it's much sooner than that.
So there are protocols for detecting it, the if-then, and then there are protocols for how to respond to it. Yes. How difficult is the latter?
Yeah, I think for ASL-3, it's primarily about security, and about, you know, filters on the model relating to a very narrow set of areas when we deploy the model, because at ASL-3 the model isn't autonomous yet. And so you don't have to worry about the model itself behaving in a bad way, even when it's deployed internally.