Demis Hassabis
There are many theories in psychology and neuroscience as to how we as human scientists do it.
But a good test for it would be something like give one of these modern AI systems a knowledge cutoff of 1901 and see if it can come up with special relativity like Einstein did in 1905.
So it's quite an incredible moment, sort of leafing back to the other pages and seeing Feynman and Marie Curie and Einstein and Niels Bohr.
If it's able to do that, then I think we're on to something really, really important where perhaps we're nearing an AGI.
Another example would be with our AlphaGo program that beat the world champion at Go.
Not only did it win back 10 years ago, it invented new strategies that had never been seen before for the game of Go, most famously Move 37 in game two, which is now studied.
Can an AI system come up with a game as elegant, as satisfying, as aesthetically beautiful as Go, not just a new strategy?
And the answer to those things at the moment is no.
So that's one of the things I think that's missing from a true general system, an AGI system, is it should be able to do those kinds of things as well.
Well, so I think the fundamental aspect of this is, can we mimic these intuitive leaps rather than incremental advances that the best human scientists seem to be able to do?
I always say what separates a great scientist from a good scientist is they're both technically very capable, of course, but the great scientist is more creative.
And you just carry on going backwards and you get to put your name in that book.
And so maybe they'll spot some pattern from another subject area that has an analogy, or some sort of pattern match, to the area they're trying to solve.
And I think one day AI will be able to do this, but it doesn't have the reasoning capabilities and some of the thinking capabilities that are going to be needed to make that kind of breakthrough.
I also think that we're lacking consistency.
So you often hear some of our competitors claim that these modern systems we have today are PhD intelligences.
I think that's a nonsense.
They're not PhD intelligences.
It's incredible.
They have some capabilities that are PhD level, but they're not capable in general of performing across the board at PhD level, which is exactly what general intelligence should be.
In fact, as we all know, interacting with today's chatbots, if you pose the question in a certain way, they can make simple mistakes with even high school maths
and simple counting.
So that shouldn't be possible for a true AGI system.
So I think that we are maybe, I would say, sort of five to ten years away from having an AGI system that's capable of doing those things.
Another thing that's missing is continual learning, the ability to teach the system something new online or adjust its behavior in some way.
Well, you hear rumors.
And so a lot of these, I think, core capabilities are still missing and maybe scaling will get us there.
But I feel if I was to bet, I think there are probably one or two missing breakthroughs that are still required and will come over the next five or so years.
It's amazingly locked down, actually, in today's age, how they keep it so quiet.
No, I mean, we're not seeing that internally, and we're still seeing a huge rate of progress.
But also, we're sort of looking at things more broadly.
You see it with our Genie models and Veo models, and recently Nanobanana.
It's bananas.
Yes, it's bananas.
But it's sort of like a national treasure for Sweden.
Well, I think that's the future of a lot of these creative tools is you're just going to sort of vibe with it or just talk to them.
And it'll be consistent enough. Like with Nanobanana, what's amazing about it is that it's an image generator.
It's state-of-the-art and best in class.
But one of the things that makes it so great is its consistency.
It's able to follow your instructions about what you want changed and keep everything else the same.
And so you can iterate with it and eventually get the kind of output that you want.
And that's, I think, what the future of a lot of these creative tools is going to be and sort of signals the direction.
And so you hear, you know, maybe AlphaFold is the kind of thing that would be worthy of that recognition.
And people love it, and they love creating with it.
Yeah, I think you're going to see two things, which is the democratization of these tools for everybody to just use and create with without having to learn incredibly complex UXs and UIs like we had to do in the past.
But on the other hand, I think we're also collaborating with filmmakers and top creators and artists.
So they're helping us design what these new tools should be, what features would they want.
People like the director Darren Aronofsky, who's a good friend of mine, an amazing director,
He and his team have been making films using Veo and some of our other tools.
And we're learning a lot by observing them and collaborating with them.
And they look for impact as well as the scientific breakthrough impact in the real world.
And what we find is that it also superpowers and turbocharges the best professionals too.
Because the best creatives, the professional creatives, are suddenly able to be 10x, 100x more productive.
They can just try out all sorts of ideas they have in mind, you know, very low cost, and then get to the beautiful thing that they wanted.
So I actually think it's sort of both things are true.
We're democratizing it for everyday use, for YouTube creators and so on.
But on the other hand, at the high end, there are the people who really understand these tools, and not everyone can get the same output out of them.
There's a skill in that, as well as the vision and the storytelling and the narrative style of the top creatives.
And they really enjoy using these tools; I think it allows them to iterate way faster.
Yes.
And that can take 20, 30 years to arrive.
Yes.
I actually foresee a world, and I think a lot about this having started in the games industry as a game designer and programmer in the 90s, where what we're seeing now is the beginning of the future of entertainment.
So you just never know how soon it's going to be and whether it's going to be at all.
Maybe some new genre or new art form where there's a bit of co-creation.
I still think that you'll have the top creative visionaries.
They will be creating these compelling experiences and dynamic storylines, and they'll be of higher quality than what the everyday person can do, even if they're using the same tools.
But also, millions of people will potentially dive into those worlds, and maybe they'll also be able to co-create certain parts of those worlds, with the main creative person acting almost as an editor of that world. So those are the kinds of things I'm foreseeing in the next few years, and I'd actually like to explore that ourselves with technologies like Genie.
So it's a surprise.
I am.
So I also run Isomorphic, which is our spin-out company to revolutionize drug discovery, building on our AlphaFold breakthrough in protein folding.
And of course, knowing the structure of a protein
is only one step in the drug discovery process.
So Isomorphic, you can think of it as building many adjacent AlphaFolds to help with things like designing chemical compounds that don't have any side effects but bind to the right place on the protein.
And I think we could reduce down drug discovery from taking years, sometimes a decade to do, down to maybe weeks or even days over the next 10 years.
We're building up the platform right now, and we have great partnerships with Eli Lilly (I think you had the CEO speaking earlier) and Novartis, which are fantastic, as well as our own internal drug programs.
And I think we'll be entering sort of preclinical phase sometime next year.
That's right, and we're working on cancers and immunology and oncology and we're working with places like MD Anderson.
Yeah, it's a great question.
Actually, for the moment, and I think probably for the next five years or so, we're building what maybe you could call hybrid models.
So AlphaFold itself is a hybrid model where you have the learning component, this probabilistic component you're talking about, which is based on neural networks and transformers and things.
And that's learning from the data you give it, any data you have available.
But also, in a lot of cases with biology and chemistry, there isn't enough data to learn from.
So you also have to build in some of the rules about chemistry and physics that you already know about.
So, for example, with AlphaFold, we built in things like the angles of bonds between atoms, and made sure that AlphaFold understood you couldn't have atoms overlapping with each other, and things like that.
Now, in theory, it could learn that, but it would waste a lot of the learning capacity.
Well, we sort of see DeepMind now and Google DeepMind as it's become.
So actually, it's better to kind of have that as a constraint in there.
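To make that concrete, here is a minimal sketch, not DeepMind's actual objective, of what building known chemistry into a learned system can look like: a hypothetical training loss that combines a data-driven term with hand-coded penalties for steric clashes and bond lengths, so the network doesn't spend capacity rediscovering basic physics.

```python
import numpy as np

def structure_loss(pred_coords, true_coords, bonded_pairs,
                   ideal_bond_len=1.5, min_separation=1.2,
                   w_clash=10.0, w_bond=5.0):
    """Toy hybrid objective: a learned data term plus hand-coded physics penalties.

    pred_coords, true_coords: (N, 3) arrays of atom positions.
    bonded_pairs: list of (i, j) index pairs that are covalently bonded.
    """
    # 1. Data-driven term: how far the prediction is from the known structure.
    data_term = np.mean(np.sum((pred_coords - true_coords) ** 2, axis=-1))

    # Pairwise distances between all predicted atoms.
    diffs = pred_coords[:, None, :] - pred_coords[None, :, :]
    dists = np.sqrt(np.sum(diffs ** 2, axis=-1) + 1e-9)

    bonded = np.zeros(dists.shape, dtype=bool)
    for i, j in bonded_pairs:
        bonded[i, j] = bonded[j, i] = True
    np.fill_diagonal(bonded, True)

    # 2. Steric-clash penalty: non-bonded atoms must not overlap.
    overlap = np.clip(min_separation - dists, 0.0, None)
    clash_term = np.sum(np.where(bonded, 0.0, overlap) ** 2)

    # 3. Bond-geometry penalty: bonded atoms should sit near the ideal length.
    bond_term = sum((dists[i, j] - ideal_bond_len) ** 2 for i, j in bonded_pairs)

    return data_term + w_clash * clash_term + w_bond * bond_term
```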
Now, the trick is with all hybrid systems. AlphaGo was another hybrid system: there was a neural network learning about the game of Go and what kinds of patterns are good, and then we had Monte Carlo Tree Search on top, which was doing the planning.
And so the trick is, how do you marry up a learning system with a more handcrafted, bespoke system and actually have them work well together?
And that's pretty tricky to do.
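As a rough illustration of what marrying the two components means, here is a minimal sketch, not AlphaGo itself (which used Monte Carlo Tree Search), of a handcrafted depth-limited search that leans on a learned policy for move ordering and a learned value function for leaf evaluation; `policy_net`, `value_net`, and `game` are hypothetical stand-ins.

```python
def hybrid_search(state, policy_net, value_net, game, depth=3, top_k=5):
    """Return (value, best_move) from `state` for the player to move."""
    if depth == 0 or game.is_terminal(state):
        return value_net(state), None          # learned leaf evaluation

    # Learned prior: only expand the few moves the policy thinks are promising.
    moves = sorted(game.legal_moves(state),
                   key=lambda m: policy_net(state, m), reverse=True)[:top_k]

    best_value, best_move = float("-inf"), None
    for move in moves:
        child = game.apply(state, move)
        # Negamax convention: the child's value is from the opponent's view.
        value, _ = hybrid_search(child, policy_net, value_net, game,
                                 depth - 1, top_k)
        value = -value
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move
```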
What you ultimately want to do, once you figure something out with one of these hybrid systems, is upstream it into the learning component.
We sort of merged a couple of years back all of the different AI efforts across Google and Alphabet, including DeepMind, put it all together, kind of bringing the strengths of all the different groups together into one division.
So it's always better if you can do end-to-end learning and directly predict the thing that you're after from the data that you're given.
We were the first ones to start doing it seriously in the modern era.
So once you've figured out something using one of these hybrid systems, you then try and go back and reverse engineer what you've done and see if you can incorporate that learning, that information into the learning system.
And this is sort of what we did with AlphaZero, the more general form of AlphaGo.
So AlphaGo had some Go-specific knowledge in it.
But then with AlphaZero, we got rid of that, including the human data, human games that we learned from, and actually just did self-learning from scratch.
And of course,
then it was able to learn any game, not just Go.
Look, interestingly, again, I think both cases are true, in the sense that, especially us at Google and at DeepMind, we focus a lot on very efficient models that are still powerful.
Because we have our own internal use cases, of course, where we need to serve, say, AI overviews to billions of users every day.
And it has to be extremely efficient, extremely low latency, and very cheap to serve.
And so we've kind of pioneered many techniques that allow us to do that, like distillation, where you sort of have a bigger model internally that trains the smaller model.
So you train the smaller model to mimic the bigger model.
And over time, if you look at the progress of the last two years, the model efficiencies are like 10x, even 100x better for the same performance.
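For readers who want the basic mechanics, here is a minimal sketch of knowledge distillation, the general technique being described rather than Google's production pipeline: the small model is trained to match the large model's softened output distribution, not just the hard labels.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the softened teacher and student distributions.

    The temperature spreads out the teacher's probabilities so the student
    also learns which wrong answers the teacher considers nearly right.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    # Mean over the batch, sum over classes; T^2 keeps the gradient scale comparable.
    return -(temperature ** 2) * np.mean(
        np.sum(teacher_probs * student_log_probs, axis=-1))

# Toy example: a batch of two examples with four classes.
teacher = np.array([[4.0, 1.0, 0.5, 0.1], [0.2, 3.5, 0.3, 0.1]])
student = np.array([[2.0, 1.5, 0.5, 0.2], [0.1, 2.0, 1.0, 0.1]])
print(distillation_loss(student, teacher))
```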
And really, the way I describe it now is that we're the engine room of the whole of Google and the whole of Alphabet.
Now, the reason that isn't reducing demand is because we still haven't got to AGI yet.
So also the frontier models, you keep wanting to train and experiment with new ideas at larger and larger scale, whilst at the same time, at the serving side, things are getting more and more efficient.
So both things are true.
And in the end, I think that from the energy perspective,
I think AI systems will give back a lot more to energy and climate change and these kinds of things than they take, through efficiency of grid systems and electrical systems, material design, new types of properties, new energy sources.
I think AI will help with all of that over the next 10 years that will far outweigh the energy that it uses today.
Wow, okay.
Well, I mean, 10 years, even 10 weeks is a lifetime in AI.
So Gemini, our main model that we're building, but also many of the other models that we also build, the video models and interactive world models, we plug them in all across Google now.
The Brownian field of 10 years for you.
But I do feel like we will have AGI in the next 10 years, full AGI, and I think that will usher in a new golden era of science, a kind of new renaissance.
And I think we'll see the benefits of that right across from energy to human health.
So pretty much every product, every surface area has one of our AI models in it.
So billions of people now interact with Gemini models, whether that's through AI Overview, AI Mode, or the Gemini app.
AlphaGo was the big watershed moment, I think, not just for DeepMind and my company, but for AI in general.
And that's just the beginning, you know, we're kind of incorporating into workspace, into Gmail and so on.
So it's a fantastic opportunity really for us to do cutting edge research, but then immediately ship it to billions of users.
Yeah, there's around 5,000 people in my org, in Google DeepMind.
And it's predominantly, I guess, 80% plus engineers and PhD researchers.
So yeah, about 3,000 or 4,000.
Yeah, we can watch it.
Sure.
This was always my aim with AI from a kid, which is to use it to accelerate scientific discovery.
Yeah, so in all of these videos, all these interactive worlds that you're seeing, you're seeing that someone can actually control the video.
It's not a static video.
It's just being generated by a text prompt.
And then people are able to control the 3D environment using the arrow keys and the space bar.
So everything you're seeing here is being fully, all these pixels are being generated on the fly.
They don't exist until the player or the person interacting with it goes to that part of the world.
So all of this richness.
And then you'll see in a second... So this is fully generated.
This is not a real video.
This is a generated video of someone painting their room.
And they're painting some stuff on the wall.
And then the player is going to look to the right and then look back.
So now this part of the world didn't exist before, so now it exists.
And then they look back and they see the same painting marks they left just earlier.
And again, every pixel you can see is fully generated.
And then you can type things like person in a chicken suit or a jet ski, and it will just, in real time, include them in the scene.
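To sketch the shape of what's being described (this is a hypothetical interface, not Genie's actual API): each frame is generated on the fly, conditioned on the history generated so far plus the user's latest action, which is what lets the world stay consistent when you look away and look back.

```python
def interactive_session(world_model, text_prompt, get_user_action, num_steps=300):
    """Action-conditioned rollout with a hypothetical generative world model."""
    history = [world_model.first_frame(text_prompt)]     # hypothetical call
    for _ in range(num_steps):
        action = get_user_action()                       # e.g. arrow keys, space bar
        # The next frame depends on the whole history, so previously generated
        # content (like paint marks on a wall) persists when revisited.
        frame = world_model.next_frame(history, action)  # hypothetical call
        history.append(frame)
    return history
```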
So it's quite mind-blowing, really.
This model is reverse engineering
intuitive physics.
So, you know, it's watched many millions of videos and YouTube videos and other things about the world.
And just from that, it's kind of reverse engineered how a lot of the world works.
It's not perfect yet, but it can generate a consistent minute or two of interaction as you as the user in many, many different worlds.
There are some videos later on where you can control, you know, a dog on a beach or a jellyfish, so it's not limited to just human things.
Yeah, it was trained off of video and some synthetic data from game engines.
And it's just reverse engineered it.
And for me, it's very close to my heart, this project, but it's also quite mind-blowing because in the 90s, in my early career, I used to write video games and AI for video games and graphics engines.
And I remember how hard it was to do this by hand, program all the polygons and the physics engines.
And it's amazing to just see this model do it effortlessly.
All of the reflections on the water, and the way materials flow and objects behave.
And it's just doing all of that out of the box.
So the reason we're building these kinds of models: we feel, and we've always felt, that while we're obviously progressing on the normal language models, like with our Gemini model, from the beginning with Gemini we wanted it to be multimodal. So we wanted it to take any kind of input, images, audio, video, and it can output anything.
And so we've been very interested in this because for an AI to be truly general, to build AGI, we feel that the AGI system needs to understand the world around us and the physical world around us, not just the abstract world of languages or mathematics.
And of course, that's what's critical for robotics to work.
It's probably what's missing from it today.
And also things like smart glasses, a smart glasses system that helps you in your everyday life.
It's got to understand the physical context that you're in and how the intuitive physics of the world works.
So we think that building these types of models, these Genie models and also Veo, the best text-to-video models,
Those are expressions of us building world models that understand the dynamics of the world, the physics of the world.
If you can generate it, then that's an expression of your system understanding those dynamics.
Yeah, that's right.
So if you look at our Gemini Live version of Gemini, where you can hold up your phone to the world around you, I'd recommend any of you try it.
It's kind of magical what it already understands about the physical world.
You can think of the next step as incorporating that in some sort of more handy device like glasses.
And then it will be an everyday assistant.
It'll be able to recommend things to you as you're walking the streets, or we can embed it into Google Maps.
And then with robotics, we've built something called Gemini Robotics models, which are sort of fine-tuned Gemini with extra robotics data.
And what's really cool about that is, and we released some demos of this over the summer, was we've got these tabletop setups of two hands interacting with objects on a table, two robotic hands.
And you can just talk to the robot.
So you can say, you know, put the yellow object into the red bucket or whatever it is, and it will interpret that instruction, that language instruction, into motor movements.
And that's the power of a multimodal model rather than just a robotic-specific model, is that it will be able to bring in real-world understanding to the way you interact with it.
So in the end, it will be the UI, UX that you need as well as the understanding the robots need to navigate the world safely.
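As a hedged sketch of that loop, and not Google's actual Gemini Robotics interface: a hypothetical multimodal policy maps the camera image, the language instruction, and the robot's joint state to a short chunk of motor commands, executes them, and then re-observes the scene.

```python
def run_instruction(robot, vlm_policy, instruction, max_steps=100):
    """Closed-loop control with a hypothetical vision-language-action policy."""
    for _ in range(max_steps):
        image = robot.get_camera_image()     # hypothetical robot interface
        state = robot.get_joint_state()
        # The multimodal model grounds the words ("yellow object", "red bucket")
        # in the pixels and emits the next few motor commands.
        actions, done = vlm_policy(image, instruction, state)
        for action in actions:
            robot.apply_motor_command(action)
        if done:
            break
```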
Exactly.
That's certainly one strategy we're pursuing: a kind of Android play, if you like, almost an OS layer across robotics.
Well, it's a very surreal moment, obviously.
But there's also some quite interesting things about vertically integrating our latest models with specific robot types and robot designs and some kind of end-to-end learning of that too.
So both are actually pretty interesting and we're pursuing both strategies.
Everything about it is surreal.
Yeah, I think there's going to be a place for both.
Actually, I used to be of the opinion maybe five, ten years ago that we'll have form-specific robots for certain tasks.
And I think in industry, industrial robots will definitely be like that, where you can optimize the robot for the specific task, whether it's a laboratory or a production line.
The way they tell you, they tell you like 10 minutes before it all goes live.
You'd want quite different types of robots.
On the other hand, for general use or personal use robotics and just interacting with the ordinary world, the humanoid form factor could be pretty important because, of course, we've designed the physical world around us to be for humans.
And so steps, doorways, all the things that we've designed for ourselves, rather than changing all of those in the real world, it might be easier to design the form factor to work seamlessly with the way we've already designed the world.
So I think there's an argument to be made that the humanoid form factor could be very important for those types of tasks.
You're sort of shell-shocked when you get that call from Sweden.
But I think there is a place also for specialized robotic forms.
Yeah, I do.
And I spend quite a lot of time on this.
And I think we're still, I feel we're still a little bit early on robotics.
I think in the next couple of years, there'll be a sort of real wow moment with robotics.
But I think the
algorithms need a bit more development.
The general purpose models that these robotics models are built on still need to be better, more reliable, and better at understanding the world around them.
It's the call that every scientist dreams about.
I think that will come in the next couple of years.
And then also on the hardware side, the key is I think eventually we will have millions of robots helping society
and increasing productivity.
But the key there, when you talk to hardware experts, is at what point do you have the right level of hardware to go for the scaling option?
And then the ceremony is a whole week in Sweden with the royal family.
Because effectively, once you start building factories around making tens of thousands or hundreds of thousands of a particular robot type,
it's harder for you to update and quickly iterate the robot design.
So it's one of those kind of questions where if you call it too early, then the next generation of robot might be invented in six months time that's just more reliable and better and more dexterous.
But of course, maybe that's where we are, except that 10 years happens in one year in AI, probably.
1984 might be one of those years.
It's amazing.
Obviously, it's been going for 120 years.
Yeah, I mean, AI to accelerate scientific discovery and help with things like human health is the reason I spent my whole career on AI.
And the most amazing bit is they bring out this Nobel book from the vaults in the safe.
And I think it's the most important thing we can do with AI.
And I feel like if we build AGI in the right way, it will be the ultimate tool for science.
And I think we've been showing the way on a lot of that at DeepMind, obviously AlphaFold most famously, but actually we've applied our AI systems to many branches of science, whether it's material design, helping with controlling plasma in fusion reactors, predicting the weather, or solving, you know, maths Olympiad problems.
And the same types of systems with some extra fine tuning can basically solve a lot of these complex problems.
So I think we're just scratching the surface of what AI will be able to do.
And there are some things that are missing.
So AI today, I would say, doesn't have true creativity in the sense that it can't come up with a new conjecture yet or a new hypothesis.
It can maybe prove something that you give it.
But it's not able to come up with a sort of new idea or new theory itself.
And you get to sign your name next to all the other greats.
So I think that would be one of the tests actually for AGI.
What is that?
Yeah.
Well, I think it's this sort of intuitive leaps that we often celebrate with the best scientists in history and artists, of course.
And maybe it's done through analogy or analogical reasoning.
Yeah, I think those systems would be right on the boundary, right?
So I think most emergent systems, cellular automata, things like that could be modelable by a classical system.
You just sort of do a forward simulation of it and it'd probably be efficient enough.
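To make "just do a forward simulation" concrete, here is a minimal classical simulation of a simple emergent system, the elementary cellular automaton Rule 110, where a trivial local update rule produces surprisingly rich global patterns.

```python
import numpy as np

def step_rule110(cells):
    """One update step of the elementary cellular automaton Rule 110."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = (left << 2) | (cells << 1) | right     # value 0..7 per cell
    rule = np.array([0, 1, 1, 1, 0, 1, 1, 0])             # Rule 110 lookup table
    return rule[neighborhood]

# Forward-simulate from a single live cell and print the emergent pattern.
cells = np.zeros(64, dtype=int)
cells[-1] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step_rule110(cells)
```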
Of course, there's the question of things like chaotic systems where the initial conditions really matter and then you get to some, you know, uncorrelated end state.
Now, those could be difficult to model.
So I think these are kind of the open questions.
But I think when you step back and look at what we've done with the systems and the problems that we've solved, and then you look at things like Veo 3 on video generation, sort of rendering physics and lighting and things like that, really core fundamental things in physics.
It's pretty interesting.
I think it's telling us something quite fundamental about how the universe is structured, in my opinion.
So, you know, in a way, that's what I want to build AGI for, is to help us as scientists answer these questions, like P equals NP.
And so if there's one to follow, and you can specify the objective function correctly, you don't have to deal with all that complexity, which I think is how we maybe have naively thought about those problems for decades.
If you just enumerate all the possibilities, it looks totally intractable.
And there's many, many problems like that.
And then you think, well, it's like 10 to the 300 possible protein structures, 10 to the 170 possible go positions.
All of these are way more than atoms in the universe.
So how could one possibly find the right solution or predict the next step?
But it turns out that it is possible.
And of course, reality in nature does do it.
right?
Proteins do fold.
So that gives you confidence that if we understood how physics was doing that, in a sense, and we could mimic that process, i.e. model that process, then it should be possible on our classical systems. That's basically what the conjecture is about.
Yes, exactly.
I mean, fluid dynamics, Navier-Stokes equations, these are traditionally thought of as very, very difficult, intractable kind of problems to do on classical systems.
They take enormous amounts of compute, you know, weather prediction systems, you know, these kind of things all involve fluid dynamics calculations.
But again, if you look at something like Veo, our video generation model, it can model liquids quite well, surprisingly well.
And materials, specular lighting.
I love the ones where, you know, there's people who generate videos where there's like clear liquids going through hydraulic presses and then it's being squeezed out.
I used to write...
physics engines and graphics engines in my early days in gaming.
And I know it's just so painstakingly hard to build programs that can do that.
And yet somehow these systems are reverse engineering that just from watching YouTube videos.
So presumably what's happening is it's extracting some underlying structure around how these materials behave.
So perhaps there is some kind of lower dimensional manifold that can be learned if we actually fully understood what's going on under the hood.
That's maybe true of most of reality.
To the extent that it can predict the next frames in a coherent way, that is a form of understanding, right?
Not in the anthropomorphic sense; it's not some kind of deep philosophical understanding of what's going on.
I don't think these systems have that.
But they certainly have modeled enough of the dynamics, you know, put it that way, that they can pretty accurately generate whatever it is, eight seconds of consistent video that by eye, at least, you know, at a glance, it's quite hard to distinguish what the issues are.
And imagine that in two or three more years time.
That's the thing I'm thinking about and how incredible that they will look.
given where we've come from, you know, the early versions of that one or two years ago.
And so the rate of progress is incredible.
And I think I'm like you: a lot of people love all of the stand-up comedian videos, and that actually captures a lot of human dynamics and body language very well.
But actually the thing I'm most impressed with and fascinated by is the physics behavior, the lighting and materials and liquids.
And it's pretty amazing that it can do that.
And I think that shows that it has some notion of at least intuitive physics, right?
How things are supposed to work intuitively, maybe the way that a human child would understand physics, right?
As opposed to, you know, a PhD student really being able to unpack all the equations.
It's more of an intuitive physics understanding.
Yes.
And it's very interesting, you know, even if you were to ask me five, ten years ago, I would have said, even though I was immersed in all of this, I would have said, well, yeah, you probably need to understand intuitive physics.
You know, like if I push this off the table, this glass, it will maybe shatter, you know, and the liquid will spill out.
Right.
So we know all of these things.
But I thought that, you know, and there's a lot of theories in neuroscience, it's called action in perception, where, you know, you need to act in the world to really truly perceive it in a deep way.
And there was a lot of theories about you'd need embodied intelligence or robotics or something, or maybe at least simulated action so that you would understand things like intuitive physics.
But it seems like
You can understand it through passive observation, which is pretty surprising to me.
And again, I think hints at something underlying about the nature of reality, in my opinion, beyond just the cool videos that it generates.
And of course, those next stages is maybe even making those videos interactive.
So one can actually step into them and move around them, which would be really mind blowing, especially given my games background.
So you can imagine.
And then I think, you know, we're starting to get towards what I would call a world model, a model of how the world works, the mechanics of the world, the physics of the world and the things in that world.
And of course, that's what you would need for a true AGI system.
What do you think that looks like?
Well, games were my first love, really.
And doing AI for games was the first thing I did professionally in my teenage years and was the first major AI systems that I built.
And I've always wanted to scratch that itch one day and come back to that.
So, you know, and I will do, I think.
And I think I sort of dream about, you know, what would I have done back in the 90s if I'd had access to the kind of AI systems we have today?
And I think you could build absolutely mind blowing games.
And I think the next stage is, well, I always used to love making open world games; all the games I've made are open world games.
So they're games where there's a simulation and then there's AI characters and then the player interacts with that simulation and the simulation adapts to the way the player plays.
And I always thought they were the coolest games because, with games like Theme Park that I worked on, everybody's game experience would be unique to them.
Because you're kind of co-creating the game.
We set up the parameters, we set up initial conditions, and then you as the player are immersed in it, and then you are co-creating it with the simulation.
But of course, it's very hard to program open world games.
You've got to be able to create content, whichever direction the player goes in, and you want it to be compelling, no matter what the player chooses.
And so you'd always have to build things like cellular automata, actually, those kinds of classical systems, which created some emergent behavior.
But they're always a little bit fragile, a little bit limited.
Now we're maybe on the cusp, in the next few years, five, ten years, of having AI systems that can truly create around your imagination, that can dynamically change the story and storytell the narrative around you and make it dramatic no matter what you end up choosing.
So it's like the ultimate choose your own adventure sort of game.
And I think maybe we're within reach if you think of a kind of interactive version of Veo, and then wind that forward five to ten years and imagine how good it's going to be.
Yeah, exactly.
But what you'd like is a little bit better than just random generation, right?
And also better than a simple hard-coded A or B choice, right?
That's not really open world, right?
As you say, it's just giving you the illusion of choice.
What you want to be able to do is potentially anything in that game environment.
And I think the only way you can do that is to have generative systems, systems that will generate that content on the fly.
Of course, you can't create infinite amounts of game assets, right?
It's expensive enough already how AAA games are made today.
And that was obvious to us back in the 90s when I was working on all these games.
I think maybe Black & White, the game that I worked on the early stages of, still had probably the best learning AI in it.
It was an early reinforcement learning system where you were looking after this mythical creature, growing it and nurturing it.
And depending how you treated it, it would treat the villagers in that world in the same way.
So if you were mean to it, it would be mean.
If you were good, it would be protective.
And so it was really a reflection of the way you played it.
So actually, I've been working on simulations and AI through the medium of games since the beginning of my career.
And really the whole of what I do today is still a follow-on from those early, more hard-coded ways of doing the AI to now fully general learning systems that are trying to achieve the same thing.
One could do that actually in your spare time.
So I'm quite excited about that.
That would be my project if I got the time to do some vibe coding.
I'm actually itching to do that.
And then the other thing is, maybe it's a sabbatical after AGI has been safely stewarded into the world and delivered into the world.
That and then working on my physics theory, as we talked about at the beginning, those would be my two post-AGI projects.
Let's put it that way.
Yeah.
But in my world, they'd be related because it would be an open world simulated game as realistic as possible.
So, you know, what is the universe?
That's speaking to the same question, right?
P equals NP.
I think all these things are related, at least in my mind.
More sophisticated, more diverse ways of living? Yeah, I think so. I mean, for those of us who love games, and I still do, it's almost like you can let your imagination run wild, right? I used to love games
and working on games so much, because of the fusion, especially in the 90s and early 2000s, and maybe the 80s, the sort of golden era of the games industry.
And it was all being discovered.
New genres were being discovered.
We weren't just making games.
We felt we were creating a new entertainment medium that never existed before.
especially with these open world games and simulation games where you as the player were co-creating the story.
There's no other entertainment media where you do that, where you as the audience actually co-create the story.
And of course, now with multiplayer games as well, it can be a very social activity and can explore all kinds of interesting worlds in that.
But on the other hand, it's very important to also enjoy and experience the physical world.
But the question is then, you know, I think we're going to have to confront the question again of what is the fundamental nature of reality?
What is going to be the difference between these increasingly realistic simulations and multiplayer ones and emergent and what we do in the real world?
Yes.
And I guess that's maybe the thing that's been haunting me, obsessing me from the beginning of my career.
If you think about all the different things I've done, they're all related in that way.
The simulation, nature of reality, and what is the bounds of what can be modeled.
Sorry for the ridiculous question, but so far, what is the greatest video game of all time?
What's up there?
Well, my favorite one of all time is Civilization, I have to say.
Civilization 1 and Civilization 2 were my favorite games of all time.
Yes, exactly.
They take a lot of time, these Civilization games.
So I've got to be careful with them.
I don't know.
It's an interesting one.
I mean, we both love games and it's interesting he wrote games as well to start off with.
That's probably because of the era I grew up in, when home computers had just become a thing in the late 80s and 90s, especially in the UK.
I had a Spectrum and then a Commodore Amiga 500, which is my favorite computer ever.
And that's why I learned all my programming.
And of course, a very fun thing to program is games.
So I think it's a great way to learn programming, probably still is.
And then, of course, I immediately took it in directions of AI and simulations, which so I was able to express my interest in games and my sort of wider scientific interests altogether.
And then the final thing I think that's great about games is it fuses artistic design,
art with the most cutting edge programming.
So again, in the 90s, all of the most interesting technical advances were happening in gaming, whether that was AI, graphics, physics engines, hardware, even GPUs, of course, were designed for gaming originally.
So everything that was pushing computing forward
in the 90s was due to gaming.
So interestingly, that was where the forefront of research was going on.
And it was this incredible fusion with art, you know, graphics, but also music and just the whole new media of storytelling.
And I love that.
For me, that sort of multidisciplinary effort is, again, something I've enjoyed my whole life.
Yes, exactly.
So LLMs are kind of proposing some possible solutions and then you use evolutionary computing on top to find some novel part of the search space.
So actually, I think it's an example of a very promising direction, where you combine LLMs or foundation models with other computational techniques.
Evolutionary methods is one, but you could also imagine Monte Carlo tree search, basically many types of search algorithms or reasoning algorithms sort of on top of or using the foundation models as a basis.
So I actually think there's quite a lot of interesting things to be discovered probably with these sort of hybrid systems, let's call them.
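As a hedged sketch of that recipe, and not AlphaEvolve itself: candidate solutions are scored by an objective function, and an LLM-like proposer (here a hypothetical `llm_propose` callable) acts as the mutation and recombination operator over the fittest candidates.

```python
import random

def evolve(llm_propose, score, seed_candidates, generations=50, population=20):
    """Evolutionary search with an LLM-style proposer as the variation operator."""
    pool = [(score(c), c) for c in seed_candidates]
    for _ in range(generations):
        pool.sort(key=lambda t: t[0], reverse=True)
        pool = pool[:population]                        # keep the fittest
        sampled = random.sample(pool, k=min(3, len(pool)))
        parents = [candidate for _, candidate in sampled]
        child = llm_propose(parents)                    # hypothetical LLM call
        pool.append((score(child), child))
    return max(pool, key=lambda t: t[0])                # best (score, candidate)
```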
Being able to simulate evolution, and then using whatever we understand about that nature-inspired mechanism to do search better and better. Yes, so if you think about, again, breaking down the sorts of systems we've built to their really fundamental core, you've got the model of the underlying dynamics of the system.
And then if you want to discover something new, something novel that hasn't been seen before, then you need some kind of search process on top to take you to a novel region of the search space.
And you can do that in a number of ways.
Evolutionary computing is one.
With AlphaGo, we just use Monte Carlo tree search, right?
And that's what found Move 37, the new, never-seen-before strategy in Go.
And so that's how you can go beyond potentially what is already known.
So the model can model everything that you currently know about, right?
All the data that you currently have, but then how do you go beyond that?
So that starts to speak about the ideas of creativity.
How can these systems create something new, discover something new,
Obviously, this is super relevant for scientific discovery or pushing science and medicine forward, which we want to do with these systems.
And you can actually bolt on some fairly simple search systems on top of these models and get you into a new region of space.
Of course, you also have to make sure that you're not searching that space totally randomly.
It would be too big.
So you have to have some objective function that you're trying to optimize and hill climb towards and that guides that search.
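Here is a minimal sketch of that kind of objective-guided search, plain hill climbing rather than anything DeepMind-specific: proposals are only accepted when they improve the objective, which is what stops the search from wandering the space at random.

```python
import random

def hill_climb(initial, propose_neighbor, objective, steps=1000):
    """Greedy hill climbing: keep a proposal only if the objective improves."""
    best, best_score = initial, objective(initial)
    for _ in range(steps):
        candidate = propose_neighbor(best)
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy usage: maximise -(x - 3)^2, whose optimum is at x = 3.
print(hill_climb(
    initial=0.0,
    propose_neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    objective=lambda x: -(x - 3.0) ** 2,
))
```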
Yeah, exactly.
So you can get a bit of an extra property out of evolutionary systems, which is some new emergent capability may come about.
Of course, like happened with life.
Interestingly, with naive, traditional evolutionary computing methods, without LLMs and modern AI, there's a problem.
They were very well studied in the 90s and early 2000s, with some promising results.
But the problem was they could never work out how to evolve new properties, new emergent properties.
You always had a sort of subset of the properties that you put into the system.
But maybe if we combine them with these foundation models, perhaps we can overcome that limitation.
Obviously, natural evolution clearly did because it did evolve new capabilities, right?
So bacteria to where we are now.
So clearly that it must be possible with evolutionary systems to generate new patterns, going back to the first thing we talked about, and new capabilities and emergent properties.
And maybe we're on the cusp of discovering how to do that.
Yeah, and it's amazing that a relatively simple algorithm, effectively, can generate all of this immense complexity that emerges, obviously running over 4 billion years of time.
But you can think about that as, again, a search process that ran over the physics substrate of the universe for a long amount of computational time, but then it generated all this incredible rich diversity.
Yeah.
I think that's going to be one of the hardest things to mimic or model is this idea of taste or judgment.
I think that's what separates the great scientists from the good scientists.
All professional scientists are good technically, right?
Otherwise they wouldn't have made it that far in academia and things like that.
But then do you have the taste to sort of sniff out what the right direction is, what the right experiment is, what the right question is?
So picking the right question is the hardest part of science and making the right hypothesis.
And that's what today's systems definitely they can't do.
So, you know, I often say it's harder to come up with a conjecture, a really good conjecture than it is to solve it.
So we may have systems soon that can solve pretty hard conjectures.
You know, with IMO maths Olympiad problems, our AlphaProof system got a silver medal last year on those really hard problems.
Maybe eventually we'll be able to solve a Millennium Prize kind of problem.
But could a system come up?
with a conjecture worthy of study that someone like Terence Tao would have gone, you know what, that's a really deep question about the nature of maths or the nature of numbers or the nature of physics.
And that is far harder type of creativity.
And we don't really know, systems clearly can't do that.
And we're not quite sure what that mechanism would be, this kind of leap of imagination, like Einstein had when he came up with special relativity and then general relativity with the knowledge he had at the time.
That sweet spot of basically advancing the science and splitting the hypothesis space in two, ideally, so that whether it's true or not true, you've learned something really useful.
And that's hard.
And making something that's also falsifiable and within the technologies that you currently have available.
So it's a very creative process, actually, highly creative process that I think just a kind of naive search on top of a model won't be enough for that.
That's right.
So when you do like, you know, real blue sky research, there's no such thing as failure, really, as long as you're picking experiments and hypotheses that meaningfully split the hypothesis space.
So, you know, and you learn something, you can learn something kind of equally valuable from an experiment that doesn't work.
That should tell you if you've designed the experiment well and your hypotheses are interesting, it should tell you a lot about where to go next.
And then you're effectively doing a search process and using that information in very helpful ways.
Yeah.
So what I've tried to do throughout my career is have these really grand dreams and then, as you've noticed, try to break them down.
It's easy to have a kind of a crazy ambitious dream, but the trick is how do you break it down into manageable, achievable interim steps that are meaningful and useful in their own right?
And so virtual cell, which is what I call the project of modeling a cell, I've had this idea of wanting to do that for maybe more like 25 years.
And I used to talk with Paul Nurse, who is a bit of a mentor of mine in biology.
He founded the Crick Institute and won the Nobel Prize in 2001.
We've been talking about it since back in the 90s.
And I used to come back to it every five years: what would you need to model the full internals of a cell, so that you could do experiments on the virtual cell, in silico, and those predictions would be useful and save you a lot of time in the wet lab, right?
That would be the dream.
Maybe you could 100x speed up experiments by doing most of it in silico, the search in silico, and then you do the validation step in the wet lab.
That's the dream.
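As a hedged sketch of that workflow, with every piece hypothetical rather than an existing Isomorphic or DeepMind tool: a fast in-silico surrogate model scores a whole library of candidates, and only the top-ranked shortlist goes to the slow, expensive wet-lab step, which is where the claimed speed-up would come from.

```python
def in_silico_screen(candidates, surrogate_model, wet_lab_assay, top_k=10):
    """Rank candidates with a cheap virtual model, validate only the best few."""
    # Fast, cheap virtual screening over the whole library.
    scored = [(surrogate_model.predict(c), c) for c in candidates]  # hypothetical
    scored.sort(key=lambda t: t[0], reverse=True)
    shortlist = [c for _, c in scored[:top_k]]

    # Expensive physical validation only on the shortlist.
    return {c: wet_lab_assay(c) for c in shortlist}                 # hypothetical
```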
And so maybe now, finally... I was trying to build these components, AlphaFold being one, that would allow you eventually to model the full interactions, a full simulation of a cell.
And I'd probably start with a yeast cell, and partly that's what Paul Nurse studied, because a yeast cell is like a full organism that's a single cell, right?
So it's the kind of simplest single cell organism.
And so it's not just a cell, it's a full organism.
And
And yeast is very well understood.
And so that would be a good candidate for a kind of full simulated model.
Now, AlphaFold is the solution to the static picture: what does the 3D structure of a protein look like, a static picture of it.
But we know that biology, all the interesting things happen with the dynamics, the interactions.
And that's what AlphaFold 3 is the first step towards: modeling those interactions.
So first of all, pairwise, you know, proteins with proteins, proteins with RNA and DNA.
But then the next step after that would be modeling maybe a whole pathway, maybe like the TOR pathway that's involved in cancer or something like this.
And then eventually you might be able to model, you know, a whole cell.
You know, super fast, yes. I don't know all the biological mechanisms, but some of them take a long time. And so each level of interaction has a different temporal scale that you have to be able to model, and that would be hard. So you'd probably need several simulated systems that can interact at these different temporal dynamics, or at least maybe it's like a hierarchical system, so you can jump up and down the different temporal stages.
So you've got to make a decision when you're modeling any natural system, what is the cutoff level of the granularity that you're going to model it to that then captures the dynamics that you're interested in.
So probably for a cell, I would hope that would be the protein level and that one wouldn't have to go down to the atomic level.
So, you know, and of course, that's where AlphaFold kicks in.
So that would be kind of the basis.
And then you'd build these higher level simulations that take those as building blocks.
And then you get the emergent behavior.
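As a hedged sketch of that hierarchical idea, with both models as hypothetical stand-ins rather than anything that exists today: a fast inner simulator (say, protein-level interactions) is stepped many times for every step of a slow outer simulator (say, pathway- or cell-level state), and a coarse-grained summary is passed upward each time.

```python
def simulate(fast_model, slow_model, fast_state, slow_state,
             outer_steps=100, inner_steps_per_outer=1000):
    """Two-timescale simulation loop: many fast steps per slow step."""
    trajectory = []
    for _ in range(outer_steps):
        for _ in range(inner_steps_per_outer):
            # Fine-grained dynamics, conditioned on the slower context.
            fast_state = fast_model.step(fast_state, context=slow_state)
        # Coarse-grain the fast dynamics into the variables the slow level needs.
        summary = fast_model.summarize(fast_state)
        slow_state = slow_model.step(slow_state, summary)
        trajectory.append(slow_state)
    return trajectory
```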
I think that's one of the, of course, one of the deepest and most fascinating questions.
I love that area of biology.
There's a great book by Nick Lane, one of the top experts in this area called The 10 Great Inventions of Evolution.
I think it's fantastic.
And it also speaks to the question of the great filters: are they behind us, or are they ahead of us?
I think they're most likely in the past, if you read that book, given how unlikely it was to get any life at all.
And then single cell to multi-cell seems an unbelievably big jump that took like a billion years, I think, on Earth to do, right?
So it shows you how hard it was.
For a very long time before they captured mitochondria somehow, right?
I don't see why not, why AI couldn't help with that, some kind of simulation.
Again, it's a bit of a search process through a combinatorial space.
Here's like all the, you know, the chemical soup that you start with, the primordial soup that, you know, maybe was on Earth near these hot vents.
Here's some initial conditions.
Can you generate something that looks like a cell?
So perhaps the next stage after the virtual cell project would be: how could something like that actually emerge from the chemical soup,
going from non-living to living? And it's not a line; it's a continuum that connects physics and chemistry and biology. Yeah, there's no line. I mean, this is the whole reason why I've worked on AI and AGI my whole life, because I think it can be the ultimate tool to help us answer these kinds of questions. And I don't really understand why the average person doesn't worry about this stuff more.
How can we not have a good definition of life, of living and non-living, or of the nature of time, let alone consciousness and gravity and all these things?
And quantum mechanics weirdness.
It's just, to me, this has always been screaming at me in my face.
And it's getting louder.
It's like, what is going on here?
And I mean that in a deeper sense, like in the nature of reality, which has to be the ultimate question that would answer all of these things.
It's sort of crazy if you think about it.
We can stare at each other and all these living things all the time.
We can inspect it in microscopes and take it apart almost down to the atomic level.
And yet we still can't answer that clearly in a simple way, that question of how do you define living?
It's kind of amazing.
Yeah.
I guess we've developed a lot of mechanisms to cope with these deep mysteries that we can see but can't fully understand.
And we have to just get on with daily life, and we keep ourselves busy, right?
In a way, we keep ourselves distracted.
Yes, especially in England.
Yes.
We've created the best weather prediction systems in the world, and they're better than traditional fluid dynamics systems that are usually calculated on massive supercomputers and take days to compute.
We've managed to model a lot of the weather dynamics with neural network systems.
with our Weather Next system.
And again, it's interesting that those kinds of dynamics can be modeled, even though they're very complicated, almost bordering on chaotic systems in some cases.
A lot of the interesting aspects of that can be modeled by these neural network systems, including, very recently, cyclone prediction, where the paths of hurricanes might go, which is of course super useful and super important for the world.
And it's super important to do that very timely and very quickly and as well as accurately.
And I think it's a very promising direction, again, of simulating, so that you can run forward predictions and simulations of very complicated real-world systems.
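To sketch how such a learned forecaster gets used, with `weather_net` as a hypothetical trained model rather than the real Weather Next system: instead of integrating fluid-dynamics equations on a supercomputer, you repeatedly apply a cheap learned next-state predictor to roll the forecast forward.

```python
def forecast(weather_net, current_state, hours_ahead=120, step_hours=6):
    """Autoregressive rollout of a hypothetical learned weather model."""
    states = [current_state]
    for _ in range(hours_ahead // step_hours):
        # One cheap forward pass replaces an expensive numerical solve.
        states.append(weather_net.predict_next(states[-1]))  # hypothetical call
    return states  # gridded fields at each step, from which e.g. a cyclone track can be read
```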
Yeah, hopefully they are.
And I'd love to join them in one of those checks.
They look amazing, right?
To actually experience it one time.
So there are interesting questions around that: how will we actually know that we got there, and what might be the, quote, Move 37 of AGI? My estimate is sort of a 50% chance in the next five years, so, you know, by 2030, let's say. And so I think there's a good chance that could happen. Part of it is what your definition of AGI is, of course; people are arguing about that now.
And mine's quite a high bar and always has been of like, can we match the cognitive functions that the brain has?
Right.
So we know our brains are pretty much general Turing machines, approximately.
And of course, we created incredible modern civilization with our minds.
So that also speaks to how general the brain is.
And for us to know we have a true AGI, we would have to make sure that it has all those capabilities.
It isn't kind of a jagged intelligence where some things it's really good at, like today's systems, but other things it's really flawed at.
And that's what we currently have with today's systems.
They're not consistent.
So you'd want that consistency of intelligence across the board.
And then we have some missing, I think, capabilities, like the true invention capabilities and creativity that we were talking about earlier.
So you'd want to see those.
How you test that, I think you just test it.
One way to do it would be a brute force test of tens of thousands of cognitive tasks that we know that humans can do, and maybe also make the system available to
a few hundred of the world's top experts, the Terence Taos of each subject area, and give them a month or two and see if they can find an obvious flaw in the system.
And if they can't, then I think you can be pretty confident we have a fully general system.
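As a hedged sketch of the blanket-testing idea (the task suite, scoring, and `run_system` call are all hypothetical placeholders): run the system over a large battery of cognitive tasks grouped by domain, then flag any domain where it falls well below its own average, which is exactly the jagged, non-general profile described above.

```python
def consistency_report(run_system, tasks_by_domain, gap_threshold=0.2):
    """Score a system per domain and flag 'jagged' domains far below its mean."""
    domain_scores = {
        domain: sum(run_system(task) for task in tasks) / len(tasks)
        for domain, tasks in tasks_by_domain.items()
    }
    overall = sum(domain_scores.values()) / len(domain_scores)
    jagged = {d: s for d, s in domain_scores.items()
              if s < overall - gap_threshold}
    return overall, domain_scores, jagged
```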
This is special.
Exactly.
So I think there's the sort of blanket testing to just make sure you've got the consistency.
But I think there are the sort of lighthouse moments like the move 37 that I would be looking for.
So one would be inventing a new conjecture or a new hypothesis about physics like Einstein did.
So maybe you could even run the back-test of that very rigorously: have a knowledge cutoff of 1900, give the system everything that was written up to 1900, and then see if it could come up with special relativity and general relativity, right?
Like Einstein did.
That would be an interesting test.
Another one would be, can it invent a game like Go?
Not just come up with Move 37, a new strategy, but can it invent a game that's as deep, as aesthetically beautiful, as elegant as Go?
And those are the sorts of things I would be looking out for.
And probably a system being able to do several of those things, right?
For it to be very general, not just one domain.
And so I think that would be the signs, at least that I would be looking for, that we've got a system that's AGI level.
And then maybe to fill that out, you would also check the consistency, you know, make sure there's no holes,
in that system either.
Yeah, that would be amazing.
So it's not just helping us do that, but actually coming up with something brand new.
Something like that.
Exactly.
It's like, what is this amazing physics idea?
And then we would probably check it with world experts in that domain, right?
And validate it and kind of go through its workings.
And I guess it would be explaining its workings too.
Yeah.
Be an amazing moment.
Well, it may be pretty complicated.
So it could be. The analogy I give there is that I don't think it will be totally mysterious to the best human scientists.
But it may be a bit like, for example, in chess, if I was to talk to Garry Kasparov or Magnus Carlsen and play a game with them and they make a brilliant move.
I might not be able to come up with that move, but they could explain why afterwards that move made sense.
And we would be able to understand it to some degree, not to the level they do.
But, you know, if they were good at explaining, which is actually part of intelligence too, being able to explain what you're thinking about in a simple way.
I think that that will be very possible for the best human scientists.
It could be.
But then afterwards, they'll figure out with their intuition why this works.
And then empirically, one of the great things about games is that it's a sort of scientific test.
Do you win the game or not win?
And then that tells you
okay, that move in the end was good.
That strategy was good.
And then you can go back and analyze that and explain even to yourself a little bit more why, explore around it.
And that's how chess analysis and things like that work.
So perhaps that's why my brain works like that, because I've been doing that since I was four.
And it's sort of hardcore training in that way.
Yeah, and they're going to be pretty complicated to do. But of course, you can also imagine AI systems producing that code, or whatever it is, and then human programmers looking at it, not unaided but with the help of AI tools as well.
So it's going to be kind of interesting: maybe different AI tools, more kind of monitoring tools, from the ones that generated it.
A few versions beyond that, what does that actually look like? Do you think it will be simple? Do you think it will be something like a self-improving program, and a simple one? I mean, potentially that's possible, I would say. I'm not sure it's even desirable, because that's kind of a hard takeoff scenario. But these current systems like AlphaEvolve, they have a human in the loop deciding on various things; they're separate hybrid systems that interact.
One could imagine eventually doing that end-to-end.
I don't see why that wouldn't be possible.
But right now, I think the systems are not good enough to do that in terms of coming up with the architecture of the code.
And again, it's a little bit reconnected to this idea of coming up with a new conjectural hypothesis.
They're good if you give them very specific instructions about what you're trying to do.
But if you give them a very vague high-level instruction, that wouldn't work currently.
And I think that's related to this idea of invent a game as good as go.
Imagine that was the prompt.
That's pretty underspecified.
And so the current systems wouldn't know, I think, what to do with that, how to narrow that down to something tractable.
And I think there's similar, like, look, just make a better version of yourself.
That's too unconstrained.
But we've done it, as you know, with AlphaEvolve, things like faster matrix multiplication.
So when you hone it down to a very specific thing you want, it's very good at incrementally improving that.
But at the moment, these are more like incremental improvements, sort of small iterations.
Whereas if you wanted a big leap in understanding, you'd need a much larger advance.
Yes.
If it was just incremental improvements, that's how it would look.
So the question is, could it come up with a new leap like the Transformers architecture?
Could it have done that back in 2017 when we did it and Brain did it?
And it's not clear that these systems, something like AlphaEvolve, would be able to make such a big leap.
So for sure, these systems are good.
We have systems, I think, that can do incremental hill climbing.
Mm-hmm.
And that's a kind of bigger question about, is that all that's needed from here?
Or do we actually need one or two more big breakthroughs?
Yeah, I don't think anyone has systems that have shown unequivocally those big leaps.
We have a lot of systems that do the hill climbing of the S-curve that you're currently on.
Yeah, I think it would be a leap, something like that.
We certainly feel there's a lot more room just in the scaling.
So actually all stages: pre-training, post-training, and inference-time scaling.
So there's sort of three scalings that are happening concurrently.
And we, again, there, it's about how innovative you can be.
And we, you know, we pride ourselves on having the broadest and deepest research bench.
We have amazing, incredible researchers, people like Noam Shazeer, who came up with Transformers, and Dave Silver, who led the AlphaGo project, and so on.
And it's that research base that means if some new breakthrough is required, like an AlphaGo or Transformers, I would back us to be the place that does that.
So I'm actually quite like it when the terrain gets harder, right?
Because then it veers away from just engineering for performance
to true research and, you know, research plus engineering.
And that's our sweet spot.
And I think that's harder.
It's harder to invent things than to, you know, fast follow.
And so, you know, we don't know.
I would say it's kind of 50-50 whether new things are needed or whether the scaling the existing stuff is going to be enough.
And so in true kind of empirical fashion, we're pushing both of those as hard as possible.
The new blue sky ideas and, you know, maybe about half our resources are on that.
And then scaling to the max the current capabilities.
And we're still seeing some, you know, fantastic progress on each different version of Gemini.
Well, I mean, if you look at the history of the last decade or 15 years, maybe 80 to 90% of the breakthroughs that underpin the modern AI field today came from, you know, originally Google Brain, Google Research and DeepMind.
So, yeah, I would back that to continue, hopefully.
I'm not very worried about that, partly because I think there's enough data and it's been proven to get the systems to be pretty good.
And this goes back to simulations again.
Do you have enough data to make simulations so that you can create more data,
synthetic data that's from the right distribution?
Obviously, that's the key.
So you need enough real world data in order to be able to create those kinds of data generators.
And I think that we're at that step at the moment.
Exactly.
Yeah.
I think so, for several reasons.
I think there's the amount of compute you have for training.
Often it needs to be co-located.
So actually even bandwidth constraints between data centers can affect that.
So there's additional constraints even there.
And that's important for training, obviously, the largest models you can.
But there's also, because now AI systems are in products and being used by billions of people around the world, you need a ton of inference compute now.
And then on top of that, there's the thinking systems, the new paradigm of the last year where they get smarter the longer amount of inference time you give them at test time.
So all of those things need a lot of compute.
And I don't really see that slowing down.
And as AI systems become better, they'll become more useful and there'll be more demand for them.
So that's both from the training side and the inference side, and the training side actually is only just one part of that.
It may even become the smaller part of the overall compute that's required.
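To make the "thinking systems" paradigm mentioned above concrete, here is a minimal, self-contained sketch of one simple way extra inference-time compute can buy reliability, a best-of-N majority vote. The `sample_answer` function is a hypothetical stand-in for a stochastic model call, not anything Gemini-specific.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for one stochastic model call.
    It simulates a noisy solver that returns the right answer ~60% of the time."""
    return "42" if random.random() < 0.6 else str(random.randint(0, 100))

def answer_with_more_thinking(question: str, n_samples: int) -> str:
    """Spend more inference-time compute: sample N candidate answers and
    return the most common one (a simple self-consistency / majority vote)."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# More samples means more inference compute, and usually a more reliable answer.
for n in (1, 5, 25, 125):
    print(n, answer_with_more_thinking("toy question", n))
```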
Yeah, yeah, exactly.
We did a little video of the servers frying eggs and things, and that's right.
And we're going to have to figure out how to do that.
There's a lot of interesting hardware innovations that we do.
As you know, we have our own TPU line, and we're looking at, like, inference-only things, inference-only chips, and how we can make those more efficient.
We're also very interested in building AI systems, and we have done, to help with energy usage itself.
So helping data center energy use, like making the cooling systems efficient, grid optimization, and then eventually things like helping with plasma containment in fusion reactors.
We've done lots of work on that with Commonwealth Fusion, and also one could imagine reactor design.
And then materials design, I think, is one of the most exciting new areas.
Solar panel materials, room-temperature superconductors, which have always been on my list of dream breakthroughs, and optimal batteries.
And I think a solution to any, you know, one of those things would be absolutely revolutionary for, you know, climate and energy usage.
And we're probably close, you know, and again, in the next five years to having AI systems that can materially help with those problems.
I think fusion and solar are the two that I would bet on.
Solar, I mean, you know, it's the fusion reactor in the sky, of course.
And I think really the problem there is batteries and transmission.
So, you know, as well as more and more efficient solar material, perhaps eventually, you know, in space, these kind of Dyson sphere type ideas.
And fusion, I think, is definitely doable, it seems, if we have the right design of reactor and we can control the plasma fast enough and so on.
And I think both of those things will actually get solved.
So we'll probably have at least, those are probably the two primary sources of renewable, clean, almost free, or perhaps free energy.
I would not be that surprised if there's a, like a hundred year timescale from here.
I mean, I think it's pretty clear that if we crack the energy problems in one of the ways we've just discussed, fusion or very efficient solar,
then if energy is kind of free and renewable and clean, that solves a whole bunch of other problems.
So for example, the water access problem goes away because you can just use desalination.
We have the technology, it's just too expensive.
So only fairly wealthy countries like Singapore and Israel and so on actually use it.
But if it was cheap, then all countries that have a coast could.
But also you'd have unlimited rocket fuel.
You could just separate seawater out into hydrogen and oxygen using energy, and that's rocket fuel.
So combined with Elon's amazing self-landing rockets, then it could be like a bus service to space.
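As a rough sanity check on the desalination and rocket-fuel points above, here is a back-of-envelope sketch. The per-unit energy figures are commonly cited ballpark assumptions, not numbers quoted in the conversation.

```python
# Back-of-envelope only, using commonly cited ballpark figures (assumptions, not exact):
#   seawater reverse osmosis: roughly 3-4 kWh per cubic metre of fresh water
#   practical electrolysis:   roughly 50-55 kWh per kilogram of hydrogen

def desalination_cost_usd(m3_water: float, kwh_per_m3: float = 3.5,
                          usd_per_kwh: float = 0.10) -> float:
    """Approximate electricity cost to desalinate a volume of seawater."""
    return m3_water * kwh_per_m3 * usd_per_kwh

def electrolysis_energy_kwh(kg_hydrogen: float, kwh_per_kg: float = 52.0) -> float:
    """Approximate electricity needed to split water into that much hydrogen."""
    return kg_hydrogen * kwh_per_kg

print(f"1,000 m3 of fresh water at $0.10/kWh: ~${desalination_cost_usd(1000):,.0f}")
print(f"1,000 m3 of fresh water at $0.01/kWh: ~${desalination_cost_usd(1000, usd_per_kwh=0.01):,.0f}")
print(f"1 tonne of hydrogen: ~{electrolysis_energy_kwh(1000):,.0f} kWh of electricity")
# The point: the cost is dominated by energy, so near-free energy makes both near-free.
```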
So that opens up space.
incredible new resources and domains.
Asteroid mining, I think, will become a thing and maximum human flourishing to the stars.
That's what I dream about as well as Carl Sagan's idea of bringing consciousness to the universe, waking up the universe.
I think human civilization will do that in the full sense of time if we get AI right and crack some of these problems with it.
Because for the first time in human history, we wouldn't be resource constrained.
And I think that could be an amazing new era for humanity where it's not...
zero sum, right?
I have this land, you don't have it.
Or, you know, if the tigers have their forest, then the local villagers can't use it, so what are they going to use?
I think that this will help a lot.
No, it won't solve all problems because there's still other human foibles that will still exist.
But
it will at least remove one, I think, one of the big vectors, which is scarcity of resources, you know, including land and more materials and energy.
And, you know, we should be, I sometimes call it like, and others call it about this kind of radical abundance era where there's plenty of resources to go around.
Of course, the next big question is making sure that that's fairly, you know, shared fairly and everyone in society benefits from that.
Yeah.
And I think that's what modern sport is.
I love watching football, and I used to play it a lot as well.
It's very visceral and it's tribal.
And I think it does channel a lot of those energies, which I think come from a kind of human need to belong to some group, into a fun way, a healthy way, a constructive rather than destructive thing.
And going back to games again, I think that's originally why they're so great for kids to play, things like chess: they're great little microcosm simulations of the world.
They're simulations of the world too.
They're simplified versions of some real world situation, whether it's poker or go or chess, different aspects or diplomacy, different aspects of the real world.
And it allows you to practice at them too.
And because, you know, how many...
times do you get to practice a massive decision moment in your life?
You know, what job to take, what university to go to, you know, you get maybe, I don't know, a dozen or so key decisions one has to make, and you've got to make those as best as you can.
And games is a kind of safe environment, repeatable environment where you can get better at your decision-making process.
And it maybe has this additional benefit of channeling some energies into more creative and constructive pursuits.
Yeah, and I think in martial arts, as I understand it, but also in things like chess, at least the way I took it, it's a lot to do with self-improvement.
Self-knowledge, you know, that, okay, so I did this thing.
It's not really about beating the other person.
It's about maximizing your own potential.
If you do it in a healthy way, you learn to use victory and losses in a way.
Don't get carried away with victory.
and think you're just the best in the world.
And the losses keep you humble and always knowing there's always something more to learn.
There's always a bigger expert that can mentor you.
You know, I think you learn that, I'm pretty sure, in martial arts.
And I think that's also the way that at least I was trained in chess.
And so, in the same way, it can be very hardcore and very important.
And of course you want to win, but you also need to learn how to deal with setbacks in a healthy way, and channel that feeling you have when you lose into something constructive: next time I'm going to improve this, right, or get better at this.
Yes.
The mastery.
Yeah.
There's nothing more satisfying in a way is like, oh, wow, this thing I couldn't do before.
Now I can.
And again, games and physical sports and mental sports are beautiful because you can measure that progress.
Yeah.
Yeah.
We're quite addicted to these numbers going up, and maybe that's why we made games like that, because obviously we're hill-climbing systems ourselves.
Right.
Yeah, well, firstly, it's an absolutely incredible team that we have, you know, led by Koray and Jeff Dean and Oriol, and the amazing team we have on Gemini.
Absolutely world class.
So you can't do it without the best talent.
And of course, you know, we have a lot of great compute as well.
But then it's the research culture we've created.
Right.
And basically coming together, both the different groups in Google, you know, there was Google Brain, a world-class team, and then the old DeepMind, and pulling together all the best people and the best ideas and gathering around to make the absolute greatest system we could.
And it has been hard, but we're all very competitive and we, you know, love research.
This is so fun to do.
And we, you know, it's great to see our trajectory.
It wasn't a given, but we're very pleased with where we are, and the rate of progress is the most important thing.
So if you look at where we've come from two years ago to one year ago to now, I think what we call relentless progress, along with relentless shipping of that progress, has been very successful.
You know, it's unbelievably competitive, the whole space, the whole AI space with some of the greatest entrepreneurs and leaders and companies in the world all competing now because everyone's realized how important AI is.
And it's been very pleasing for us to see that progress.
Right, it is insane.
Yeah, exactly.
That's what relentlessness looks like.
I think it's a question of like any big company, you know, ends up having a lot of layers of management and things like that.
It's sort of the nature of how it works.
But I still operate, and I was always operating with the old DeepMind, as a startup still.
Large one.
but still as a startup.
And that's what we still act like today with Google DeepMind and acting with decisiveness and the energy that you get from the best smaller organizations.
And we try to get the best of both worlds, where we have these incredible surfaces with billions of users, incredible products that we can power up with our AI.
And that's amazing.
And, you know, there are very few places in the world where you can do incredible world-class research on the one hand, and then plug it in and improve billions of people's lives the next day.
That's a pretty amazing combination.
And we're continually fighting and cutting away bureaucracy to allow the research culture and the relentless shipping culture to flourish.
And I think we've got a pretty good balance whilst being responsible with it, you know, as you have to be as a large company and also with a number of, you know, huge product surfaces that we have.
Yeah, I know.
It's funny because if you live on X and Twitter and I mean, it's sort of at least my feed, it's all AI.
And there's certain places where, you know, in the valley and certain pockets where everyone's just all they're thinking about is AI.
But
A lot of the normal world hasn't come across it yet.
Right.
And we want it to be as good as possible.
And in a lot of cases, it's just under the hood, powering, making something like maps or search work better.
And ideally, for a lot of those people, it should just be seamless.
It's just new technology that makes their lives more productive and helps them.
Yeah, well, I mean, again, that comes back from my game design days where I used to design games for millions of gamers.
People forget about that.
I've had experience with cutting edge technology in product.
That is how games was in the 90s.
And so I love actually the combination of cutting edge research and then being applied in a product.
and to power a new experience. And so I think it's the same skill, really, of, you know, imagining what it would be like to use it viscerally, and having good taste, coming back to earlier. The same thing that's useful in science, I think, can also be useful in product design.
I've just, you know, always been a sort of multidisciplinary person.
So I don't see the boundaries really between, you know, arts and sciences or product and research.
It's a continuum for me.
I mean, I like working on products that are cutting edge, that have cutting-edge technology under the hood.
I wouldn't be excited about them if they were just run-of-the-mill products.
So it requires this invention and creativity capability.
Yeah.
Sure.
Well, look, I felt that it's sort of a tradition, I think, of Nobel Prize lectures that you're supposed to be a little bit provocative.
I mean, it's such a fast evolving space.
We're evaluating this all the time, but where we are today is that you want to continually simplify things, whether that's the interface or what you build on top of the model.
You kind of want to get out of the way of the model.
The model train is coming down the track and it's improving unbelievably fast.
This relentless progress we talked about earlier, you know, you look at 2.5 versus 1.5 and it's just a gigantic improvement.
And we expect that again for the future versions.
And so the models are becoming more capable.
And I wanted to follow that tradition.
So the interesting thing about the design space in today's world, these AI-first products, is you've got to design not for what the technology can do today, but for what it can do in a year's time.
So you actually have to be a very technical product person.
Because you've got to kind of have a good intuition for and feel for, okay, that thing that I'm dreaming about now can't be done today.
What I was talking about there is if you take a step back and you look at all the work that we've done, especially with the Alpha X projects.
But is the research track on schedule to basically intercept that in six months or a year's time?
So you kind of got to intercept where this highly changing technology is going.
as well as the new capabilities are coming online all the time that you didn't realize before that can allow like deep search to work.
Or now we've got video generation.
What do we do with that?
This multimodal stuff, you know, is it, one question I have is, is it really going to be the current UI that we have today?
These text box chats?
It seems very unlikely once you think about these super multimodal systems.
Shouldn't it be something more like Minority Report, where you're sort of vibing with it in a kind of collaborative way?
It seems very restricted today.
I think we'll look back on today's interfaces and products and systems as quite archaic, maybe in just a couple of years.
So I think there's a lot of space actually for innovation to happen on the product side as well as the research side.
So I'm thinking AlphaGo, of course, AlphaFold.
Yeah, I mean, typing is a very low bandwidth way of doing it, even if you're a very fast typer.
And I think we're going to have to start utilizing other devices, whether that's smart glasses, you know, audio, earbuds, and eventually maybe some sorts of neural devices where we can increase the input and the output bandwidth to something, you know, maybe 100x of what is today.
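A quick, assumption-laden arithmetic sketch of why typing counts as low bandwidth, and roughly what a 100x interface would have to carry. All figures are illustrative assumptions.

```python
# Rough arithmetic on the "typing is low bandwidth" point; all figures are assumptions.
words_per_minute = 70      # a reasonably fast typist
chars_per_word = 5         # common rule of thumb
bits_per_char = 8          # raw encoding; the information content of English
                           # text is lower still, roughly 1-2 bits per character

chars_per_second = words_per_minute * chars_per_word / 60
raw_bits_per_second = chars_per_second * bits_per_char
print(f"Typing: ~{chars_per_second:.1f} chars/s, ~{raw_bits_per_second:.0f} bits/s raw")
print(f"A 100x interface would need to sustain ~{100 * raw_bits_per_second / 1000:.1f} kbit/s")
```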
What they really are is we're building models of very combinatorially high dimensional spaces.
Yes.
It's the sort of thing that I guess Steve Jobs always talked about.
It's simplicity, beauty, and elegance that we want.
And nobody's there yet, in my opinion.
And that's what I would like us to get to.
Again, it sort of speaks to Go again as a game, the most elegant, beautiful game.
Can you make an interface as beautiful as that?
And actually, I think we're going to enter an era of AI generated interfaces that are probably personalized to you.
So it fits the way that you, your aesthetic, your feel, the way that your brain works.
that if you try to brute-force a solution, find the best move in Go, or find the exact shape of a protein, and you enumerated all the possibilities, there wouldn't be enough time in the time of the universe.
And the AI kind of generates that depending on the task, you know, that feels like that's probably the direction we'll end up in.
Yeah, well, so the way it works with our different version numbers is we, you know, we try to collect, so maybe it takes, you know, roughly six months or something to do a new kind of full run and the full productization of a new version.
And during that time, lots of new, interesting research iterations and ideas come up.
And we sort of collect them all together.
You know, you could imagine the last six months worth of interesting ideas on the architecture front.
Maybe it's on the data front.
It's like many different possible things.
And we collect, package that all up, test which ones are likely to be useful for the next iteration, and then bundle that all together.
And then we start the new, you know, giant hero training run.
Right.
And then of course that gets monitored.
And then at the end of the pre-training, then there's all the post-training.
There's many different ways of doing that, different ways of patching it.
So there's a whole experimentation phase there, which you can also get a lot of gains out of.
And that's where you see the version numbers usually referring to the base model, the pre-trained model.
And then the interim versions of 2.5, you know, and the different sizes and the different little additions, they're often patches or post-training ideas that can be done afterwards.
So you have to do something much smarter.
off the same basic architecture.
And then of course, on top of that, we also have different sizes, pro and flash and flashlight that are often distilled from the biggest ones, you know, the flash model from the pro model.
And that means we have a range of different choices.
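The conversation doesn't spell out how the smaller models are actually distilled from the biggest ones, but the textbook recipe is soft-label distillation. Here is a minimal sketch of that idea, with toy logits standing in for real model outputs.

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits: np.ndarray, student_logits: np.ndarray,
                      temperature: float = 2.0) -> float:
    """KL(teacher || student) on temperature-softened next-token distributions.
    Minimising this, alongside the usual loss on real data, is the standard way a
    small student model is trained to imitate a large teacher model."""
    p = softmax(teacher_logits, temperature)       # soft targets from the big model
    q = softmax(student_logits, temperature)       # small model's current predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())

# Toy usage: one batch of next-token logits over a four-token vocabulary.
teacher = np.array([[4.0, 1.0, 0.5, 0.1]])
student = np.array([[2.0, 1.5, 1.0, 0.5]])
print(distillation_loss(teacher, student))
```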
And what we did in both cases was build models of those environments.
If you are the developer, do you want to prioritize performance or speed, right?
And cost.
And we like to think of this Pareto frontier of, on the one hand, the y-axis is like performance, and then the x-axis is cost or latency and speed, basically.
And we have models that completely define the frontier.
So whatever your trade-off is that you want as an individual user or as a developer, you should find one of our models satisfies that constraint.
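A minimal sketch of the Pareto-frontier picture described above, with entirely made-up model names, costs, and quality scores. Given a budget, a developer just picks the best non-dominated point they can afford.

```python
from typing import List, Tuple

# Hypothetical (name, cost per unit of work, quality score) points; not real model numbers.
MODELS: List[Tuple[str, float, float]] = [
    ("small-lite", 0.1, 60.0),
    ("small",      0.3, 72.0),
    ("large",      1.5, 85.0),
    ("mid-weak",   1.0, 65.0),   # dominated: costlier than "small" yet lower quality
]

def pareto_frontier(points: List[Tuple[str, float, float]]) -> List[Tuple[str, float, float]]:
    """Keep only models that no other model beats on both cost and quality."""
    frontier = []
    for name, cost, quality in points:
        dominated = any(c <= cost and q >= quality and (c < cost or q > quality)
                        for _, c, q in points)
        if not dominated:
            frontier.append((name, cost, quality))
    return sorted(frontier, key=lambda m: m[1])

def pick_model(budget: float) -> Tuple[str, float, float]:
    """Best quality achievable on the frontier within a given cost budget."""
    affordable = [m for m in pareto_frontier(MODELS) if m[1] <= budget]
    return max(affordable, key=lambda m: m[2])

print(pareto_frontier(MODELS))
print(pick_model(budget=0.5))   # -> ("small", 0.3, 72.0) with these made-up numbers
```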
And that guided the search in a smart way.
Exactly.
And then there's this constant process of converging upstream, we call it.
Ideas from the product surfaces, or from the post-training, and even further downstream than that, you kind of upstream into the core model training for the next run.
And that makes it tractable.
So then the main model, the main Gemini track becomes more and more general.
So if you think about protein folding, which is obviously a natural system, you know, why should that be possible?
And eventually, you know, AGI.
One hero run at a time.
Yes, exactly.
A few hero runs later.
You need them, but it's important that you don't overfit to them, right?
So there shouldn't be the be-all and end-all.
So there's LM Arena, or it used to be called LMSYS, that turned out sort of organically to be one of the main ways people like to test these systems, at least the chatbots.
Obviously, there's loads of academic benchmarks that test mathematics and coding ability, general language ability, science ability, and so on.
And then we have our own internal benchmarks that we care about.
It's a kind of multi-objective optimization problem, right?
You don't want to be good at just one thing.
We're trying to build general systems that are good across the board.
And you try and make no regret improvements.
How does physics do that?
So where you improve in like, you know, coding, but it doesn't reduce your performance in other areas, right?
So that's the hard part, because of course you could put more coding data in, or more, I don't know, gaming data in, but then does it make things worse in your language system,
You know, proteins fold in milliseconds in our bodies.
or in your translation systems and other things that you care about.
So you've got to kind of continually monitor this increasingly larger and larger suite of benchmarks.
And also when you stick them into products, these models, you also care about the direct usage and the direct stats and the signals that you're getting from the end users, whether they're coders or the average person using the chat interfaces.
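One way to make the "no-regret improvements" idea concrete is a simple acceptance check across a benchmark suite. The benchmark names and scores below are invented purely for illustration.

```python
# Toy "no-regret improvement" check over a benchmark suite.
# Benchmark names and scores are invented for illustration only.
baseline  = {"math": 71.0, "coding": 68.0, "translation": 82.0, "science_qa": 75.0}
candidate = {"math": 73.5, "coding": 74.0, "translation": 81.6, "science_qa": 75.2}

def no_regret(baseline: dict, candidate: dict, tolerance: float = 0.5) -> bool:
    """Accept only if the candidate improves somewhere and drops by no more than
    `tolerance` points on any other benchmark."""
    improves = any(candidate[k] > baseline[k] for k in baseline)
    regressions = [k for k in baseline if candidate[k] < baseline[k] - tolerance]
    if regressions:
        print("Regressions beyond tolerance:", regressions)
    return improves and not regressions

print(no_regret(baseline, candidate))   # translation dips 0.4, within tolerance, so True
```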
So somehow physics solves this problem that we've now also solved computationally.
And I think the reason that's possible is that in nature, natural systems have structure because they were subject to evolutionary processes that shape them.
Yeah.
And then other things that are even more esoteric come into play like, you know, the style of the persona of the system.
You know, how it, you know, is it verbose?
Is it succinct?
Is it humorous?
You know, and different people like different things.
Yeah.
You know, it's very interesting.
It's almost like cutting edge part of psychology research or personality research.
You know, I used to do that in my PhD, like five factor personality.
What do we actually want our systems to be like?
And different people will like different things as well.
So these are all just sort of new problems in product space that I don't think have ever really been tackled before, but we're going to sort of rapidly have to deal with now.
Yeah, exactly.
Well, I don't see it as sort of winning.
I mean, I think winning is the wrong way to look at it, given how important and consequential what it is we're building.
So funnily enough, I try not to view it like a game or competition, even though that's a lot of my mindset.
It's about, in my view, all of us, those of us at the leading edge, have a responsibility to steward this unbelievable technology that could be used for incredible good, but also has risks, steward it safely into the world for the benefit of humanity.
And if that's true, then you can maybe learn what that structure is.
That's always what I've dreamed about and what we've always tried to do.
And I hope that's what eventually the community, maybe the international community, will rally around when it becomes obvious that as we get closer and closer to AGI that that's what's needed.
It's been okay so far.
I try to pride myself in being collaborative.
I'm a collaborative person.
Research is a collaborative endeavor.
Science is a collaborative endeavor, right?
It's all good for humanity in the end if you cure terrible diseases and you come up with an incredible cure.
This is net win for humanity.
And the same with energy, all of the things that I'm interested in helping solve with AI.
So I just want that technology to exist in the world and be used for the right things.
And the kind of the benefits of that, the productivity benefits of that being shared for the benefit of everyone.
So I try to maintain good relations with all the leading lab people.
They have very interesting characters, many of them, as you might expect.
But yeah, I'm on good terms, I hope, with pretty much all of them.
And I think that's going to be important when things get even more serious than they are now.
that there are those communication channels and that's what will facilitate cooperation or collaboration if that's what is required, especially on things like safety.
Yeah, that would be awesome.
And we've talked about that in the past and it may be a cool thing that, that, you know, we can do.
And I agree with you, it'd be nice to have kind of side projects in a way, where one can just lean into the collaboration aspect of it.
And it's a sort of win-win for both sides.
And it kind of builds up that collaborative muscle.
I agree.
And I would love to see it. A lot of people, a lot of the other labs, talk about science, but I think we're really the only ones using it for science and doing that.
And that's why projects like AlphaFold are so important to me.
And, I think, to our mission, which is to show how AI can...
be clearly used in a very concrete way for the benefit of humanity.
And also we spun out companies like Isomorphic off the back of AlphaFold to do drug discovery.
And it's going really well, building, you can think of it as, additional AlphaFold-type systems to go into chemistry space to help accelerate drug design.
And the examples I think we need to show and society needs to understand are where AI can bring these huge benefits.
Yeah, I sometimes call it survival of the stablest or something like that, because, of course, there's evolution for life, for living things.
Yeah, well, look, of course, you know, there's a strategy that Meta is taking right now.
I think, from my perspective at least, the people that are real believers in the mission of AGI, and what it can do, and who understand the real consequences, both good and bad, and what that responsibility entails,
I think they're mostly doing it, like myself, to be on the frontier of that research.
So, you know, they can help influence the way that goes and steward that technology safely into the world.
And, you know, Meta right now are not at the frontier.
Maybe they'll manage to get back on there.
And, you know, it's probably rational what they're doing from their perspective because they're behind and they need to do something.
But there's also, if you think about geological time, so the shape of mountains, that's been shaped by weathering processes.
But I think there's more important things than just money.
Of course, one has to pay, you know, people their market rates and all of these things.
And that continues to go up.
And I was expecting this because more and more people are finally realizing, leaders of companies, what I've always known for 30 plus years now, which is that AGI is the most important technology probably that's ever going to be invented.
So in some senses, it's rational to be doing that.
But I also think there's a much bigger question.
I mean, people in AI these days are very well paid.
I remember when we were starting out back in 2010, I didn't even pay myself for a couple of years.
because there wasn't enough money.
We couldn't raise any money.
And these days, interns are being paid the amount that we raised as our first entire seed round.
So it's pretty funny.
And I remember the days where I used to have to work for free and almost pay my own way to do an internship, right?
Now it's all the other way around.
But that's just how it is.
It's the new world.
But I think that we've been discussing what happens post-AGI and energy systems are solved and so on.
What is even money going to mean?
So I think, you know, we're going to have much bigger issues to work through in the economy, how the economy functions in that world, and companies too.
right, over thousands of years.
So I think, you know, it's a little bit of a side issue about salaries and things like that today.
But then you can even take it cosmological, the orbits of planets, the shapes of asteroids.
Well, it's interesting that with programming, and it's again counterintuitive to what we thought years ago, maybe some of the skills that we think of as harder skills have turned out to be the easier ones, for various reasons.
But, you know, coding and math, because you can create a lot of synthetic data and verify if that data is correct.
So because of that nature, it's easier to make things like synthetic data to train from.
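A minimal sketch of why verifiability makes coding and maths data easy to synthesize: generate problems whose answers can be checked automatically, and keep only the verified examples. The model call here is a hypothetical stand-in.

```python
import random

def make_problem(rng: random.Random) -> tuple[str, int]:
    """Generate a synthetic arithmetic problem together with its ground-truth answer."""
    a, b = rng.randint(10, 999), rng.randint(10, 999)
    return f"What is {a} * {b}?", a * b

def hypothetical_model_answer(question: str, rng: random.Random) -> int:
    """Stand-in for a model's attempt; it parses and computes, with occasional mistakes."""
    a, b = (int(x) for x in question.removeprefix("What is ").removesuffix("?").split(" * "))
    return a * b if rng.random() < 0.9 else a * b + 1

rng = random.Random(0)
verified = []
for _ in range(1000):
    question, truth = make_problem(rng)
    if hypothetical_model_answer(question, rng) == truth:   # automatic verification
        verified.append((question, truth))                  # keep only checked examples
print(f"Verified {len(verified)} of 1000 synthetic examples for training")
```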
These have all been survived kind of processes that have acted on them many, many times.
Um, it's also an area, of course, we're all interested in because as programmers, right, to help us and get faster at it and more productive.
So I think, for the next era, like the next five, ten years, what we're going to find is that people who kind of embrace these technologies and become almost at one with them,
whether that's in the creative industries or the technical industries, will become sort of superhumanly productive,
I think.
So the great programmers will be even better; they'll be even 10x what they are today.
Because they'll be able to use their skills to utilize the tools to the maximum, exploit them to the maximum.
And so I think that's what we're going to see in the next domain.
So that's going to cause quite a lot of change, right?
And so that's coming.
A lot of people benefit from that.
So I think one example of that is if coding becomes easier, it becomes available to many more creatives to do more.
But I think the top programmers will still have huge advantages in terms of specifying things.
So if that's true, then there should be some sort of pattern that you can kind of reverse learn and a kind of manifold really that helps you search to the right solution, to the right shape, and actually allow you to predict things about it in an efficient way, because it's not a random pattern.
going back to specifying what the architecture should be, the question is how to guide these coding assistants in a way that's useful, and check whether the code they produce is good.
So I think there's plenty of headroom there for the foreseeable next few years.
Yeah, I think that's right.
Any time where there's a lot of disruption and change, you know, and we've had this, it's not just this time, we've had this many times in human history with the internet, mobile, but before that was the industrial revolution.
And it's going to be one of those eras where there will be a lot of change.
I think there'll be new jobs we can't even imagine today, just like the internet created.
And then those people with the right skill sets to ride that wave will become incredibly valuable, right, those skills.
But maybe people will have to relearn or adapt a bit their current skills.
And the thing that's going to be harder to deal with this time around is that I think we're going to see something like probably 10 times the impact the Industrial Revolution had, but 10 times faster as well.
So instead of 100 years, it takes 10 years.
And so that's going to make it, it's like 100x the impact and the speed combined.
So that's what's, I think, going to make it more difficult for society to deal with.
And there's a lot to think through.
And I think we need to be discussing that right now.
And I encourage the top economists in the world and philosophers to start thinking about this:
how is society going to be affected by this and what should we do, including things like universal basic provision or something like that, where a lot of the increased productivity gets shared out and distributed to society, maybe in the form of services and other things, and where, if you want more than that, you still go and get some incredibly rare skills and things like that and make yourself unique.
Right.
So it may not be possible for man-made things or abstract things like factorizing large numbers, because unless there's patterns in the number space, which there might be, but if there's not and it's uniform, then there's no pattern to learn.
But there's a basic provision that is provided.
Definitely.
And I think we'll need new governance structures, and probably institutions, to help with this transition.
So I think political philosophy and political science is going to be key to that.
But I think the number one thing, first of all, is to create more abundance of resources.
So that's the number one thing, increase productivity, get more resources, maybe eventually get out of the zero-sum situation.
Then the second question is how to use those resources and distribute those resources.
But yeah, you can't do that without having that abundance first.
There's no model to learn that will help you search.
You have to do brute force.
So in that case, you know, you maybe need a quantum computer, something like this.
But most things in nature that we're interested in are not like that.
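A toy illustration of the factoring point above, where there is no learnable structure and a classical approach is stuck with brute force; the counts are order-of-magnitude only.

```python
# Toy illustration: with no structure to exploit, trial division on an n-bit number
# needs on the order of sqrt(2**n) = 2**(n/2) candidate divisors.
for bits in (32, 64, 128, 256, 1024, 2048):
    print(f"{bits:>5}-bit number: ~2^{bits // 2} trial divisions")
# 2^1024 operations is hopeless for any classical brute force, which is why factoring
# is the kind of problem where a quantum algorithm (Shor's) changes the picture.
```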
Well, that would be an amazing experience.
You know, he's a fantastic mind.
And I also love the way he spent a lot of his time at Princeton at the Institute for Advanced Study, a very special place for thinking.
And it's amazing how much of a polymath he was and the spread of things he helped invent, including, of course, the von Neumann architecture that all the modern computers are based on.
And he had amazing foresight.
I think he would have loved where we are today.
And I think he would have really enjoyed AlphaGo, being a games thinker.
They have structure that evolved for a reason and survived over time.
He also did game theory.
I think he foresaw a lot of what would happen with learning machine systems that are kind of grown, I think he called it, rather than programmed.
I'm not sure, even, maybe he wouldn't be that surprised.
It's the fruition of what I think he already foresaw in the 1950s.
I wonder what advice he would give.
Yeah, I'm sure.
I'm sure there is.
I mean, we, you know, study it.
I read a lot of books from that time as well, chronicling that era and some of the brilliant people involved.
And if that's true, I think that's potentially learnable by a neural network.
I agree with you.
I think maybe there needs to be more dialogue and understanding there.
I hope we can learn from those times.
I think the difference here is that the AI has so many, it's a multi-use technology.
Obviously, we're trying to do things like solve all diseases, help with energy and scarcity, these incredible things.
This is why all of us and myself, I started on this journey 30 plus years ago.
But of course, there are risks too.
And probably von Neumann, my guess is he foresaw both.
And I think he sort of said, I think to his wife, that computers would be even more impactful in the world.
And as we just discussed, you know, I think that's right.
I think it's going to be 10 times at least of the Industrial Revolution.
So I think he's right.
So I think he would have been, I imagine, fascinated by where we are now.
I agree with that.
I think we need to approach it with whatever you want to call it, a spiritual dimension or humanist dimension.
It doesn't have to be to do with religion, but this idea of a soul, what makes us human, this spark that we have, perhaps it's to do with consciousness when we finally understand that.
I think that has to be at the heart of the endeavor.
And technology, I've always seen technology as the enabler, right?
The tools that enable us to flourish and to understand more about the world.
And I'm sort of with Feynman on this, and he used to always talk about science and art being companions, right?
You can understand it from both sides, the beauty of a flower, how beautiful it is, and also understand why the colors of the flower evolved like that.
That just makes it more beautiful, just the intrinsic beauty of the flower.
And I've always sort of seen it like that.
And maybe, you know, in the Renaissance times, the great discoverers then, like people like Da Vinci, you know, I don't think he saw any difference between science and art and perhaps religion, right?
Yes, right.
Everything is just part of being human.
and being inspired about the world around us.
Yeah.
That's the philosophy I try to take.
One of my favorite philosophers is Spinoza.
I think he combined that all very well, this idea of trying to understand the universe and understanding our place in it.
So they can be efficiently rediscovered or recovered because nature is not random, right?
That was his way of understanding religion.
I think that's quite beautiful.
For me, all of these things are related, interrelated, the technology and what it means to be human.
And I think it's very important though that we remember that as when we're immersed in the technology and the research.
I think a lot of researchers that I see in our field are a little bit too narrow and only understand the technology.
And I think also that's why it's important for this to be debated by society at large.
And I'm very supportive of things like the AI summits that will happen and governments understanding it.
And I think that's one good thing about the chatbot era and the product era of AI, is that the everyday person can actually interact with cutting-edge AI and feel it for themselves.
Everything that we see around us, including like the elements that are more stable, all of those things, they're subject to some kind of selection process, pressure.
Yeah, be able to adapt.
Society will be able to adapt to these technologies, like we've always done in the past with the incredible technologies we've invented in the past.
I hope not.
I think that would be very dangerous to do.
And I think also, you know, not the right use of the technology.
I hope we'll end up with something more collaborative if needed, like more like a CERN project.
you know, where it's research focused and the best minds in the world come together to carefully complete the final steps and make sure it's responsibly done before, you know, like deploying it to the world.
We'll see.
I mean, it's difficult with the current geopolitical climate, I think, to see cooperation, but things can change.
And
I think, at least on the scientific level, it's important for the researchers to keep in touch and keep close to each other, at least on those kinds of topics.
Yeah, science has always been, I think, a very collaborative endeavor.
And, you know, scientists know that it's a collective endeavor as well, and we can all learn from each other.
So perhaps it could be a vector to get a bit of cooperation.
Well, look, I don't have a PDoom number.
The reason I don't is because I think it would imply a level of precision that is not there.
So I don't know how people are getting their PDoom numbers.
I think it's a little bit of a ridiculous notion, because what I would say is it's definitely non-zero and probably non-negligible.
So that in itself is pretty sobering.
And my view is it's just hugely uncertain, right?
What these technologies are going to be able to do, how fast are they going to take off, how controllable they're going to be.
Some things may turn out to be, and hopefully, like way easier than we thought, right?
But it may be there's some really hard problems that are harder than we guessed today.
And I think we don't know that for sure.
And so under those conditions of a lot of uncertainty, but huge stakes both ways.
On the one hand, we could solve all diseases, energy problems, the scarcity problem, and then travel to the stars and bring consciousness to the stars and maximum human flourishing.
On the other hand are these sort of P doom scenarios.
So given the uncertainty around it and the importance of it, it's clear to me the only rational, sensible approach is to proceed with cautious optimism.
So we want the benefits, of course, and all of the amazing things that AI can bring.
And actually, I would be really worried for humanity if given the other challenges that we have, climate, aging, resources, all of that, if I didn't know something like AI was coming down the line, right?
How would we solve all those other problems?
I think it's hard.
So I think it could be amazingly transformative for good.
But on the other hand, you know, there are these risks that we know are there, but we can't quite quantify.
So the best thing to do is to use the scientific method to do more research to try and more precisely define those risks and, of course, address them.
And I think that's what we're doing.
I think there probably needs to be 10 times more effort of that than there is now as we're getting closer and closer to the AGI line.
And then I think they operate over different timescales, and they're equally important to address.
So there's just the common or garden variety of, you know, bad actors using new technology, in this case a general purpose technology, and repurposing it for harmful ends.
And that's a huge risk.
And I think that has a lot of complications because generally, you know, I'm in huge favor of open science and open source.
And in fact, we did it with all our science projects like AlphaFold and all of those things for the benefit of the scientific community.
But how does one restrict bad actors'
access to these powerful systems, whether they're individuals or even rogue states, but at the same time enable access for good actors to maximally build on top of them?
It's a pretty tricky problem that I've not heard a clear solution to.
So there's the bad actor use case problem.
And then there's obviously, as the systems become more agentic and closer to AGI and more autonomous, how do we ensure the guardrails hold, and that they stick to what we want them to do and stay under our control?
Yeah.
I mean, I've always been fascinated by the P equals NP question and what is modelable by classical systems, i.e.
Yeah, it's a hard problem.
I mean, look, we can maybe also use the technology itself to help early warning on some of the bad actor use cases, right?
Whether that's bio or nuclear or whatever it is, like AI could be potentially helpful there as long as the AI that you're using is itself reliable, right?
So it's a sort of interlocking problem and that's what makes it very tricky
And again, it may require some agreement internationally, at least between China and the US of some basic standards, right?
Yeah, it's a special moment.
non-quantum systems, you know, Turing machines in effect.
And, you know, it was great for Lee Sedol.
And, you know, I think it's in a way they were sort of inspiring each other.
We as a team were inspired by Lee Sedol's brilliance and nobility.
And then maybe he got inspired by, you know, what AlphaGo was doing to then conjure this incredible inspirational moment.
It's all captured very well in the documentary about it.
And I think that will continue in many domains where there's this, at least again, for the foreseeable future of the humans bringing in their ingenuity and asking the right question, let's say, and then utilizing these tools in a way that then cracks a problem.
And that's exactly what I'm working on, actually, in kind of my few moments of spare time with a few colleagues: whether there should be, you know, maybe a new class of problems that is solvable by this type of neural network process and kind of maps onto these natural systems.
I think that's what I've always imagined when I was a kid and starting on this journey of like, I was, of course, fascinated by things like consciousness, did a neuroscience PhD to look at how the brain works, especially imagination and memory.
I focused on the hippocampus.
I always thought the best way, of course, one can philosophize about it and have thought experiments.
and maybe even do actual experiments like you do in neuroscience on real brains.
But in the end, I always imagine that building AI, a kind of intelligent artifact, and then comparing that to the human mind and seeing what the differences were would be the best way to uncover what's special about the human mind.
if indeed there is anything special.
And I suspect there probably is, but it's going to be hard to, you know, I think this journey we're on will help us understand that and define that.
And, you know, there may be a difference between carbon-based substrates that we are and silicon ones when they process information.
You know, one of the best definitions I like of consciousness is it's the way information feels when we process it, right?
Yeah.
It could be.
I mean, it's not a very helpful scientific explanation.
But I think it's kind of an interesting, intuitive one.
And so, you know, on this journey, this scientific journey we're on will, I think, help uncover that mystery.
So, you know, the things that exist in physics.
Well, look, Penrose is an amazing thinker, one of the greatest of the modern era.
And we've had a lot of discussions about this.
Of course, we cordially disagree. You know, he collaborated with a lot of good neuroscientists to see if he could find mechanisms for quantum mechanical behavior in the brain.
And to my knowledge, they haven't found anything convincing yet.
and have structure.
So my betting is that it is just classical computing that's going on in the brain, which suggests that all the phenomena are modelable or mimicable by a classical computer.
But we'll see.
There may be these final mysterious things, the feeling of consciousness, the qualia, these kinds of things that philosophers debate, where it's unique to the substrate.
So I think that could be a very interesting new way of thinking about it.
We may even come towards understanding that if we do things like Neuralink and have neural interfaces to the AI systems, which I think we probably will eventually, maybe to keep up with the AI systems, we might actually be able to feel for ourselves what it's like to compute on silicon.
And maybe that will tell us.
So I think it's going to be interesting.
I had a debate once with the late Daniel Dennett about
Why do we think each other are conscious?
Okay, so it's for two reasons.
One is you're exhibiting the same behavior that I am.
So that's one thing.
And it sort of fits with the way I think about physics in general, which is that, you know, I think information is primary.
Behaviorally, you seem like a conscious being if I am.
But the second thing, which is often overlooked, is that we're running on the same substrate.
So if you're behaving in the same way and we're running on the same substrate, it's most parsimonious to assume you're feeling the same experience that I'm feeling.
But with an AI that's on silicon, we won't be able to rely on the second part, even if it exhibits the first part, that behavior looks like a behavior of a conscious being.
It might even claim it is.
But we wouldn't know how it actually felt.
And it probably couldn't know what we felt, at least in the first stages.
Maybe when we get to superintelligence and the technologies that builds, perhaps we'll be able to bridge that.
Right.
Exactly.
We never had to confront that before.
Information is the most sort of fundamental unit of the universe, more fundamental than energy and matter.
Yeah.
Well, for information to be computed, not on a carbon system.
I mean, there are animal studies on this, of course, of higher animals like, you know, killer whales and dolphins and dogs and monkeys and elephants, and they certainly have some aspects of consciousness, right?
Even though they might not be that smart in an IQ sense.
So we can already empathize with that.
And maybe even some of our systems one day, like we built this thing called DolphinGemma, you know, a version of our system that was trained on dolphin and whale sounds.
I think they can all be converted into each other.
And maybe we'll be able to build an interpreter or translator at some point.
It should be pretty cool.
Well, what gives me hope is I think our almost limitless ingenuity, first of all.
I think the best of us and the best human minds are incredible.
But I think of the universe as a kind of informational system.
And, you know, I love, you know, meeting and watching any human that's the top of their game, whether that's sport or science or art.
You know, it's just nothing more wonderful than that, seeing them in their element in flow.
I think it's almost limitless.
You know, our brains are...
general systems, intelligent systems.
So I think it's almost limitless what we can potentially do with them.
And then the other thing is our extreme adaptability.
I think it's going to be okay in terms of there being a lot of change, but look where we are now with effectively our hunter-gatherer brains.
How is it we can, you know, we can cope with the modern world, right?
flying on planes, doing podcasts, you know, playing computer games and virtual simulations.
I mean, it's already mind-blowing given that our mind was developed for, you know, hunting buffaloes on the tundra.
And so I think this is just the next step.
And it's actually kind of interesting to see how society has already adapted to this mind-blowing AI technology we have today already.
It's sort of like, oh, I talk to chatbots.
Totally fine.
Not to the level that you can do it, Lex, I don't think.
All the things that are deeply human.
That's right.
Thank you very much, Lex.
Yeah, I think it's one of the most fundamental questions, actually, if you think of physics as informational.
And the answer to that, I think, is going to be very enlightening.
Yeah, I think that there are actually a huge class of problems that could be couched in this way, the way we did AlphaGo and the way we did AlphaFold, where you model what the dynamics of the system is, the properties of that system, the environment that you're trying to understand, and then that makes the search for the solution or the prediction of the next step efficient.
basically polynomial time, so tractable by a classical system, which a neural network is.
It runs on normal computers, classical computers, Turing machines in effect.
And I think it's one of the most interesting questions there is, is how far can that paradigm go?
I think we've proven, and the AI community in general, that classical systems, Turing machines, can go a lot further than we previously thought.
They can do things like model the structures of proteins and play Go to better than world champion level.
And a lot of people would have thought, maybe 10, 20 years ago, that was decades away, or maybe you would need some sort of quantum machines, quantum systems, to be able to do things like protein folding.
And so I think we haven't really even sort of scratched the surface yet of what classical systems so-called could do.
And of course, AGI being built on a neural network system, on top of a classical computer, would be the ultimate expression of that.
And I think the limit, you know, what the bounds of that kind of system, what it can do, it's a very interesting question and directly speaks to the P equals NP question.
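A minimal sketch of the paradigm described above, under stated assumptions: a hand-written heuristic plays the role of the learned model of the environment, and guiding the search with it turns an exponential enumeration into a linear number of evaluations.

```python
# Toy illustration of the paradigm described above: a learned model of the system acts
# as a prior that collapses an exponential search into a polynomial one. The "value
# function" here is a hand-written heuristic standing in for a trained neural network.

TARGET = "gcatgcatgcat"              # hidden 12-letter sequence over a 4-letter alphabet
ALPHABET = "acgt"

def learned_value(prefix: str) -> int:
    """Stand-in for a learned evaluation: how many leading characters are correct."""
    return sum(1 for a, b in zip(prefix, TARGET) if a == b)

def guided_search() -> tuple[str, int]:
    """Grow the answer one step at a time, always following the model's evaluation."""
    prefix, evaluations = "", 0
    for _ in range(len(TARGET)):
        candidates = [prefix + c for c in ALPHABET]
        evaluations += len(candidates)
        prefix = max(candidates, key=learned_value)
    return prefix, evaluations

print("exhaustive enumeration would need", len(ALPHABET) ** len(TARGET), "candidates")
print("model-guided search:", guided_search())   # 48 evaluations instead of ~16.7 million
```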
And then, of course, in the end, it turned out 100 moves later that Move 37, the stone that was put down on the board, was in exactly the right place to be decisive for the whole game. So now it's studied as a great classic game in the history of Go, that game and that move.
And then, even more exciting than that, is that's exactly what we hoped these systems would do, because the whole point of my motivation, my whole life of working on AI, was to use AI to accelerate scientific discovery. And it's those kinds of new innovations, albeit in a game, that we were looking for from our systems.
Yeah, well, look, I think there'll be a lot of Move 37s in almost every area of human endeavor. Of course, the thing I've been focusing on since then is mostly how we can apply those types of AI techniques, those general learning techniques, to big areas of science. I call them root node problems.
So, problems where, if you think of the tree of all knowledge that's out there in the universe, you can unlock some root nodes that open up entire branches or new avenues of discovery that people can build on afterwards. And for us, protein folding and AlphaFold was one of those. It was always top of my list.
I have a kind of mental list of these types of problems that I've come across throughout my life, just from being generally interested in all areas of science, and I think through which ones would both be hugely impactful and also suitable for these types of techniques.
And I think we're going to see a kind of new golden era of these types of new strategies and new ideas in very important areas of human endeavor. Now, one thing I would say, though, is that we haven't fully cracked creativity yet, so I don't want to claim that.
I often describe three levels of creativity, and I think AI is capable of the first two. The first one would be interpolation. You give an AI system a million pictures of cats and you say, create me a prototypical cat, and it will just average all the million cat pictures that it's seen.
And that prototypical one won't be in the training set, so it will be a unique cat. But that's not very interesting from a creative point of view, right? It's just an averaging. The second level would be what I call extrapolation.
That's more like AlphaGo, where you've played 10 million games of Go and looked at a few million human games, but then you extrapolate from what's known to a new strategy never seen before, like Move 37. That's very valuable already; I think that is true creativity.
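To make the first two levels concrete, here is a toy sketch, with a small random array standing in as an assumed placeholder for the million cat pictures; nothing here is an actual pipeline, it just illustrates why interpolation is "just averaging" while extrapolation is search against an objective.

```python
import numpy as np

# Level 1, interpolation: the "prototypical cat" is just an average of the
# training set. It is technically novel (not literally in the data) but it is
# only an average of what has been seen.
cat_images = np.random.rand(1_000, 64, 64, 3)   # stand-in for the real dataset
prototypical_cat = cat_images.mean(axis=0)      # shape (64, 64, 3)

# Level 2, extrapolation: instead of averaging examples, you optimise against
# an objective (winning the game), which can land on strategies that appear
# nowhere in the data. That is what something like Move 37 amounts to.
```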
But then there's a third level, which I call invention or out-of-the-box thinking: not only can you come up with a Move 37, but could you have invented Go? Or another measure I like to use: if we went back to the time of Einstein in the early 1900s, could an AI system actually come up with general relativity with the same information that Einstein had at the time?
And clearly today the answer to those things is no. It can't invent a game as great as Go, and it wouldn't be able to invent general relativity just from the information that Einstein had at the time. So there's still something missing from our systems to get true out-of-the-box thinking. I think it will come, but we just don't have it yet.
Yeah, with AlphaGo we cracked the pinnacle of board games; Go was always considered the Mount Everest of games AI for board games. But by some measures there are even more complex games, if you take on the most complex strategy games that you can play on computers.
And StarCraft II is acknowledged to be the classic of the real-time strategy genre. It's a very complex game: you've got to build up your base, your units and other things, so every game is different, and the game is very fluid and you've got to move many units around in real time.
And the way we cracked that was to add in this additional level of a league of agents competing against each other, all seeded with slightly different initial strategies. Then you get a sort of survival of the fittest: you have a tournament between them all, so it's a kind of multi-agent setup.
And the strategies that win out in that tournament go through to the next epoch, and then you generate new strategies around them, and you keep doing that for many generations. So you have the idea of self-play that we had in AlphaGo, but you're adding in this multi-agent, competitive, almost evolutionary dynamic.
And eventually you get an agent, or a set of agents, that is kind of the Nash distribution of agents: no other strategy dominates them, but they dominate the largest number of other strategies. So you have this kind of Nash equilibrium, and you pick out the top agents from that. That succeeded very well with this type of very open-ended gameplay.
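A rough, toy illustration of that league dynamic follows. Here `mutate` and `play_match` are hypothetical placeholders for further training and for actual games, and keeping the all-round tournament winners only approximates the Nash-style selection described above.

```python
import random

def league_training(seed_agents, mutate, play_match, generations=10, matches=5):
    """Toy version of the league / tournament idea.

    seed_agents : list of agents seeded with different initial strategies
    mutate(a)        -> a perturbed copy of agent a (stands in for more training)
    play_match(a, b) -> 1 if a beats b, 0 otherwise
    Each generation every agent plays everyone else; the winners survive and
    spawn new variants, giving the competitive, evolutionary dynamic.
    """
    league = list(seed_agents)
    for _ in range(generations):
        wins = {id(a): 0 for a in league}
        for a in league:
            for b in league:
                if a is b:
                    continue
                for _ in range(matches):
                    wins[id(a)] += play_match(a, b)
        league.sort(key=lambda a: wins[id(a)], reverse=True)
        survivors = league[: max(2, len(league) // 2)]
        league = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return league  # the top of this list approximates the strongest set of agents
```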
So it's quite different from chess or Go, where the rules are very prescribed, the pieces you get are always the same, and it's a very ordered game. Something like StarCraft is much more chaotic, so it's interesting to have to deal with that. It has hidden information too: you can't see the whole map at once, you have to explore it.
So it's not a perfect-information game, and partial-information situations are another thing we wanted our systems to be able to cope with, because that's actually more like the real world. Very rarely in the real world do you have full information about everything.
Usually you only have partial information and then you have to infer everything else in order to come up with the right strategies.
Well, look, I'm glad you brought up Homo Ludens; it's a wonderful book. It basically argues that game playing is a fundamental part of being human. In many ways, that's the act of play, and what could be more human than that? And then of course it leads into creativity, fun, all of these things that get built on top of that.
I've always loved games as a way to practice and train your own mind in situations that you might only ever face a handful of times in real life, but that are usually very critical: what company to start, what deal to make, things like that. Games are a way to practice those scenarios.
And if you take games seriously, then you can actually simulate a lot of the pressures one would have in decision-making situations. And going back to earlier, that's why I think chess is such a great training ground for kids to learn because it does teach them about all of these situations. And so, of course, it's the same for AI systems too.
Games were the perfect proving ground for our early AI system ideas, partly because they were invented to be challenging and fun for humans to play. And of course there are different levels of gameplay, so we could start with very simple games like Atari games and go all the way up to the most complex computer games like StarCraft, and continue to challenge our systems.
So we were in the sweet spot of the S-curve: not so easy that it's trivial, and not so hard that you can't even see whether you're making any progress. You want to be in that steepest part of the S-curve, where you're making almost exponential progress, and we could keep picking harder and harder games as our systems improved.
The other nice feature about games is that, because they're a kind of microcosm of the real world, they've usually been boiled down to very clear objective functions: winning the game or maximizing the score. That's very easy to specify to a reinforcement learning system or an agent-based system, so it's perfect for hill-climbing against,
and for measuring Elo scores and ratings, knowing exactly where you are. And then finally, of course, you can calibrate yourself against the best human players, so you can calibrate what your agents are doing in their own tournaments.
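The Elo ratings mentioned here follow a standard formula. Below is a minimal sketch of the usual update rule, not tied to any particular tooling; the K-factor of 32 is just a common default.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Standard Elo update after one game.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss (from A's side).
    The expected score comes from a logistic curve on the rating difference,
    so a 400-point gap corresponds to roughly a 10:1 expected win ratio.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: a 2800-rated agent beating a 2780-rated one gains roughly 15 points.
print(elo_update(2800, 2780, 1.0))
```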
In the end, even with the StarCraft agent, we had to challenge a professional grandmaster at StarCraft to make sure our systems hadn't somehow overfitted to their own tournament strategies. We needed that grounding against a genuine human grandmaster StarCraft player.
The final thing is, of course, that with games you can generate as much synthetic data as you want, which is coming into vogue right now with the debate about data limitations for large language models: how many tokens are left in the world, has it read everything in the world. For games, you can just play the system against itself and generate lots more
data from the right distribution.
Well, I've always been a huge proponent of simulations and AI, and it's also interesting to think about the real world in terms of a computational system. So I've always been involved with trying to build very realistic simulations of things. And now, of course, that interacts with AI, because you can have an AI that learns a simulator of some real-world system
just by observing that system, or all the data from that system. So I think the current debate is to do with these large foundation models, which now pretty much use the whole internet. Once you've tried to learn from that, what's left? That's all the language that's out there. Of course, there are other modalities like video and audio, and I don't think we've exhausted all of those multimodal tokens, but even that will reach some limit.
So then the question becomes: can you generate synthetic data? I think that's why you're seeing quite a lot of progress with maths and coding, because in those domains it's quite easy to generate synthetic data. The problem with synthetic data is: are you creating data from the right distribution, the actual distribution? Does it mimic the real distribution?
And also, are you generating data that's correct? For things like maths, coding and gaming, you can actually test the final data and verify that it's correct before you feed it in as training data for a new system.
So certain areas are very amenable to this, and it turns out they're the more abstract areas of human thinking, where you can verify and prove that something is correct. That unlocks the ability to create a lot of synthetic data.
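Here is a minimal sketch of that generate-then-verify idea. The toy arithmetic generator stands in for a real maths or code generator, and all names are illustrative assumptions rather than any production pipeline.

```python
import random

def make_synthetic_dataset(generator, verifier, n_wanted):
    """Generate-then-verify loop for synthetic training data.

    generator()        -> a candidate example, e.g. a (problem, solution) pair
    verifier(example)  -> True only if the example checks out (a proof checker,
                          a unit test, a game engine replaying the moves, ...)
    Only verified examples are kept, which is why maths, code and games are
    such friendly domains for synthetic data.
    """
    dataset = []
    while len(dataset) < n_wanted:
        example = generator()
        if verifier(example):
            dataset.append(example)
    return dataset

# Toy usage: arithmetic problems are trivially verifiable.
def gen():
    a, b = random.randint(1, 99), random.randint(1, 99)
    return (f"{a} + {b}", a + b)

def check(example):
    problem, answer = example
    return eval(problem) == answer

data = make_synthetic_dataset(gen, check, 100)
```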
Yeah, well, interestingly, if we'd talked about this five years ago, or certainly 10 years ago, I would have said that some real-world experience, maybe through robotics, and usually when we talk about embodied intelligence we mean robotics, but possibly also a very accurate simulator, some kind of ultra-realistic game environment,
would be needed to fully understand, say, the physics of the world around you and the physical context you're in. There's actually a whole branch of neuroscience predicated on this, called action in perception: the idea that one can't fully perceive the world unless one can also act in it.
And the kind of argument goes: how can you really understand the concept of the weight of something, for example, unless you can pick things up
and compare them with each other, and then you get this idea of weight? Can you really get that notion just by looking at things? It seems hard, certainly for humans; I think you need to act in the world. So this is the idea that acting in the world is part of your learning: you're an active learner. And in fact, reinforcement learning is like that, because the decisions you make
give you new experiences; those experiences depend on the actions you took, and they're also the experiences you'll subsequently learn from. So in a sense, reinforcement learning systems are involved in their own learning process, because they're active learners. And I think you can make a good argument that that's also required in the physical world.
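A bare-bones sketch of what that active-learning loop looks like in code, using a generic, hypothetical environment interface rather than any particular library:

```python
def run_episode(env, policy, learn, max_steps=1000):
    """The point about active learning in one loop: the data the agent learns
    from is exactly the data its own actions produced.

    env    : assumed to have reset() -> obs and step(action) -> (obs, reward, done)
    policy : obs -> action
    learn  : consumes (obs, action, reward, next_obs) transitions
    """
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(obs)                        # the agent chooses...
        next_obs, reward, done = env.step(action)   # ...which determines what it sees...
        learn(obs, action, reward, next_obs)        # ...and that is what it learns from
        obs = next_obs
        if done:
            break
```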
Now, I'm not sure I believe that anymore, because of our systems, especially our video models. If you've seen Veo 2, our latest video model, completely state of the art, which we released late last year...
It kind of shocked even me, even though we're building this thing, that basically by watching a lot of YouTube videos it can figure out the physics of the world. There's a sort of funny Turing test, in inverted commas, of video models, which is: can you chop a tomato?
Can you show a video of a knife chopping a tomato, with the fingers and everything in the right place, where the tomato doesn't magically spring back together and the knife doesn't pass through the tomato without cutting it, et cetera? And Veo can do it.
And if you think through the complexity of the physics you have to understand to do this, what you've got to keep consistent and so on, it's pretty amazing. It's hard to argue that it doesn't understand something about the physics of the world. And it's done it without acting in the world, and certainly not acting as a robot in the world.
Now, it's not clear to me that there is a limit to just passive perception.
The interesting thing is that I think this has huge consequences for robotics as an application of embodied intelligence, because of the types of models we've built, Gemini and now also Veo, which we'll be combining together at some point in the future. We've always built Gemini, our foundation model, to be multimodal from the beginning.
And the reason we did that, and we still lead on all the multimodal benchmarks, is twofold. One is that we have a vision of a universal digital assistant, an assistant that goes around with you on your digital devices but also in the real world, maybe on your phone or a glasses device, and actually helps you
in the real world: recommend things to you, help you navigate around, help with physical things like cooking, stuff like that. For that to work, it obviously needs to understand the context you're in. It's not just the language I'm typing into a chatbot; it actually has to understand the 3D world I'm living in.
I think to be a really good assistant, you need that. But the second thing is, of course, that it's exactly what you need for robotics as well. And we released our first big Gemini robotics work, which has caused a bit of a stir.
That's the beginning of showcasing what we can do with these multimodal models that understand the physics of the world, with a little bit of robotics fine-tuning on top for the motor actions and the planning a robot needs to do. And it looks like it's going to work.
So now I think these general models are going to transfer to the embodied robotics setting without too much extra special-casing, extra data or extra effort, which is probably not what most people, even the top roboticists, would have predicted five years ago.
Well, look, of course we pioneered that whole area of thinking systems, because that's what our original game systems all did: AlphaGo, but most famously AlphaZero, our follow-up system that could play any two-player game. And there you always have to think about your time budget, the compute budget you've got to actually do the planning part with.
The model you can pre-train, just like we do with our foundation models today: you can play millions of games offline, and then you have your model of chess or Go or whatever it is. But at test time, at runtime, you've only got one minute to think about your move, one minute times however many computers you've got running. So that's still a limited compute budget.
What's very interesting today is this trade-off: do you use a more expensive, larger base model? In our case we have different sizes, Gemini Flash, Pro, or even bigger, which is Ultra. Those models are more costly and take longer to run, but they're more accurate and more capable.
So you can run a bigger model with a smaller number of planning steps, or a very efficient, smaller model that's slightly less powerful but that you can run for many more steps. Currently what we're finding is that it's roughly about equal. But of course, what we want to find is the Pareto frontier of that.
The exact right trade-off of the size of the model, and the expense of running that model, versus the amount of thinking time and thinking steps you're able to do per unit of compute. That's fairly cutting-edge research right now that all the leading labs are probably experimenting on, and I don't think there's a clear answer yet.
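A toy sketch of that trade-off under a fixed wall-clock budget; the model list and the `think` function are hypothetical stand-ins, not the Gemini API. Mapping the resulting quality across model sizes is essentially tracing out the Pareto frontier just mentioned.

```python
def best_answer_under_budget(models, think, budget_seconds):
    """Compare a big model with few thinking steps against a small model with many.

    models : list of (model, seconds_per_step) pairs
    think(model, n_steps) -> (answer, estimated_quality)
    For a fixed budget, each model gets budget // seconds_per_step thinking
    steps, and we keep whichever configuration scores best.
    """
    best = None
    for model, seconds_per_step in models:
        n_steps = max(1, int(budget_seconds // seconds_per_step))
        answer, quality = think(model, n_steps)
        if best is None or quality > best[1]:
            best = (answer, quality, model, n_steps)
    return best
```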
I think we're entering a new era in coding, which is going to be very interesting. As you say, all the leading labs are pushing on this frontier for many reasons; it's easy to create synthetic data, so that's another reason everyone's pushing on this vector.
And I think we're going to move into a world, sometimes called vibe coding, where you're basically coding with natural language. We've seen this before with computers. I remember when I first started programming in the 80s, we were doing assembler, and of course that seems crazy now: why would you write machine code?
Then you get C, and then Python, and so on. Really, one could see this as the natural evolution of going higher and higher up the abstraction stack of programming languages, leaving more and more of the lower-level implementation details to the compiler, in a sense.
And one could view this as the natural final step: we just use natural language, and everything becomes a super-high-level programming language. I think that's maybe what we'll eventually get to.
And the exciting thing there is that it will make coding accessible to a whole new range of people: creatives, designers, game designers, app writers, who normally would not have been able to implement their ideas without the help of teams of programmers. So that's going to be pretty exciting, I think, from a creativity point of view.
But it may also be very good for coders, certainly in the next few years, because I think, and this is true in general with these AI tools, that the people who are going to get the most benefit out of them initially will be the experts in that area who also know how to use these tools in precisely the right way.
Whether that's prompting or interfacing with your existing code base, there's going to be this interim period where the current experts who embrace these new tools, whether that's filmmakers, game designers, or coders, are going to be superhuman in terms of what they're able to do.
Whether that's prompting or interfacing with your existing code base, there's going to be this sort of interim period where I think the current experts who embrace these new tools whether that's filmmakers, game designers, or coders, are going to be superhuman in terms of what they're able to do.
Whether that's prompting or interfacing with your existing code base, there's going to be this sort of interim period where I think the current experts who embrace these new tools whether that's filmmakers, game designers, or coders, are going to be superhuman in terms of what they're able to do.
And I see that with some film director and film designer friends of mine who are able to create pitch decks for new film ideas in a day on their own, and a very high-quality pitch deck at that, one they can use to pitch for a $10 million budget.
Normally they would have had to spend a few tens of thousands of dollars just to get to that pitch deck, which is a huge risk for them. So I think there's going to be a whole new, incredible set of opportunities. And then, if you think about the creative arts, there's the question of whether there'll be new
ways of working that are much more fluid, where instead of using Adobe Photoshop or something, you're actually co-creating the thing with a fluid, responsive tool.
And that could feel more like Minority Report or something, I imagine, with that kind of interface where there's this thing swirling around you. But it will require people to get used to a very new workflow to take maximum advantage of it. I think when they do, it will probably be incredible for those people.
They'll be like 10x more productive.
I think, first of all, we live in a multimodal world, right? We have our five senses, and that's what makes us human. So if we want our systems to be brilliant tools or fantastic assistants, I think in the end they're going to have to understand the spatial, temporal world that we live in, not just our linguistic, mathematical, abstract-thinking world.
I think they'll need to be able to act in, plan in, and process things in the real world, and understand the real world. The potential for robotics is huge. I don't think it's had its ChatGPT or AlphaFold moment yet, like we've seen in language and in science, or its AlphaGo moment. I think that's to come, but I think we're close.
And as we talked about before, the shortest path I see to that happening now is these general multimodal models eventually being good enough, and maybe we're not very far away from that, to install on a robot, perhaps a humanoid robot with cameras.
Now there are additional challenges: you've got to fit it locally, maybe on local chips, to get the latency low enough, and so on. But as we all know, just wait a couple of years and the systems that are state of the art today will fit on a little mobile chip tomorrow. So I think multimodal is very exciting from that point of view: robotics, assistants.
And then finally, for creativity, I think Gemini 2.0, which you can try now in AI Studio, is the first model in the world that allows native image generation. So it's not calling a separate program or a separate model, in our case Imagen 3, which you can try separately, but Gemini itself natively producing images in the chat flow.
And people seem to be really enjoying using that. It's like you're now talking to a multimodal chatbot. You can get it to express emotions in pictures, or you can give it a picture, tell it to modify it, and then continue to work on it with word descriptions: can you remove that background, can you do this?
So this goes back to the earlier thing we said about programming, or any of these creative things, in a new workflow. I think we're just seeing a glimpse of how that might look in image creation if you try out this new Gemini 2 experimental model. And that's just the beginning; of course it will work with video and coding and all sorts of things.
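That conversational image-editing workflow can be pictured as a simple loop: send an image plus an instruction, get back a revised image, and keep refining in words. The sketch below is purely illustrative; `MultimodalChat`, its methods, and the model name are hypothetical stand-ins, not the real Gemini SDK.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for a multimodal chat client: this is NOT the real
# Gemini API, just an illustration of the iterative edit-by-description loop.
@dataclass
class Turn:
    text: str
    image: bytes | None = None

@dataclass
class MultimodalChat:
    model: str
    history: list[Turn] = field(default_factory=list)

    def send(self, text: str, image: bytes | None = None) -> Turn:
        # A real client would call the model here; we just return a placeholder.
        self.history.append(Turn(text, image))
        reply = Turn(text=f"[model reply to: {text!r}]", image=b"<edited image bytes>")
        self.history.append(reply)
        return reply

chat = MultimodalChat(model="hypothetical-multimodal-model")
photo = b"<original image bytes>"

# One continuous conversation: each instruction refines the previous result.
step1 = chat.send("Remove the background from this picture.", image=photo)
step2 = chat.send("Now make the lighting warmer.", image=step1.image)
step3 = chat.send("Add a subtle film grain.", image=step2.image)
```

The point is simply that the whole history stays in one chat, so each edit builds on the last.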
We started DeepMind in London and are still headquartered here for several reasons. This is where I grew up, it's what I know, and it's where I had all my contacts. But the competitive reason was that we felt the talent coming out of universities in the UK and in Europe was the equivalent of the top US ones.
Cambridge, my alma mater, and Oxford are up there with the MITs and Harvards and the Ivy Leagues. I think they're always together in the top 10 of the world university tables.
And this was certainly true in 2010: if you had, say, a PhD in physics out of Cambridge and you didn't want to work in finance at a hedge fund in the City, but you wanted to stay in the UK and be intellectually challenged, there were not that many options for you. There were not that many deep tech startups. So we were really the first to prove that it could be done.
And actually, we were a big draw for the whole of Europe. We got the best people from the technical universities in Munich and in Switzerland and so on, and for a long while that was a huge competitive advantage. Salaries were also cheaper here than on the West Coast, and you weren't competing against the big incumbents. And also, it was conducive.
The other reason I chose to do that was that I knew AGI was our plan from the beginning: solve intelligence and then use it to solve everything else. That was how we articulated our mission statement, and I still like that framing of it. It was a 20-year mission.
And if you're on a 20-year mission, and we're now 15 years in, and I think we're, unbelievably, sort of on track, which is strange for any 20-year mission, you don't want to be too distracted along the way in a deep technology, deep scientific mission.
So one of the issues I find with Silicon Valley is that there are lots of benefits, obviously: contacts and support systems and funding and amazing things, and the density of talent there. But it is quite distracting, I feel. Everyone and their dog is trying to do a startup that they think is going to change the world, but it's just a photo app or something.
Well, first of all, thanks for having me on the podcast. Chess, for me, is where it all started in gaming. I started playing chess when I was four, very seriously, all through my childhood, playing for most of the England junior teams and captaining a lot of them.
And then the cafes are filled with this. Of course, it leads to some great things, but it's also a lot of noise if one actually wants to commit to a long-term mission that you think is the most important thing ever,
and you don't want you and your staff to be too distracted by thinking, maybe I could make a hundred million if I jumped over and quickly did this gaming app or something. I think that's the milieu you're in, in the Valley, at least back then; maybe it's less true now.
There are probably more mission-focused startups now. But I also wanted to prove it could be done elsewhere. And the final reason I think it's important is that AI is going to affect everything, the whole world. It's going to affect every industry. It's going to affect every country. It's going to be the most transformative technology ever, in my opinion.
So if that's true, and it's going to be like electricity or fire, more impactful than even the internet or mobile, then
I think it's important that the whole world participates in its design, with the different value systems and philosophies out there that are good philosophies, from democratic values, Western Europe, the US. I think it's important that it's not just a hundred square miles of a patch of California.
I do actually think it's important that we get these other, broader inputs, not just geographically but also, and I know you agree with this, Reid, from different subjects: philosophy, the social sciences, economists, academia, civil society. It shouldn't be just the tech companies and the scientists involved in deciding how this gets built and what it gets used for.
I've always felt that very strongly, from the beginning. And I think having some European involvement and some UK involvement at the top table of the innovation is a good thing.
I've always asked, what are the most important things AI can be used for? And I think there are two. One is human health; that's number one, trying to solve and cure terrible diseases. And number two is to help with energy, sustainability, and climate, the planet's health, let's call it. So there's human health, and then there's the planet's health.
And those are the two areas we have focused on in our science group, which I think is fairly unique amongst the AI labs in terms of how much we've pushed it from the beginning. And protein folding specifically was the canonical problem for me. I came across it when I was an undergrad at Cambridge, 30 years ago.
And for a long while, my main aim was to become a professional chess player, a grandmaster, maybe even one day a world champion. That was my whole childhood, really. Every spare moment not at school, I was going around the world playing chess against adults in international tournaments.
And it's always stuck with me as this fantastic puzzle that would unlock so many possibilities. Everything in life depends on proteins, and we need to understand their structure to know their function.
And if we know the function, then we can understand what goes wrong in disease, and if you know the 3D structure, we can design drugs and molecules that will bind to the right part of the protein's surface. So it's a fascinating problem, and it goes to all of the computational things we were discussing earlier as well.
Can you enumerate, can you see through, this forest of possibilities, all the different ways a protein could fold? Levinthal, very famously in the 1960s, estimated that an average protein could fold in something like 10^300 possible ways. So how do you deal with those astronomical possibilities? And yet it is possible with these learning systems, and that's what we did with AlphaFold.
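As a rough sense of where estimates of that size come from (the per-residue numbers below are illustrative assumptions, not figures from the conversation): if each residue can adopt only a handful of local conformations, the number of possible overall folds grows exponentially with chain length, which is exactly why brute-force enumeration is hopeless.

```python
# Back-of-the-envelope combinatorics behind Levinthal-style estimates.
# Illustrative assumptions: ~10 local conformations per residue and a
# 300-residue protein give 10**300 possible chain conformations.
conformations_per_residue = 10
residues = 300

total_conformations = conformations_per_residue ** residues
print(f"~10^{len(str(total_conformations)) - 1} possible conformations")

# Even sampling a trillion conformations per second for the age of the
# universe (~4.4e17 seconds) explores a vanishingly small fraction,
# which is why a learned structure predictor is needed instead of search.
sampled = 10**12 * int(4.4e17)
print(f"fraction explored by brute force: {sampled / total_conformations:.1e}")
```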
And then we spun out a company, Isomorphic, and I know Reid's very interested in this area too with his new company. The question was: can we reduce the time it takes to discover a protein structure? As a rule of thumb, it used to take a PhD student their entire PhD, so four or five years, to discover one protein structure. And there are 200 million proteins known to science.
And we folded them all in one year. So another way to think of it is that we did a billion years of PhD time in one year. Then we gave it to the world freely to use, and 2 million researchers around the world have used it. And we spun out a new company, Isomorphic, to go further downstream now, develop the drugs needed, and try to reduce that time as well.
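The billion-years figure is just the arithmetic on that rule of thumb (one structure per PhD, roughly five years each):

```python
# 200 million known proteins x ~5 years of PhD effort per structure
known_proteins = 200_000_000
years_per_structure = 5            # rule-of-thumb length of one PhD

phd_years = known_proteins * years_per_structure
print(f"{phd_years:,} PhD-years")  # 1,000,000,000 -> about a billion
```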
There are lots of movies I've watched that have been super inspiring for me. Blade Runner is probably my favorite sci-fi movie, but maybe it's not that optimistic. So if you want something optimistic, I would say the Culture series by Iain Banks.
I think that's the best depiction of a post-AGI universe, where you've basically got societies of AIs, humans, and alien species, actually, and maximum human flourishing across the galaxy. That's an amazing, compelling future that I would hope for humanity.
The questions I often wonder why people don't discuss more, including with me, are some of the really fundamental properties of reality, the things that drove me in the beginning, when I was a kid, to think about building AI as this ultimate tool for science. For example, I don't understand why people don't worry more about what time is.
And then around 11 years old, I had an epiphany, really: although I loved chess, and I still love chess today, is it really something one should spend an entire life on? Is it the best use of my mind? So that was one thing that was troubling me a little bit.
What is gravity? What is the fundamental fabric of reality? It's staring us in the face all the time, all these very obvious things that impact us constantly, and we don't really have any idea how they work. I don't know why that doesn't trouble people more. It troubles me.
And I'd love to have more debates with people about those things, but actually most people seem to shy away from those topics.
That's a tough one, because AI is so general that it's almost a question of what industry is outside the AI industry; I'm not sure there are many. Maybe the progress going on in quantum technology is kind of interesting. I still believe AI is going to get built first and will then maybe help us perfect our quantum systems.
But I have ongoing bets with some of my quantum friends, like Hartmut Neven, who think they're going to build quantum systems first and that those will then help us accelerate AI. So I always keep a close eye on the advances going on with quantum computing systems.
Well, what I hope for over the next 10 to 15 years is for what we're doing in medicine to really deliver new breakthroughs. I think in that time we can actually have a real crack at solving all disease; that's the mission of Isomorphic. And with AlphaFold, we showed what the potential was to do what I like to call science at digital speed.
And why couldn't that also be applied to finding medicines? So my hope is that in 10 to 15 years' time, we'll look back on the medicine we have today a bit like how we look back on medieval times and how we used to do medicine then. And that would be, I think, the most incredible benefit we could imagine from AI.
But then the other thing was, as we were going to training camps with the England chess team, we started to use early chess computers to try and improve our chess. And I remember thinking, of course, we were supposed to be focusing on improving our openings and chess theory and tactics.
But actually, I was more fascinated by the fact that someone had programmed this inanimate lump of plastic to play very good chess against me. And I was fascinated by how that was done. And I really wanted to understand that and then eventually try and make my own chess programs.
Yeah, well, look, first of all, it's great that your son's playing chess; I think it's fantastic. I'm a big advocate for teaching chess in schools as part of the curriculum. I think it's fantastic training for the mind, just like doing maths or programming would be.
And it's certainly affected the way I approach problems, problem-solve, visualize solutions, and plan. It teaches you all these amazing meta-skills, like dealing with pressure. You learn all of that as a young kid, which is fantastic for anything else you're going to do. And as far as Deep Blue goes, you're right.
Most of these early chess programs, and Deep Blue became the pinnacle of that, were these expert systems, which at the time were the favored way of approaching AI, where it's actually the programmers who solve the problem, in this case playing chess.
They encapsulate that solution in a set of heuristics and rules, which guides a kind of brute-force search towards, in this case, making a good chess move. And although I was fascinated that these early chess programs could do that, I was also slightly disappointed by them.
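To give a flavour of what "heuristics and rules guiding a brute-force search" means in practice, here is a deliberately tiny sketch, illustrative only and not a description of Deep Blue's internals: a hand-written material-count evaluation plus a fixed-depth minimax search. The `position` interface it assumes (`pieces()`, `legal_moves()`, `apply()`) is hypothetical.

```python
# A toy "expert system" game player: the chess knowledge lives in the
# hand-written evaluation function, and a brute-force minimax search does
# the rest. Illustrative only; not how Deep Blue actually worked.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """Hand-crafted heuristic: material balance from White's point of view.
    Assumes `position.pieces()` yields (symbol, is_white) pairs."""
    score = 0
    for symbol, is_white in position.pieces():
        value = PIECE_VALUES.get(symbol.upper(), 0)
        score += value if is_white else -value
    return score

def minimax(position, depth, white_to_move):
    """Fixed-depth brute-force search guided by the heuristic above.
    Assumes `position.legal_moves()` and `position.apply(move)` exist."""
    if depth == 0:
        return evaluate(position), None
    best_move = None
    best_score = float("-inf") if white_to_move else float("inf")
    for move in position.legal_moves():
        score, _ = minimax(position.apply(move), depth - 1, not white_to_move)
        if (white_to_move and score > best_score) or (not white_to_move and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

The search itself is fixed and general-purpose; everything the program "knows" about chess was put into the evaluation function by its programmers, which is exactly the point being made here.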
And actually, by the time it got to Deep Blue, I was already doing my undergrad at Cambridge. Because I'd already started studying neuroscience, I was actually more impressed with Kasparov's mind than I was with the machine, because it was this brute of a machine: all it could do was play chess.
And yet Kasparov could play chess at roughly the same level, but could also do all the other amazing things that humans can do. So I thought, doesn't that speak to the wonderfulness of the human mind? And, more importantly, it means something very fundamental was missing from Deep Blue and these expert-system approaches to AI.
Very clearly, even though it was a pinnacle of AI at the time, Deep Blue did not seem intelligent. What was missing was the ability to learn, to learn new things. For example, it was crazy that Deep Blue could play chess to world-champion level but couldn't even play tic-tac-toe; you'd have to reprogram it.
Nothing in the system would allow it to play tic-tac-toe. That's odd, and very different from a human grandmaster, who could obviously play a simpler game trivially. And it was also not general in the way that the human mind is. I think those are the hallmarks.
That's what I took away from that match: those are the hallmarks of intelligence, and they were needed if we wanted to crack AI.
Yes. Well, look, we started DeepMind in 2010, before anyone was working on this in industry, and there was barely any work on it in academia. We partially named the company DeepMind, the "deep" part, because of deep learning. It was also a nod to Deep Thought in The Hitchhiker's Guide to the Galaxy, and to Deep Blue and other AI things.
But it was mostly around the idea that we were betting on these learning techniques. Deep learning and hierarchical neural networks had just been invented in seminal work by Geoff Hinton and colleagues in 2006, so it was very, very new.
And reinforcement learning, which has always been a speciality of DeepMind: the idea of learning from trial and error, learning from your experience, and making plans and acting in the world. We combined those two approaches, we really pioneered doing that, and we called it deep reinforcement learning.
Deep learning to build a model of the environment, of what you were doing, in this case a game, and then reinforcement learning to do the planning and the acting, so you can build agent systems that accomplish goals. In the case of games, that's maximizing the score and winning the game.
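As a minimal sketch of the reinforcement-learning half of that recipe (a toy example, not DeepMind's system): tabular Q-learning on a tiny corridor, where an agent learns from trial and error to maximize its score. In deep reinforcement learning, the Q-table below is replaced by a deep neural network that also learns the representation of the environment.

```python
import random

# Toy corridor: states 0..N-1, start at 0, reward +1 for reaching the goal at N-1.
N_STATES = 6
ACTIONS = [-1, +1]                       # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            # Break ties randomly so the untrained agent explores both directions.
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) towards reward + discounted best future value.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# After training, the greedy policy walks straight to the goal: always move right.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # expected: {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
```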
And we felt that was actually the entirety of what's needed for intelligence. The reason we were pretty confident about that comes from using the brain as an example: basically, those are the two major components of how the brain works. The brain is a neural network.
AI is going to affect the whole world. It's going to affect every industry. It's going to affect every country. It's going to be the most transformative technology ever, in my opinion. So if that's true, and it's going to be like electricity or fire, then I think it's important that the whole world participates in its design.
It's a pattern-matching and structure-finding system, but it also does reinforcement learning, this idea of planning, learning from trial and error, and trying to maximize reward. In the human brain, and in the mammalian brain generally, that's implemented by the dopamine system, which carries out a form of reinforcement learning called TD learning.
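For concreteness, here is a minimal sketch of the TD learning idea mentioned above: a state's value estimate is nudged toward the reward received plus the discounted value of the next state, and the size of that correction, the prediction error, is the quantity dopamine neurons are thought to signal. The two-state cue-then-food example and the numbers are illustrative assumptions.

```python
# Tabular TD(0) on a tiny cue-then-food sequence (illustrative numbers).
values = {"cue": 0.0, "food": 0.0}
alpha, gamma = 0.1, 0.9                     # learning rate, discount factor

for trial in range(200):
    # Seeing the cue delivers no reward itself; its value comes from what follows.
    delta = 0.0 + gamma * values["food"] - values["cue"]     # prediction error at the cue
    values["cue"] += alpha * delta
    # Food delivers reward 1.0 and the trial ends (next value is 0).
    delta = 1.0 + gamma * 0.0 - values["food"]               # prediction error at the food
    values["food"] += alpha * delta

print(values)   # values["food"] approaches 1.0; values["cue"] approaches gamma * 1.0 = 0.9
```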
So that gave us confidence that if we pushed hard enough in this direction, even though no one else was really doing it, eventually this should work, right? Because we have the existence proof of the human mind. And of course, that's why I also studied neuroscience, because when you're in the desert, like you say, you need any source of water, any evidence that you might get out of the desert.
Even a mirage in the distance is a useful thing, in terms of giving you some direction when you're in the midst of that desert. And of course, AI itself was in the midst of that, because this had failed several times. The expert systems approach had basically reached a ceiling.
Well, look, Go was considered to be, and ended up being, so much harder than chess that it took another 20 years, even for us with AlphaGo. All the approaches that had been taken with chess, these expert systems approaches, had failed with Go, right? They basically couldn't even reach professional level, let alone world champion. And there were two main reasons.
One is that the complexity of Go is so enormous. One way to measure that: there are 10 to the power of 170 possible positions, far more than there are atoms in the universe, so there's no way you can brute-force a solution to Go. It's impossible. But even harder than that, it's such a beautiful, esoteric, elegant game. It's considered an art form in Asia, really, right?
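A quick back-of-the-envelope calculation makes the point about brute force. The 10^170 figure is the commonly cited estimate for legal Go positions; the hardware speed and age-of-the-universe numbers below are rough illustrative assumptions.

```python
# Order-of-magnitude comparison only; the hardware and cosmology figures are rough.
go_positions = 10 ** 170                    # commonly cited count of legal Go positions
atoms_in_universe = 10 ** 80                # rough order-of-magnitude estimate
ops_per_second = 10 ** 18                   # an exascale supercomputer
seconds_since_big_bang = 4 * 10 ** 17       # roughly 13.8 billion years

print(go_positions / atoms_in_universe)                            # about 1e90
print(go_positions / (ops_per_second * seconds_since_big_bang))    # about 2.5e134
# Even checking one position per machine cycle since the Big Bang barely scratches it.
```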
And that's because it's aesthetically beautiful, but also because it's all about patterns rather than brute calculation, which chess is more about. So even the best players in the world can't really describe to you very clearly what heuristics they're using. They just kind of intuitively feel the right moves, right?
I think it's important that it's not just a hundred-square-mile patch of California. I do actually think it's important that we get these other inputs, broader inputs, not just geographically but also from different subjects: philosophy, the social sciences, economics. It shouldn't be just the tech companies, just the scientists, involved in deciding how this gets built and what it gets used for.
You'll ask them, why did you play this move? And they'll sometimes just say, well, it felt right, right? And then it turns out that if they're a brilliant player, their intuition is brilliant, and it's an amazingly beautiful and effective move. But that's very difficult to encapsulate in a set of heuristics and rules to direct how a machine should play Go.
And so that's why all of these kinds of Deep Blue methods didn't work. Now, we got around that by having the system learn for itself what good patterns, good moves, and good motifs and approaches are, and which positions are valuable and have a high probability of winning.
So it kind of learned that for itself through experience, through seeing millions of games and playing millions of games against itself. So that's how we got AlphaGo to be better than world champion level. But the additional exciting thing about that is that it means those kinds of systems can actually go beyond what we as the programmers or the system designers know how to do.
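Here is a minimal sketch of that self-play idea: the program plays a game against itself many times and learns, purely from the outcomes, which positions are valuable, with no human heuristics programmed in. The game used here is a tiny take-1-or-2 Nim variant chosen only for brevity, and the tabular Monte Carlo update is a simplification; AlphaGo used deep policy and value networks with tree search, not a lookup table.

```python
# Self-play value learning on a tiny Nim variant (take 1 or 2 stones; taking the
# last stone wins). Everything here is an illustrative simplification.
import random
from collections import defaultdict

value = defaultdict(float)      # estimated value of a pile size for the player to move
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

def choose(pile):
    """Pick a move, mostly greedily: leave the opponent the position worst for them."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < epsilon:
        return random.choice(moves)
    return min(moves, key=lambda m: value[pile - m])

for game in range(20000):
    pile, player, states = 10, 0, []
    while pile > 0:
        states.append((player, pile))
        pile -= choose(pile)
        player = 1 - player
    winner = 1 - player          # the player who took the last stone
    # Nudge every visited position toward the final outcome for the side to move there.
    for p, s in states:
        outcome = 1.0 if p == winner else -1.0
        value[s] += alpha * (outcome - value[s])

# With 10 stones the player to move can force a win (10 is not a multiple of 3),
# so value[10] should end up clearly positive.
print(round(value[10], 2))
```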
No expert system can do that because, of course, it's strictly limited by what we already know and can describe to the machine. But these systems can learn for themselves. And that's what resulted in Move 37 in Game 2 of the famous World Championship match, the challenge match we had against Lee Sedol in Seoul in 2016. And that was a truly creative move. Go has been played for thousands of years.
It's the oldest game humans have invented, and it's the most complex game. It's been played professionally for hundreds of years in places like Japan. And still, even despite all of that exploration by brilliant human players, this Move 37 was something never seen before. Actually, worse than that, it was thought to be a terrible strategy.
In fact, if you go and watch the AlphaGo documentary, which I recommend, it's on YouTube now, you'll see that the professional commentators nearly fell off their chairs when they saw Move 37, because they thought it was a mistake. They thought the computer operator, Aja Huang, had misclicked, because it was so unthinkable that someone would play that move.