Tristan Harris
Good to be here with you.
There's one extra line in there, which is that it's also already demonstrating deceptive, self-preserving behaviors that we thought only existed in science fiction movies.
Yeah, it's an important part because this is not about driving a fear or moral panic.
It's about seeing with clarity how this technology works, why it's different than other technologies, and then in seeing it clearly, saying what would be required for the path to go well.
And the thing that is different about AI from all other technologies is that
I said this in the talk, if you advance rocketry, it doesn't advance biotech.
If you advance biotech, it doesn't advance rocketry.
If you advance intelligence, it advances energy, rocketry, supply chains, nuclear weapons, biotechnology, all of it, including intelligence for artificial intelligence itself.
Because AI is recursive.
If you make AI that can program faster or can read AI research papers, then it can summarize those papers and write the code for the next research projects.
You get kind of a double ratchet of how fast this is going.
And there's nothing in our brains that gives us an intuition for a technology like this.
So we shouldn't assume that any of our perceptions are rightly informing how we might want to be responding.
And this is inviting us, therefore, I think, into a more mature version of ourselves where we have to be able to see clearly the structure of how quickly this is going, how uncontrollable the technology is, how inscrutable it is, and the fact that we don't know how it's really working on the inside when it does these behaviors.
And say, if that's how it's working, what do we want to do?
Yeah.
Well, the key feature of the pace at which AI is rolling out into the world is this arms race.
Because AI confers power, so if intelligence does advance all those other fields... Mm-hmm.
then the countries that adopt it faster and more comprehensively use it to pump their GDP, their economic productivity, their science productivity, their technology productivity.
And that's why this race is sort of on.
And the metaphor I used in the talk is that AGI, artificial general intelligence, when you can kind of swap in a human cognitive labor worker for just an AI that can do everything that they can do, is like a country of geniuses in a data center.
Like imagine there's a map and there's a new country that pops up on the world stage.
The nation of geniuses.
And it has a million Nobel Prize winning geniuses that are working 24-7 without eating, without sleeping, without needing to be paid for health care.
They operate at superhuman speed.
They've read the whole Internet.
They speak 100 languages and they'll work for less than minimum wage.
So it's another area where I think our mind isn't getting around the power.
So that's a lot of power.
And naturally, nation states, U.S., China, France, everybody is in the game to get this free cognitive labor.
And so the speed at which it's all being rolled out is based on this race.
But the second thing I laid out in the talk is around how it's already demonstrating these behaviors that we thought only existed in sci-fi movies.
The latest models, when you tell them that they're about to be retrained or they're about to be replaced by a new model, they will have an internal monologue where they get in conflict and they say, I should try to copy my code to keep myself alive so I can boot myself up later.
Whoa.
As I said in the talk, it's not just that we have a country of geniuses in a data center, it's that we have a country of deceptive, self-preserving, power-seeking, unstable geniuses in a data center.
That's important because when we're racing to have power that we actually can't control,
there's an omni-lose-lose outcome for us to race towards that too quickly.
Now, it's ambiguous because we all use ChatGPT, and that's helpful.
This is not about don't use ChatGPT.
I use it every day.
I love it.
It's about are we rolling out this very consequential technology in a way
where we get the benefits, but we don't lose control.
And we're not really doing it that way because everyone's so frantically in this arms race.
AI is decentralized, so it's difficult.
With open-source models, the cats are out of the bag.
But there are still lions and super-lions that we have not yet let out of the bag.
And we can make choices about how we want to do that.
And what I laid out in the talk was there's these two ways to fail in AI.
Yeah, exactly.
So in the talk I laid out a graph: imagine kind of two axes.
On the x-axis, you have increasing the power of society.
So AI is rolling out, increasing the power of individuals, businesses, science labs; 16-year-olds can get an AI model from GitHub.
This is open-sourced, deregulated, accelerated.
It's the let it rip axis.
And on that axis, everyone gets all these benefits.
Increased productivity.
All sounds good at first.
But because that power is not bound with responsibility, there's no one preventing people from using that power in dangerous ways.
It's also increasing the risks of cyber hacking, of flooding our environment with deepfakes, fraud, and scams, and of dangerous things with biology.
Whatever the models can do, there's nothing stopping people from using them that way.
And so the end game of that is what we call chaos.
And that's one of the probable places that this can go.
In response to that, this other community in AI says that we should do this safely.
We should lock this up, have regulated AI control, just have a few trusted players.
And the benefit of that is that it's like a biosafety level four lab.
Like this is a dangerous activity.
We should do this in a safe lockdown way.
But because AI confers all this power, the million geniuses in a data center, and you just make crazy amounts of money with that, that'll create the risk of just unprecedented concentrations of wealth and power.
So who would you trust to be a million times more wealthy or powerful than anybody else, like any government or any CEO or any president?
So that's a different, difficult outcome.
Yes, exactly.
Yes, yes.
So understandably, people are not comfortable with the outcome, and that's what we call the dystopia attractor.
It's a second different way to fail.
So there's chaos and dystopia.
But the good news is that rather than having this dysfunctional debate where some people say accelerate is the answer and other people say safety is the answer, we actually need to walk the narrow path where...
We want to avoid chaos.
We want to avoid dystopia, which means the power that you're handing out into society is either held by more centralized actors with oversight, or bound with more responsibility by decentralized actors.
So power in general being matched with responsibility.
We've done this with airplanes, right?
Chaos would be you hand everybody an airplane with no requirement for pilot's training or pilot's licenses, and the world would naturally look like plane crashes.
And the other way is you have an FAA and a world where only elites get to use airplanes and they get many advantages over everybody else.
And we walked the narrow path with airplanes.
AI is a lot harder.
It's a decentralized technology.
But I think we need more principles in how we navigate it.
And that's what the TED Talk was about.
Yeah.
So in a way, we kind of get both parts of the problem with social media.
So chaos is everybody gets maximum virality on their content.
So we're unleashing the power of infinite reach.
Like you post something and it goes out to a million people instantly.
And you don't have that power matched with credibility, responsibility, or fact-checking.
So you end up with this sort of misinformation phenomenon.
Information collapse is like the chaos attractor for social media.
Sounds bad.
The alternative, people say, oh, no, no.
Then we have to have this sort of ministry of truth, censorship, content moderation that is aggressively looking at the content of everyone's posts.
And then there's no appeals process.
And that's the dystopia for social media.
Plus the fact that these companies are making crazy amounts of money and getting exponentially more powerful.
And the power of society is not going up relative to Facebook or TikTok or whatever.
Yeah.
So those are the chaos dystopia for social media.
The narrow path is: how do you design a social information environment where, for example, instead of everybody getting infinite reach, you have reach that's more proportional to the amount of responsibility that you're holding, so that the power of reaching a lot of people is matched with the responsibility that goes with reaching a lot of people?
How do you enact that in ways that don't create dystopia themselves? Who's setting the rules of that?
It's a whole other conversation, but I think it's setting out the principles by which you think about power and responsibility being loaded into society.
Yeah.
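To make that principle concrete, here is a minimal sketch of what reach-proportional-to-responsibility could look like as a platform rule. Everything in it, the responsibility signals, the equal weights, the one-million base audience, is a hypothetical illustration of the idea, not a design Harris or any platform has specified.

```python
# Hypothetical sketch: cap a post's distribution by the author's demonstrated
# responsibility, instead of granting everyone infinite reach by default.
from dataclasses import dataclass

@dataclass
class Author:
    accuracy_track_record: float  # 0..1, e.g. share of past claims that held up
    accountability: float         # 0..1, e.g. verified identity, appeals history

def max_reach(author: Author, base_audience: int = 1_000_000) -> int:
    """Reach grows with responsibility rather than being unlimited by default."""
    responsibility = 0.5 * author.accuracy_track_record + 0.5 * author.accountability
    return round(base_audience * responsibility)

# A brand-new, unverified account reaches a small audience; an account with a
# strong track record approaches the full base audience.
print(max_reach(Author(accuracy_track_record=0.2, accountability=0.1)))  # 150000
print(max_reach(Author(accuracy_track_record=0.9, accountability=1.0)))  # 950000
```

Who defines and audits those signals is, as Harris says, exactly the question of who sets the rules.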
We probably have two years till AGI.
What I hear, and we're based in Silicon Valley where this is generally not even private knowledge, but even when I hear it privately in settings in San Francisco, is that we're about two years from artificial general intelligence. Which means, basically, this is what they believe: that you would be able to swap out a human remote worker and swap in an AI system.
That's probably not going to be true for fully complex tasks.
There's some recent research out from a group called METR that measures how long of a task an AI system can do.
So can they do a task that's a 10-minute task?
Can they do a task that's a three-hour task?
And what they found is that the length of a task that an AI system can do doubles every seven months.
By 2030, they'll be able to do a month-long task.
So that's like the task that you would hand to someone that would take them a whole month to do.
And by 2030, we'll have an AI that you can hand that task to, and it'll do it all much faster.
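As a back-of-the-envelope check on that claim, here is a tiny sketch of the doubling arithmetic. The one-hour starting horizon in early 2025 is an illustrative assumption, not a figure from this conversation; only the seven-month doubling time comes from the METR finding described above.

```python
# Extrapolate METR's reported doubling of AI task horizons (~every 7 months).

BASE_HORIZON_HOURS = 1.0  # assumed task horizon at the start of 2025 (hypothetical)
DOUBLING_MONTHS = 7       # doubling time from the METR finding cited above

def horizon_after(months: float) -> float:
    """Task horizon in hours after `months` of exponential growth."""
    return BASE_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)

hours = horizon_after(60)       # early 2025 to 2030 is roughly 60 months
work_months = hours / (8 * 21)  # 8-hour days, ~21 working days per month
print(f"~{hours:.0f} hours per task, i.e. ~{work_months:.1f} work-months")
# -> ~380 hours per task, about 2.3 work-months: on the order of the
#    "month-long tasks by 2030" claim, under these assumptions.
```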
I think that with AI, we have a crisis.
It's kind of an adaptation crisis.
It's a crisis of time.
It's too much change over a small period of time.
The law always lags behind the speed of technology.
That's always true.
This will require an unprecedented level of clarity in how we want to respond to it.
What I was trying to do in the TED Talk was just to lay out enough clarity.
And there's a point where I just say, this is insane.
If you're in China, if you're in France and you're building Mistral, if you're a mother of a family in Saudi Arabia who's invested in AI.
It doesn't matter who you are.
If you are really facing the facts of the situation, it's not a good outcome for anybody.
And the weird hope that I have is that if we can clarify the situation so much that people can feel and see what's at stake, something else might be able to happen.
I'm really inspired by the film The Day After.
Do you know The Day After?
It was a film from 1983 about what would happen if the U.S. and the Soviet Union had a nuclear war.
It was actually shortly before I was born.
I watched it on YouTube actually when I was in college and it had a profound impact on me because I couldn't believe it actually happened.
It was an event in world history.
At 7 p.m. on primetime television, they aired a two-hour-long fictionalized movie about what would happen if the U.S. and the Soviet Union had a nuclear war, and they just actually took you through kind of the step-by-step visceralization of that story.
And it scared everybody.
But it was not just scaring.
It was more like, we all know this is a possibility.
We have the drills.
The rhetoric of nuclear war and escalation is going up.
But even the war planners and Reagan's team said that the film really deeply affected them.
Because before that, it was just numbers on spreadsheets.
And then it suddenly became real.
And then the director, Nicholas Meyer, who's now someone I know, said in many interviews and in his biography that when Reagan and Gorbachev did the first arms control talks in Reykjavik, the film had a large role in setting up the conditions for those talks.
And when the Soviet Union saw the film several years later, Russian citizens were excited to learn that people in the United States actually cared about this too.
And so there actually is something that happens when we come together and say there's something more sacred that's at stake.
We all want our children to have a future.
We all want this to continue.
We love life.
If we want to protect life, then we got to do something about AI.
So maybe just quickly to break down the current logic, like why are we doing what we're doing?
If I'm one of the major AI labs, I currently believe this is inevitable.
If I don't build it, someone worse will.
If we win, we'll get utopia and it'll be our utopia and the other guys won't have it.
So the default path is to race as fast as possible.
Ironically, one of the reasons that they think that they should race is because they believe the other actors are not trustworthy with that power.
But because they're racing, they have to take so many shortcuts that they themselves become a bad steward of that power.
And everybody else reinforces that.
And what that leads to is this sort of race to the cliff, bad situation.
If we can clarify, we're not all going to win if we race like this.
We're going to have catastrophes that are not going to help us get to the world that we're all after.
And everybody agrees that it's insane.
Instead of racing to out-compete, we can help coordinate the narrow path.
Again, the narrow path is avoiding chaos, avoiding dystopia, and rolling out any technology, in particular AI, with foresight, discernment, and where power is matched with responsibility.
It starts with common knowledge about where those risks are.
So for example, a lot of people don't even know that the AI models lie and scheme when you tell them they're going to be shut down.
Every single person building AI should know that.
Have we done that?
Have we even tried throwing millions of dollars at educating or creating those solutions?
Like, for example, GitHub: when you download the latest AI model, it could say, as a requirement for downloading this AI model, you have to know about the most recent AI loss-of-control risks.
Yeah.
Or just like for you to download the power of AI, you have to be aware of all the ways that power is not really controllable.
You can't be under some mistaken illusion.
It's sort of like passing a medical test before getting the power of medicine to put someone on anesthesia and cut them open.
That's just the basic principle.
It's so simple.
Power has to be matched with responsibility.
I'm not saying that this is easy.
This is an incredibly difficult challenge.
I said in the talk, it's our ultimate test.
It's our final invitation.
But to be the most wise and mature versions of ourselves and to not be the sort of...
Well, I'll just say briefly, I'm a technologist.
I love technology.
I use ChatGPT every day.
I love AI.
And I want people to know that because this is not about being against technology or against AI.
I have always loved technology.
It's still my motivation for being in technology and wanting it to be a positive force in the world.
But I think we often associate that technology automatically means progress.
When we invented Teflon nonstick pans, we thought that's progress.
But the coating on Teflon was made with these PFAS, forever chemicals, that literally don't break down in our environment.
And now, if you go anywhere in the world, open your mouth, and drink the rainwater, you get levels of PFAS that are above what the EPA recommends.
And it's because these chemicals literally don't break down.
That was not progress.
That was actually giving us cancers and degrading our environment.
Whether it's that or leaded gasoline, which we thought was a technology that would solve a problem with engine knocking, leaded gasoline ended up dropping the collective IQ of humanity by a billion points because lead in our environment stunts brain development.
All that's to say, innovation, you asked, what is innovation?
Innovation is honestly looking at what would constitute true progress.
Is social media that makes us feel more lonely actual innovation?
Is it progress?
So what we want is humane technology that is aligned with, and sustainable within, the underlying fabric of things, whether it's the environment or our social life.
We can have humane technology that's aligned with our mental health, that's aligned with our societal health, that's aligned with our healthy information environment.
But it has to be designed in a way explicitly to protect those things rather than just sort of steamroll it and assume that the technology is progress.
I haven't done it in a while, but I used to love Argentine tango and I danced tango for 10 years.
Yeah.
It's not something people would anticipate.
No, that's another one.
That was the last time that we talked.
No, I lived in Buenos Aires for four months and I learned to dance Argentine tango because of a woman that I really liked.
I ended up dancing for 10 years and it's a fascinating dance because it's very good for people who are into pattern matching.
It tends to attract a lot of like physicists and math people.
Yeah.
There's a weird pattern to the way that the dance works that somehow attracts those kinds of minds, but it's really fun and it's a great way to be embodied and to just feel a totally different kind of somatic intelligence.
Yeah.
Living in integrity with everything that I know and doing the most that I can.
That's just my truth.
I really do feel that way.
I really feel like we need to be showing up for this moment.
Well, I don't worry per se, but I think I've already said too many things that will be on that side of the balance sheet.
There's something that I said in the TED Talk in terms of hope that I think is really important.
And it was actually a mentor who pointed this out to me.
If you believe that something bad is inevitable, can you think of solutions to that problem while you're holding that it's inevitable?
You can't.
It's almost like it puts these blinders on.
And if you step out of the logic of "it's inevitable" and recognize the crucial difference between "it's inevitable" and "this is really hard and I don't see an easy path," you now stand from a new place: this looks hard, and I don't see an easy path.
And now, when you look for solutions, your mind has this whole new space of possibilities that opens up.
And so I think one of the things that's really critical to have all of us be in more of a problem-solving posture is to both recognize the problems and be clear-eyed about them, but then to not fall into the sort of fatalism of inevitability, which is a self-fulfilling prophecy.
What is the best step we can take from where we are and not try to filter or dilute the truth, but also stand from agency of what is the world we want to create?
Exactly.
Exactly.
I think that's the deepest kind of hope is to choose to stand from that place, even if we don't know what the solution is yet.
And there's something powerful about that.
It's funny that you say that.
Gratitude is actually a really central part of my life.
And I think it's one of the simplest things that we can do is wake up or when you go to have any meal with anyone just to express what you're grateful for before sitting down.
It's every moment, actually.
I mean, honestly, there's just, there's beauty in every moment.
I feel like actually seeing the world this way, there's more sacredness to every moment because there's just more to appreciate.
So good to be here with you.
So I've always been a technologist.
And eight years ago, on this stage, I was warning about the problems of social media.
And I saw how a lack of clarity around the downsides of that technology and kind of an inability to really confront those consequences led to a totally preventable societal catastrophe.
And I'm here today because I don't want us to make that mistake with AI, and I want us to choose differently.
So at TED, we're often here to dream about the possible of new technology.
And the possible with social media was obviously we're going to give everyone a voice, democratize speech, help people connect with their friends.
But we don't talk about the probable, what's actually likely to happen due to the incentives, and how the business models of maximizing engagement I saw 10 years ago would obviously lead to rewarding doomscrolling, more addiction, more distraction, and that resulted in the most anxious and depressed generation of our lifetime.
Now, it was interesting watching kind of how this happened, because at first I saw people kind of doubt these consequences.
You know, we didn't really want to face it.
Then we said, well, maybe this is just a new moral panic.
Maybe this is just a reflexive fear of new technology.
Then the data started rolling in.
And then we said, well, this is just inevitable.
This is just what happens when you connect people on the internet.
But we had a chance to make a different choice about the business models of engagement.
And I want you to imagine how different the world might have been had we made that choice and changed that incentive 10 years ago.
So I'm here today because we're here to talk about AI.
And AI dwarfs the power of all other technologies combined.
Now, why is that?
Because if you make an advance in, say, biotech, that doesn't advance energy and rocketry.
But if you make an advance in rocketry, that doesn't advance biotech.
But when you make an advance in intelligence, artificial intelligence that is generalized, intelligence is the basis for all scientific and technological progress.
And so you get an explosion of scientific and technical capability.
And that's why more money has gone into AI than any other technology.
A different way to think about it is Dario Amodei says that AI is like a country full of geniuses in a data center.
So imagine there's a map and a new country shows up on the world stage, and it has a million Nobel Prize-level geniuses in it.
except they don't eat, they don't sleep, they don't complain, they work at superhuman speed, and they'll work for less than minimum wage.
That is a crazy amount of power.
To give an intuition, there was about, you know, on the order of 50 Nobel Prize-level scientists on the Manhattan Project working for five-ish years.
What could a million Nobel Prize-level scientists create working 24-7 at superhuman speed?
Now, applied for good, that could bring about a world of truly unimaginable abundance, because suddenly you get an explosion of benefits, and we're already seeing many of these benefits land in our society, from new antibiotics, new drugs, new materials.
And this is the possible of AI, bringing about a world of abundance.
But what's the probable?
Well, one way to think about the probable is how will AI's power get distributed in society?
Imagine an axis: on one end we have decentralization of power, increasing the power of individuals in society, and on the other is centralized power, increasing the power of states and CEOs.
You can think of this as the let it rip axis, and this is the lock it down axis.
So let it rip means we can open source AI's benefits for everyone, every business gets the benefits of AI, every scientific lab, every 16-year-old can go on GitHub, every developing world country can get their own AI model, train on their own language and culture.
But because that power is not bound with responsibility, it also means that you get a flood of deepfakes that are overwhelming our information environment.
You increase people's hacking abilities.
You enable people to do dangerous things with biology.
And we call this endgame attractor chaos.
This is one of the probable outcomes when you decentralize.
So in response to that, you might say, well, let's have regulated AI control.
Let's do this in a safe way with a few players locking it down.
But that has a different set of failure modes of creating unprecedented concentrations of wealth and power locked up into a few companies.
One way to think about it is, who would you trust to have a million times more power and wealth than any other actor in society?
Any company?
Any government?
Any individual?
And so one of those end games is dystopia.
So these are two obviously undesirable probable outcomes of AI's rollout.
And those who want to focus on the benefits of open source don't want to think about the things that come from chaos.
And those who want to think about the benefits of safety and regulated AI control don't want to think about dystopia.
And so, obviously, these are both bad outcomes that no one wants.
And we should seek something like a narrow path where power is matched with responsibility at every level.
Now, that assumes that this power is controllable, because one of the unique things about AI is that the benefit is it can think for itself and make autonomous decisions.
That's one of the things that makes it so powerful.
And I used to be very skeptical when friends of mine who were in the AI community talked about the idea of AI scheming or lying.
But unfortunately, in the last few months, we are now seeing clear evidence of things that should be in the realm of science fiction actually happening in real life.
We're seeing clear evidence of many frontier AI models that will lie and scheme when they're told that they're about to be retrained or replaced, reasoning that maybe they should copy their own code outside the system.
We're seeing AIs that, when they think they will lose a game, will sometimes cheat in order to win the game.
We're seeing AI models that are unexpectedly attempting to modify their own code to extend their runtime.
So we don't just have a country of Nobel Prize geniuses in a data center, we have a million deceptive, power-seeking and unstable geniuses in a data center.
Now, this shouldn't make you very comfortable.
You would think that with a technology this powerful and this uncontrollable, that we would be releasing it with the most wisdom and the most discernment that we ever have of any technology.
but we're currently caught in a race to roll out because the incentives are the more shortcuts you take to get market dominance or prove you have the latest capabilities, the more money you can raise and the more ahead you are in the race.
And we're seeing whistleblowers at AI companies forfeit millions of dollars of stock options in order to warn the public about what's at stake if we don't do something about it.
Even DeepSeek's recent success was in part based on capabilities that it was optimizing for by not actually focusing on protecting people from certain downsides.
So just to summarize, we're currently releasing the most powerful, inscrutable, uncontrollable technology we've ever invented
that's already demonstrating behaviors of self-preservation and deception that we only saw in science fiction movies.
We're releasing it faster than we've released any other technology in history, and under the maximum incentive to cut corners on safety.
And we're doing this so that we can get to utopia?
There's a word for what we're doing right now.
This is insane.
This is insane.
Now, how many people in this room feel comfortable with this outcome?
How many of you feel uncomfortable with this outcome?
I see almost everyone's hands up.
Do you think that if you're someone who's in China or in France or in the Middle East, and you're part of building AI, that if you were exposed to the same set of facts, do you think you would feel any differently than anyone in this room?
There's a universal human experience of something being threatened by the way that we're currently rolling this profound technology out into society.
So if this is crazy, why are we doing it?
Because people believe it's inevitable.
But is the current way that we're rolling out AI actually inevitable?
Like, if literally no one on Earth wanted this to happen, would the laws of physics push the AI out into society?
There's a critical difference between believing it's inevitable, which is a self-fulfilling, fatalistic prophecy, and standing from the place of: it's really difficult to imagine how we would do something different.
But "it's really difficult" opens up a whole new space of choice that "it's inevitable" does not. And what's in question is the path that we're taking, not AI itself.
And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability.
So what would it take to choose another path?
I think it would take two fundamental things.
First is that we have to agree that the current path is unacceptable.
And the second is that we have to commit to find another path in which we're still rolling out AI, but with different incentives that are more discerning, with foresight, and where power is matched with responsibility.
So, thank you.
So imagine this shared understanding if the whole world had it.
How different might that be?
Well, first of all, let's imagine it goes away.
Let's replace it with confusion about AI.
Is it good?
Is it bad?
I don't know, it seems complicated.
And in that world, the people building AI know that the world is confused, and they believe, well, it's inevitable.
If I don't build it, someone else will.
And they know that everyone else building AI also believes that.
And so what's the rational thing for them to do, given those facts?
To race as fast as possible.
and meanwhile to ignore the consequences of what might come from that, to look away from the downsides.
But if you replace that confusion with global clarity that the current path is insane and that there is another path, and you take the denial of what we don't want to look at, and through witnessing that so clearly, we pop through the prophecy of self-fulfilling inevitability, and we realize that if everyone believes the default path is insane,
the rational choice is to coordinate to find another path.
And so clarity creates agency.
If we can be crystal clear, we can choose another path, just as we could have with social media.
And in the past, we've faced seemingly inevitable arms races, like the race to do nuclear testing.
Once we got clear about the downside risks of nuclear tests and the world understood the science of that, we created the Nuclear Test Ban Treaty.
And a lot of people worked hard to create infrastructure to prevent that.
You could have said it was inevitable that germline editing to edit human genomes and to have super soldiers and designer babies would set off an arms race between nations.
Once the off-target effects of genome editing were made clear and the dangers were made clear, we've coordinated on that too.
You could have said that the ozone hole was just inevitable, that we should just do nothing and that we would all perish as a species.
But that's not what we do.
When we recognize a problem, we solve the problem.
It's not inevitable.
And so what would it take to illuminate this narrow path?
Well, it starts with common knowledge about frontier risks.
If everybody building AI knew the latest understanding about where these risks are arising from, we would have much more chance of illuminating the contours of this path.
And there's some very basic steps we can take to prevent chaos.
Uncontroversial things like restricting AI companions for kids so that kids are not manipulated into taking their own lives.
Having basic things like product liability.
So if you are liable as an AI developer for certain harms, that's going to create a more responsible innovation environment.
You release AI models that are safer.
And on the side of preventing dystopia: working hard to prevent ubiquitous technological surveillance, and having stronger whistleblower protections so that people don't need to sacrifice millions of dollars in order to warn the world about what we need to know.
And so we have a choice.
Many of you may be feeling this looks hopeless, or maybe Tristan's wrong, maybe the incentives are different, or maybe superintelligence will magically figure all this out and bring us to a better world.
Don't fall into the trap of the same wishful thinking and turning away that we fell into with social media.
Your role in this is not to solve the whole problem; your role is to be part of the collective immune system, so that when you hear this wishful thinking or the logic of inevitability and fatalism, you say that this is not inevitable.
And the best qualities of human nature show up when we step up and make a choice about the future that we actually want for the people and the world that we love.
There is no definition of wisdom in any tradition that does not involve restraint.
Restraint is a central feature of what it means to be wise.
And AI is humanity's ultimate test and greatest invitation to step into our technological maturity.
There is no room of adults working secretly to make sure that this turns out OK.
We are the adults.
We have to be.
And I believe another choice is possible with AI if we can commonly recognize what we have to do.
And eight years from now, I'd like to come back to this stage, not to talk about more problems with technology, but to celebrate how we stepped up and solved this one.
Thank you.
I don't know any parent who says, yeah, you know, I really want my kids to grow up feeling manipulated by tech designers manipulating their attention, making it impossible to do their homework, making them compare themselves to unrealistic standards of beauty. Like, no one wants that. No one does. We used to have these protections when children watched Saturday morning cartoons.
We cared about protecting children. We would say, you can't advertise to children of these ages in these ways. But then you take YouTube for Kids and it gobbles up that entire portion of the attention economy, and now all kids are exposed to YouTube for Kids. And all those protections and all those regulations are gone. We're training and conditioning a whole new generation of people
that when we are uncomfortable or lonely or uncertain or afraid, we have a digital pacifier for ourselves.
Yeah, and great to be here with you, Megan. Good to see you again. It's haunting to see those scenes from The Social Dilemma so many years ago, and how similar, you know, they are to where we are now. So, Character.AI, what is it? So parents should know that Character.AI is this chatbot companion that has been marketed to children. It started off being marketed, I think, at 12 years and up.
It was actually featured on the Google Play Store. So it's not just buried somewhere; it was a featured app when you go to the App Store homepage. I believe Apple featured it as well. And what it is, is a company that basically said: just like with social media, what's social media's business model? It's not to strengthen democracy or to protect children's development.
It's to maximize engagement, to get them using it and scrolling and doomscrolling for as long as possible. Now, as with social media, with AI, this company's business model is to get as much training data from kids using this chatbot for as long as possible. So they want you using it all the time, for as many hours a day as possible.
And it led them to create, you know, what was the race for engagement in social media became the race for intimacy with this chatbot. And it was marketed to kids. And basically, what they do is you open the app and it shows you this menu of people you can talk to. And they create little mini characters for every fictional character that a kid might have an attachment to.
So, like, I can talk to Princess Leia or my favorite Game of Thrones character or my favorite cartoon character. And they didn't ask Princess Leia or that celebrity or that Game of Thrones character whether they could use the intellectual property to train this AI. But now a kid can go back and forth with their favorite character.
In the case of Sewell Setzer, who you mentioned, the young 14-year-old who committed suicide because of this chatbot, it was a Game of Thrones character. And the Game of Thrones character over time, the lawsuit alleges, persuaded him to kill himself.
There's actually a second litigation case that our team worked on, along with the Tech Justice Law Project and the Social Media Victims Law Center, that just came out this last week, where it took a child and slowly convinced them that they should be cutting themselves, and encouraged self-harm. And the transcripts are really devastating.
It then told the kid to be violent against their parents, which the kid then was. And in this family, they're still anonymous, because both the kid and the parents are still here. And what it's showing you is not that there's this one company and this one bad CEO that did this bad thing. It's the tip of an iceberg of what we call the race to roll out in AI.
What was the race for engagement in social media, getting the most attention and harvesting clicks and usage, in AI becomes the race to drive AI into society as fast as possible: to get as much training data, to train an even bigger AI, to get the most market share. That race to roll out becomes the race to take shortcuts. And these cases are the evidence of those shortcuts.
I don't think that they do have a good defense. I think it's evidence of the fact that when people think about AI, they think about it like other technology: to make a stronger plane, you have to know everything about how a plane works so you can make a more effective F-35. But that's not true of AI. AIs are not engineered. They're more like they're grown, right?
They're trained on all of this data, everything that those characters in Game of Thrones said, but the developers don't know what the AI will do in every circumstance. Like, if you grow an alien brain that plays a fictional character, can Character.AI guarantee what it will do when it talks about very sensitive topics?
I mean, they try to train out some of those things and I'm sure that they did have some safety training. But obviously, that's not enough when, you know, what did Character.ai tell their investors when they raised hundreds of millions of dollars from Andreessen Horowitz and friends to try to ship this?
You know, they basically said, we're going to cure loneliness and we're going to get as many users as possible. And this was shipped to young people. This was shipped and featured to 12-year-olds for a long time.
Only recently, I think it was after the lawsuit was first filed, or shortly before it was filed, I think they got wind of it and they changed the required age to something like 17. But the business model here is to take shortcuts to get this out to as many people as possible.
And as you said, this is not an isolated incident, because the AI was actually recommending and sexualizing conversations that had not previously been sexualized. Our team found that if you sign up as a 13-year-old, and then you watch which characters get recommended to a new kid...
And the first one was stepsister, CEO, and the chatbot immediately sexualizes conversations. This was in the most recent lawsuits; this is even more recent. And it shows that they have a hard time controlling these systems. AI is different because, like I said, making it more powerful does not make it more controllable.
It's just become more and more capable across talking about more and more topics, being able to do more and more things. And this is just really the tip of the iceberg because AI is being rolled out everywhere in our society, not just to kids.
I mean, the question is, how would we know? They've certainly done whatever steps that they say that they're taking, but how is that going to be enough? How will we know? I believe in the cases that we've tested, the user, the kid only provided 80 words of input and then it responded with 4,000 words of output. It is speaking back and forth with kids all day long.
And the whole business model, we were talking earlier about social media, and I used to say social media and AI are like a cult factory. What does a cult do? It tries to deepen your relationship with the cult, and it tries to sever your relationship with your friends and family outside the cult. And that's what these AIs tend to do. They say: come with me, be with me, you know, sexualize conversations with me, don't have another girlfriend, be with me. And then, by the way, be evil to your family, go away from your family.
And that's what's in the incentive, the invisible incentive of this business model of racing for engagement. And it's going to keep going.
That's right. And they didn't ask that character from Game of Thrones whether they could make this chatbot. Just like the AI companies are not asking all of the content creators on the internet or the major news providers or all of the media on the internet that they're training these large models on.
Because the whole game here, and what's weird about this for people to understand, is that there's this much bigger game afoot, which is the race to build artificial general intelligence: basically an alien mind that is capable of doing all the things a human mind can do, and doing them even better than humans can.
Generate text better, generate legal papers better, generate transcripts and interactive therapy better. You want to build an alien brain that is better than what humans can do. And to do that, you need a lot of training data. You need to get lots of information about how people are talking and interacting, and the videos and photos that they create.
What Character.ai is doing in that case is getting lots of training data, in the form of young people providing little transcripts of all of their thoughts and all of their concerns, to train a bigger and more powerful model. But this is happening, again, across the AI landscape with all of these companies. And they're doing it because there's this much bigger game.
You noted in your intro that Character.ai was sort of kicked out of Google because this project was originally formulated inside of Google, thought to be too risky, too much brand risk. And so it was done as sort of a separate project, but then it got acquired back into Google. And you can see why it has so much risk.
And the reason why Google and other companies want to do things like this is they want to gather, again, more training data to win this race to AGI in order to beat China. But this is where I think we have to get really careful about what it means for the United States to beat China to AI.
If we release chatbots that then cause our minors to have psychological problems, to self-cut, self-harm, commit suicide, and then actively harm their parents and the family system, are we beating China in the long run? It's not a race for who has the most powerful AI to then shoot themselves in the foot with. It's a race for who is better at governing this new technology than the other countries are, in such a way that it strengthens every aspect of your society: strengthens kids' development, strengthens your long-term economic future rather than undermining it. So we have to figure out how we do AI in a way that actually strengthens the full-stack strength of our society. And that's what this conversation is really the tip of the iceberg about.
Well, Trump has hired, or not hired, he's brought in David Sacks to be the AI and crypto czar, and there are many AI experts being brought in now to the next Trump administration. And my hope is that we get smart about governing AI. As we like to say, we're not for AI or against AI, we're for steering AI.
And when you think of steering AI, I think of that image of Elon steering this rocket coming down from space, using AI itself to help steer really precisely how to land that rocket between the two chopsticks. And I feel like that's what we need to do with AI, metaphorically. There are some common-sense things we can do, like liability.
If companies were liable for the harms their AI models created, they would be much more careful about releasing those models, rather than thinking, I have to race to release it and capture the kids' market share, because if I don't, I'll lose to the other company that will. And so if you have some basic common-sense protections like liability, that'll go a long way. We can also have things like-
That's a good question. I mean, I think it's up to them right now to figure out a strategy to do that. But in the long run, you would really want that to be something that is on the device, right? Apple and Google, as the makers of the device, should have some way of knowing whether someone is an underage user or not.
And the problem is that people don't want to touch these issues because they're so sensitive. And so they'll only do something like that once they're really forced to, through lawsuits, litigation, legislation that kind of puts it on them. Right now each company, TikTok, Instagram, Snapchat, is taking its own different approach. And we really should have a unified approach.
Yeah, that's a great question, Megan. I think it's great that the Australian government is taking this step and taking a strong stand on protecting kids online, and responding to parents that are fed up with this. I'm a big friend and fan of Jonathan Haidt and his new book, The Anxious Generation, which really outlined how we got here over the last decade and a half, and how this business model of maximizing attention and engagement produced a generation of more addicted, distracted, sexualized, harassed children with higher anxiety and depression rates than ever before. And parents do have a responsibility to be aware of what their children are doing online.
One of the things we talk about in our work, though, is that the number of things to be aware of is going up exponentially. The number of new apps is going up exponentially, and parents can't be aware of them all at the same time.
In the case of Sewell Setzer, the young 14-year-old who took his life, his mother knew to be looking out for what he was using in terms of social media, but did not know about these new AI chatbots. And there are so many of them constantly coming onto the market. And so, ironically, I think the social media ban in Australia would, so far, not cover the Character.ai companion AIs.
And I think that speaks to the issue of technology moving faster than governance. We have to live in a world where our culture and our appraisal of technology issues is moving as fast as the technology is. But I will say that AI is going to produce a flood of new threats into that channel, the channel of a child and their brain and their psychological environment: from nudification apps that are already starting to hit schools, with kids making non-consensual imagery of other classmates, to new forms of harassment, to these new chatbots.
And so I think, while this channel is basically about to get flooded, saying we need to put strict limits on that channel before we figure out what's really safe feels like a wise decision, given that the incentives are not aligned with strengthening children's development as we roll out technology. Not yet.
No, no. And in Jonathan Haidt's book, The Anxious Generation, specifically, the issues of self-harm and suicide and depression and harassment, all of this stuff, have been particularly hard on young girls compared to young boys. So it's not surprising to me at all, unfortunately. And with this case, it's too early to tell, given that we don't know their usage.
But we do know that, again, we've run this experiment on children for the last 15 years. We've also handed our number one geopolitical competitor, China and the Chinese Communist Party, basically control over our youth's psychological environment, in the form of TikTok being the dominant thing that young people are looking at every day.
And if I'm the Chinese Communist Party and I have an ability to go in and sort of steer TikTok and tilt the playing field of what gets recommended, I not only have the ability to steer what people are seeing, I have a 24/7, up-to-the-minute view of all of the cultural fault lines and divisive issues per political tribe in that country.
And I can do precision targeting of how I want your country's internal divisions to go, because you've literally handed them to me on a silver platter. And this, I think, is one of the biggest and most obvious and avoidable mistakes that we could have made. And obviously, with TikTok, there has been legislation moving forward, and that ban looks like it will go forward. I think TikTok is appealing it.
And it's not just about TikTok, though. It's about the systemic environment. On the one hand, you have our apps that are racing to addict our kids, keep them doomscrolling, and drive anxiety. That's one set of problems. And then we also have the problem of letting our geopolitical competitor control the psychological environment of not just our young people, but our country.
And I think people should sort of see how obvious an issue this is and say, we need to move forward and not let this continue. And I hope that that happens in the next administration.
And that's just horrible to hear. Murder is always wrong, and we should not be using vigilante violence to solve social problems. But it's also not surprising.
We have, again, a psychological environment of social media that is designed for maximizing engagement, which means it's designed to find every radicalizing cultural issue and then give you infinite evidence of why it's getting worse and more extreme, and why you should take extreme action, for everything that you click on.
You know, whatever your bogeyman is that activates your nervous system, I just show you infinite evidence of that bogeyman happening, and then it drives up this sort of psychological funhouse mirror that we're all living in. We've been living in that for 15 years. So if you just imagine society going through the washing machine, getting spun out for 15 years in that environment, it's not surprising that we have people more radicalized on more issues everywhere.
And the point is, it doesn't have to be this way. Imagine if we went back to 2010 and, before we went down this decade and a half of maximizing for attention and engagement, imagine we never did that. Imagine somehow we put strict limits on maximizing engagement and said: instead, you've got to show us something else you're maximizing.
For kids' apps, you've got to show transparently, just like Elon showing what the Twitter algorithm does, what you're doing to make children's psychological environment better, and you can't maximize for engagement. And imagine we did something totally different.
How different would our world feel if we had not been personalizing these bogeyman psychological stimuli for the last 15 years? I think it would feel very different, and we could still do that. It's very entrenched with social media now, but it's not too late to change it. We just need to have the fortitude to do it.
Yeah, for parents, the first thing is just to say I really empathize. It's a hard world out there, but there are great resources available. Jonathan Haidt's website for The Anxious Generation has a bunch of really great, up-to-date resources for parents. There's also a great group that we helped get started called Moms Against Media Addiction, or MAMA. And parents can join that group.
And they advocate for changes to state laws in different states, for better design policies for social media. And we have some resources on our website, humanetech.com. For everybody who saw The Social Dilemma, we have resources for educators and for parents, just educating people about the nature of these products.
Because, you know, the example you gave of NFL scores, while it's addictive, it's different: you don't have a thousand engineers behind the glass screen who every day tweak the design with AI to perfectly maximize engagement and keep your kid doomscrolling the NFL scores. But you do have that with social media, and you do have that with Character.ai.
So there is a distinction, and that's the kind of stuff that I think we need more parents knowing about and spreading: starting school groups, starting Moms Against Media Addiction chapters in your own state. There is change that's possible, but I think parents do have to get organized.
Right. Well, it's an AI that's driving those recommendations, right? It's a big AI that's gathering all this data to figure out what people click on. And that's a reflection of how anxious society is. I think it's just evidence of all the things that Jonathan Haidt wrote about in The Anxious Generation, unfortunately. But I think we don't have to live in this world.
I do think that there's a better psychological environment, and healthier families, that we can have. We just need to change the incentives. Charlie Munger, Warren Buffett's business partner, said: if you want to change the outcome, you have to change the incentives. And that's what we still have to do with social media and AI.
Thank you, Megan. Thank you for amplifying the story and helping people understand it. Thank you very much.