Nilay Patel
Hello and welcome to Decoder.
I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems.
Today, we're talking about the landmark social media addiction trials that just resulted in two major verdicts against Meta and Google.
There's one case in New Mexico against Meta, and another in California against both companies, both of which have said they plan to appeal.
These are complicated cases with some huge repercussions for how these platforms work and the very nature of speech in America.
So to help us work through it all, I've brought on two heavy hitters. My friend Casey Newton is the founder and editor of the excellent newsletter Platformer and co-host of the Hard Fork podcast. He's joined by Verge senior policy reporter Lauren Feiner, who was actually in that Los Angeles courtroom where executives like Mark Zuckerberg took the stand. That case was brought by a 20-year-old woman named Kayleigh, who successfully argued that Meta and Google negligently designed their platforms in ways that contributed to her mental health issues.
These cases, the first in a wave of injury lawsuits targeting tech companies, are about the design decisions of platforms like Instagram and YouTube.
They argue that the platforms have fundamental design flaws that harm users, especially teenagers, and that these companies knew about the problems and were negligent in shipping these features anyway.
These cases are part of a much larger set of moves that aim to fundamentally change the legal mechanisms that exist that might regulate social media.
Now, harm in the context of these cases isn't just addictive design that brings users back compulsively.
It's also features like algorithmic recommendations and camera filters that make issues like anxiety, depression, and body dysmorphia worse.
This emphasis on how the platforms work, as opposed to the content, is part of a movement that has been building for years, focused on the argument that social media is not and cannot be healthy.
That, in fact, these products might be defective, the same way that cigarettes, when used as designed, cause cancer.
That's a lot of complicated ideas, and Casey and Lauren and I really spent some time working through them.
The first complicated idea is whether there is a distinction between product features like recommendations, autoplay video, and infinite scroll, and the types of harmful yet legal speech served to people on these platforms using those tools, like eating disorder videos or posts designed to convince young men to hate women.
But it's very difficult, if not unconstitutional, to force the companies to moderate this kind of content in specific ways.
The First Amendment obviously prohibits the government from regulating what speech these companies promote or moderate, and private action is usually blocked by Section 230 of the Communications Decency Act, which protects tech platforms from being held responsible for the content their users post.
So it's really hard to pull all these ideas apart.
An algorithmic feed with no content in it simply isn't a compelling product, let alone a negligently defective one that causes harm.
A lot of smart people who have been on this show and all over The Verge over the past few years have said that these rulings are just an end run around Section 230 and the First Amendment, a way to make platforms liable for what ultimately is just speech.