Decoder with Nilay Patel

The AI election deepfakes have arrived

Thu, 29 Aug 2024

Description

Decoder is off this week for a short end-of-summer break. We'll be back with both our interview and explainer episodes after the Labor Day holiday. In the meantime, we thought we'd re-share an explainer that's taken on a whole new relevance in the last couple of weeks, about deepfakes and misinformation. In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially deepfakes and manipulated media. It's not been quite an apocalyptic AI free-for-all out there, but the election itself took some really unexpected turns in these last couple of months. Now we're heading into the big, noisy home stretch, and the use of AI is starting to get really weird, and much more troublesome.

Links:

- The AI-generated hell of the 2024 election | The Verge
- AI deepfakes are cheap, easy, and coming for the 2024 election | Decoder
- Elon Musk posts deepfake of Kamala Harris that violates X policy | The Verge
- Donald Trump posts a fake AI-generated Taylor Swift endorsement | The Verge
- X's Grok now points to government site after misinformation warnings | The Verge
- Political ads could require AI-generated content disclosures soon | The Verge
- The Copyright Office calls for a new federal law regulating deepfakes | The Verge
- How AI companies are reckoning with elections | The Verge
- The lame AI meme election | Axios
- Deepfakes' parody loophole | Axios

Credits:

Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcription

0.723 - 15.609 Citi

Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisition to broaden Amgen's therapeutic reach, expand its pipeline, and accelerate bringing new and innovative medicines to patients in need globally.

16.25 - 39.434 Citi

They found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories. Creating highly advanced AI is complicated, especially if you don't have the right storage, a critical but often overlooked catalyst for AI infrastructures.

40.055 - 61.391 Citi

Solidigm is storage optimized for the AI era. Offering bigger, faster, and more energy-efficient solid-state storage, Solidigm delivers the capability to meet capacity, performance, and energy demands across your AI data workloads. AI requires a different approach to storage. Solidigm is ready for everything the AI era demands. Learn more at storageforai.com.

63.649 - 75.237 Nilay Patel

Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. We're on a short summer break right now. We'll be back after Labor Day with new interview and explainer episodes, and we're pretty excited about what's on the schedule.

75.557 - 93.369 Nilay Patel

In the meantime, we thought we'd reshare an explainer that's taken on a whole new relevance these last couple weeks. It's about deepfakes and misinformation. In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially AI-generated deepfakes and manipulated media.

94.009 - 108.558 Nilay Patel

At the time, the biggest news in AI fakes was a robocall with an AI version of Joe Biden's voice. It's been about six months, and while there hasn't been quite an apocalyptic AI free-for-all out there, the election itself took some pretty unexpected turns.

109.258 - 124.146 Nilay Patel

Now we're headed into the big, noisy homestretch before Election Day, and the use of AI is starting to get really weird and much more troublesome. Elon Musk's X has become the de facto platform for AI-generated misinformation, and Trump's campaign has also started to boost its own AI use.

124.526 - 142.883 Nilay Patel

For the most part, these AI stunts have been for cheap laughs, unless Taylor Swift decides to sue the Trump campaign. But as you'll hear Adi and me talk about in this episode, there are not a lot of easy avenues to regulate this kind of media without running headlong into the First Amendment, especially when dealing with political commentary around public figures.

143.524 - 175.459 Nilay Patel

There's a lot going on here and a lot of very difficult problems to solve that haven't really changed since we last talked about it. Okay, AI deepfakes during the 2024 election. Here we go. Adi Robertson, how are you doing? Hi, good. You've been tracking this conversation for a very long time. It does seem like there's more nuance in the disinformation conversation than before.

175.539 - 183.765 Nilay Patel

It's not just "Russia made people elect Trump," which is I think where we were in 2016. Can you just give us some background? What's the shape of how people are thinking about disinformation right now?

184.508 - 202.294 Adi Robertson

We've had, I think, about three major U.S. presidential election cycles where disinformation was a huge issue. There was 2016, where there was a lot of discussion in the aftermath about, all right, was there foreign meddling in the election? Were people being influenced by these coordinated campaigns?

203.234 - 229.539 Adi Robertson

There was 2020, where deepfakes technically did exist, but generative AI tools were just not as sophisticated. They were not as easy to use. They were not nearly as prevalent. And so there was a huge conversation about what role social platforms play in preventing manipulated information in general. And there was, in a lot of ways, a huge crackdown; there was the entire issue of Stop the Steal.

229.939 - 252.232 Adi Robertson

There were these large movements that were trying to just lie about who won the election. What do we do? There were questions about, all right, do we kick Trump off social networks? These were the loci of debate. And now it's 2024, and we have, in some ways I think, a little bit of a hangover from 2020, where platforms are really tired of policing this.

252.813 - 276.462 Adi Robertson

And so they're dealing with, all right, how do we renegotiate this for the 2024 election? And then you have this whole other layer of generative AI imagery, where whether you want to technically call it deepfakes is an open question. And then there are all the layers of how that gets disseminated and whether that turbocharges a bunch of issues that already existed.

277.783 - 301.168 Nilay Patel

So "the platforms are getting tired of this" is worth talking about for one second longer. There was a huge rush of, how do we make ultra-sophisticated content moderation systems? And I think the pinnacle of that rush was Facebook setting up its Oversight Board, which is effectively a Supreme Court of content moderation decisions. And that was seen as: okay, Facebook is as big as a state.

301.288 - 323.981 Nilay Patel

It has the revenue of a state. It's a government now. It's going to have some government-like functions to regulate speech on its platforms. That didn't pan out, right? The Oversight Board exists. It moves very slowly. It's, I think, hard for the average Facebook user or average Instagram user to think there's a moderating force involved in content moderation on this platform.

324.481 - 327.564 Nilay Patel

It's the same as it ever was from the user's perspective, as far as I can tell.

328.104 - 348.822 Adi Robertson

Yeah, I think what the Oversight Board tends to do, and what is maybe comparable to the Supreme Court, is sophisticated outside thinking about what a consistent moderation framework looks like. But the Supreme Court in real life does not adjudicate every single complaint that you have. You have a whole bunch of other courts. Facebook doesn't really have those other courts.

348.882 - 374.003 Adi Robertson

Facebook has a gigantic army of moderators who don't always necessarily even see its policies. So, yeah, it's this very macro level. We're going to do the big thinking. But also, even at the time, there was the question of, is this really just Facebook or now Meta kind of outsourcing and kicking the can out of its court and putting the hard questions on other people?

375.034 - 395.688 Nilay Patel

I wanted to bring that up specifically because that was the pinnacle, I think, of the big thinks about content moderation. Since that time, the companies have all done lots of layoffs. We've seen trust and safety diminished across the board. I think most famously with Twitter, now X, Elon Musk basically decimated the trust and safety team on that platform.

396.448 - 408.717 Nilay Patel

It appears Linda Yaccarino is trying to bring some of them back. But the idea that content moderation is the thing these platforms have to do is no longer in vogue, I think, the way it was when the Oversight Board was created.

409.637 - 432.391 Adi Robertson

Yeah. And part of this is also political, that there was a huge, largely, again, in the U.S., right-wing backlash to this, that this was the kind of thing that would get a state attorney general mad at you and get a congressional committee to investigate you. as it ended up doing with pre-Musk Twitter. I think that, yeah, there became a real political price for doing this as well.

433.091 - 445.974 Adi Robertson

Since then, some platforms have let Donald Trump back on. They've said, all right, but we cannot possibly moderate every single lie on this. We're going to just wash our hands of whether you're saying the election was stolen or not.

446.792 - 465.856 Nilay Patel

So yeah, let's go through the new players and how they might turbocharge the disinformation conversation now. And then let's talk about what might be done about it. I do just wanna emphasize for the audience, it doesn't seem like the desire to regulate information on social networks is nearly as high as it has been in the past.

466.316 - 486.849 Nilay Patel

And I think that is an important thing to start with because the technical challenges are so hard that wanting to solve them is actually an important component of the puzzle. But let's talk about the actual technical challenges and the players behind them. OpenAI, that's a new company. There are a lot of other new companies in various stages of controversy.

487.349 - 507.321 Nilay Patel

So Midjourney exists; that is an image generator. Stability AI exists, another image generator. They're getting sued by Getty for allegedly using the Getty library to train images that look like Getty photos. In this context, very important. Midjourney is getting sued as well. OpenAI is getting sued for training on the New York Times database.

508.042 - 529.613 Nilay Patel

Just a few days ago, OpenAI announced Sora, its text-to-video generator, which frankly makes terrifying videos. All those videos look terrifying. But you can see how an enterprising scammer could immediately use that to make something that looks like compelling video of something that didn't happen. All of these companies talk about AI alignment, making sure AI doesn't go off the rails.

530.314 - 537.679 Nilay Patel

Where's the AI industry broadly on we shouldn't do political deepfakes? Do they have a unified point of view or are they all in different spots? How's that working out?

538.151 - 562.014 Adi Robertson

The companies are in slightly different spots, but they actually have come together. Very recently, they've signed an accord that says, look, we're going to take this seriously. They've announced policies that are at varying levels of strictness, but tend toward: if you're a major AI company, you're going to try to prevent people from creating information that maybe looks bad for public figures.

562.334 - 577.078 Adi Robertson

Maybe you ban producing images of recognizable figures altogether or you try to. And you have something in your terms of service that says if you're using this for political causes or if you're creating deceptive content, then we can kick you off.

578.135 - 598.689 Nilay Patel

One challenge here in America is the existence of the First Amendment. The Biden administration recently did an executive order saying, don't do bad stuff. And these companies all agreed, OK, we won't do bad stuff. But the United States government is pretty restricted in saying you can't make deep fakes of other people because the First Amendment exists and it can't control that speech directly.

600.11 - 605.454 Nilay Patel

Are the companies rising to that challenge and saying we will self-regulate because the government can't directly regulate us?

606.267 - 630.519 Adi Robertson

We don't necessarily know how good the enforcement of it is going to be, but the companies seem so far pretty open to the idea of self-regulation, in part because I think this isn't just a civic-minded political thing. Dealing with unflattering stuff about real people is just a minefield they don't want. That said, there are also just open source tools.

630.619 - 649.228 Adi Robertson

Stability AI is pretty close to open source. It's pretty easy to go in and make a thing that builds on it that maybe strips away the safeguards you get in its public version. So it's just not quite equivalent to the way that, say, social platforms are able to completely control what's on their platforms.

650.506 - 671.221 Nilay Patel

So you've got a handful of companies with varying sets of restrictions and a broad general industry consensus that we shouldn't do deepfakes. And then you have reality, which is that there are deepfakes of celebrities all the time. There are deepfakes of teenage girls in high schools getting circulated on private message boards. It is happening. What can be done to stop it?

671.909 - 690.561 Adi Robertson

Does stopping mean that you're just trying to limit the spread to where this doesn't become a huge viral thing that a bunch of people see, but it still may be technically possible to create this? Or do you want to say, all right, we have a zero tolerance policy. If anything is created with any tool anywhere, even if someone keeps it to themselves, that is unconscionable.

691.181 - 715.489 Nilay Patel

Let's start with the second one, which I think has the more obvious answer. Saying no deepfakes are allowed whatsoever seems like it comes with a host of unintended consequences for speech, and also seems impossible to actually accomplish because of the existence of open source tools. Like, I think: how would you actually enforce a total ban on deepfakes?

716.25 - 736.114 Nilay Patel

And the answer is that Intel and Apple and Qualcomm and Nvidia and AMD and every other chip maker would have to prevent it somehow at the hardware level, which seems impossible. The only example I can think of where we have allowed that to happen is that Adobe Photoshop won't allow you to scan and print a dollar bill,

737.35 - 756.31 Nilay Patel

which makes sense, like it broadly makes sense that Adobe made that deal with the government. But it's also like, well, that's about as far as you should let that go, right? Like there's a point where you wanna make a parody image of a Biden or a Trump, and you don't want Photoshop saying, hey, are you manipulating a real person's face? Like you're saying, that seems way too far.

757.03 - 774.217 Nilay Patel

So a total ban seems implausible. There are other things you could do at the creation step. OpenAI bans certain prompts that violate its terms of service. Getty won't let you talk about celebrities at all. If you type a celebrity's name or basically any proper noun into the Getty image generator, it just tells you to go away.

774.817 - 786.441 Nilay Patel

There's a lot of conversation about watermarking this stuff and making sure that real images have a watermark that says they're real images and AI images have a watermark that says they're AI images. Do any of those seem promising?

787.131 - 807.847 Adi Robertson

The most promising argument I've heard for these is the idea that you can – and this is an argument that Adobe has made to me – train people to expect a watermark. And so if what you're saying is we want to make it impossible to make these images without a watermark, I think that raises the same problems that we just talked about, which is that if anyone can make a

808.607 - 830.727 Adi Robertson

tweaked version of an open source tool, they can just say, don't put a watermark in. But I think that you could potentially get into a situation where you require a watermark, and if something doesn't have a watermark, there are ways that its design or its spread or people trusting it are severely hobbled. That's maybe the best argument for it I've heard.

830.747 - 848.935 Nilay Patel

The part where you restrict the prompts: OpenAI restricts the prompts, Getty restricts the prompts. It's pretty easy to get around that, right? The Taylor Swift deepfakes that were floating around on Twitter were made in a Microsoft tool, and Microsoft just had to get rid of the prompts. Is that just a forever cat-and-mouse game on the restrict-the-prompts idea?

849.867 - 870.293 Adi Robertson

It does seem like the thing about a lot of generative AI tools is that there are just vast, vast numbers of ways to get them to do something. People are going to find those. Software bugs are a thing that has been a problem. Zero-day exploits have been a problem on computers for a very long time. And this feels like it kind of falls into that category.
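
To make that cat-and-mouse point concrete, here's a minimal sketch, in Python, of the kind of naive prompt blocklist described above. The blocked names and the matching logic are illustrative assumptions, not any vendor's actual filter; real systems layer classifiers on top, but the bypass problem is the same.

    # A minimal sketch of a naive prompt blocklist (illustrative only;
    # not any real vendor's filter). Real moderation systems use trained
    # classifiers, but the same cat-and-mouse dynamic applies to them.
    BLOCKED_NAMES = {"taylor swift", "joe biden", "donald trump"}  # assumed list

    def is_prompt_allowed(prompt: str) -> bool:
        """Reject prompts that name a blocked public figure verbatim."""
        lowered = prompt.lower()
        return not any(name in lowered for name in BLOCKED_NAMES)

    print(is_prompt_allowed("a portrait of Taylor Swift"))         # False: caught
    print(is_prompt_allowed("a portrait of Tay1or Sw1ft"))         # True: bypassed by a typo
    print(is_prompt_allowed("a famous blonde pop star on stage"))  # True: bypassed by description

Every rephrasing the filter misses is a new hole to patch, which is exactly the zero-day analogy.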

872.794 - 878.196 Nilay Patel

That's the creation side. We need to take a quick break. When we come back, we'll get into the harder problem, distribution.

882.748 - 905.902 Advertiser

Support for this podcast and the following message is brought to you by E-Trade from Morgan Stanley. Take control of your financial future with E-Trade. No matter what kind of investor you are, our tools and resources can help you be ready for what's next. Now when you open an account, you can get up to $1,000 with a qualifying deposit. Terms apply. Learn more at etrade.com slash vox.

906.262 - 912.988 Advertiser

Investing involves risks. Morgan Stanley Smith Barney LLC, member SIPC. E-Trade is a business of Morgan Stanley.

914.485 - 938.419 Citi

They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets. They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at Citi.com slash WeAreCiti.

938.439 - 942.161 Citi

That's C-I-T-I dot com slash WeAreCiti.

946.263 - 955.326 Narrator

Vox Creative. This is advertiser content from Zelle. When you picture an online scammer, what do you see?

955.906 - 964.809 Ian Mitchell

For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore.

965.569 - 979.294 Narrator

That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion.

980.474 - 1000.045 Ian Mitchell

It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem, we can protect people better.

1001.974 - 1012.876 Narrator

One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple.

1013.656 - 1031.884 Ian Mitchell

We need to talk to each other. We need to have those awkward conversations around: what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness, smaller-dollar scam. But he fell victim, and we have these conversations all the time.

1032.565 - 1036.99 Ian Mitchell

So we are all at risk and we all need to work together to protect each other.

1038.091 - 1047.903 Narrator

Learn more about how to protect yourself at vox.com slash Zelle. And when using digital payment platforms, remember to only send money to people you know and trust.

1052.759 - 1073.433 Nilay Patel

Welcome back. So we've talked about what the companies that make software and hardware can do about the creation of deepfakes, and it seems like the best answer we have right now is adding watermarks to AI-generated content. But the real problems are in distribution. Let's talk about the distribution side, which is, I think, where the real problem lies.

1073.593 - 1092.56 Nilay Patel

If you make a bunch of deepfakes at your house with Donald Trump and you never share them with anyone, what harm have you caused? But say you start telling lies about both presidential candidates and you share them widely on social platforms, and they go viral. Now you have caused a giant external problem. And so it feels like the pressure to regulate this stuff is going to come back to the platforms.

1092.741 - 1103.915 Nilay Patel

And again, I think the desire of the platforms to moderate waxes and wanes; it feels low right now, but maybe it'll ramp back up. Where are the platforms right now with the deepfake distribution problem?

1104.873 - 1122.144 Adi Robertson

So far, it feels like the consensus is: we're going to label this, and that's going to be mainly our job; we're going to try to make sure we catch it. There are cases where, say, maybe you get it taken down if you haven't disclosed it, if you're a company or you're buying a political ad.

1122.584 - 1130.629 Adi Robertson

But broadly, the idea seems to be we want to give people information and tell them that this is manipulated and then they can make their own call.

1131.704 - 1153.175 Nilay Patel

The one platform that stands out to me, and you and I have talked about this a lot, is YouTube, which has an enormous dependency on the music industry. The music industry is not happy about AI-generated covers using the voices of its artists. Notably, Fake Drake caused a huge kerfuffle. Universal Music Group went to Google. They announced some rules.

1153.796 - 1172.129 Nilay Patel

They're going to prevent deepfakes or allow some licensing so the money flows back to the artists. That is a very private-industry sort of licensing scheme; it sits outside of the law, it sits outside of the other platforms. Do you think YouTube is going to lead the way here because it has that pressure, or is that just a one-off for the music industry?

1172.881 - 1193.72 Adi Robertson

I feel like the incentives for something like the music industry, and for things that are basically aesthetic deepfakes, are very different than they are for politically manipulated imagery. A lot of the question with YouTube is: okay, you are basically parodying someone in a way that may or may not legally be considered parody.

1194.341 - 1217.09 Adi Robertson

And we can make a deal where that person, really, all they want is to get paid, right? And maybe they want something sufficiently controversial taken down. But if you give them some money, they'll be happy. That's just not really the issue at hand with political AI-generated images. The problem there is around reputation. It's around people who do, at least in theory, care about:

1217.49 - 1229.055 Adi Robertson

Did this person say this thing? Is this true? So I just don't know that you could cut a deal with Joe Biden that says every time you make something up about him, he gets a penny.

1229.075 - 1243.531 Nilay Patel

I feel like politicians are always asking for donations. Maybe that's just the way to solve the problem. You just pay for the lies as long as politicians are getting paid. From what I gather, particularly Donald Trump, as long as he's getting paid, he might be cool with it.

1243.992 - 1259.723 Nilay Patel

Outside of YouTube, which does have this big dependency on the labels and licensing, and so I think is leading the way on having any particular policy with specificity, do any of the other platforms have ideas here that are more than "we have an existing policy and we'll see how it works with the problem"?

1260.487 - 1275.617 Adi Robertson

There are companies that are signing on to an initiative called C2PA, which is – we were talking about watermarks earlier. It's a content provenance system. It includes a watermark that has metadata, and the goal there is the idea that –

1276.657 - 1297.476 Adi Robertson

you will be able to at least tell where something has come from and whether it's manipulated, and that it's supposed to be this broad industry-wide, everybody has the same watermark system, so it's very easy to look at an image and pop it in and check and see if it has the watermark. That's one of the leading ways the AI industry at this point is trying to deal with truth and provenance.
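
As a rough illustration of that provenance idea, here's a heavily simplified Python sketch of the sign-and-verify core (it assumes the cryptography package). Real C2PA embeds a signed manifest inside the media file and validates a certificate chain; the tool name and manifest fields below are made up for the example.

    # Heavily simplified sketch of the signed-manifest idea behind C2PA.
    # Real C2PA stores the manifest in the file itself and chains to a
    # certificate authority; this only shows why edits break the check.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # held by the generating tool
    verify_key = signing_key.public_key()        # published so anyone can check

    image_bytes = b"...raw image data..."        # placeholder for real pixels
    manifest = json.dumps({"generator": "example-ai-tool", "ai_generated": True}).encode()
    signature = signing_key.sign(image_bytes + manifest)

    def check_provenance(image: bytes, claimed_manifest: bytes, sig: bytes) -> bool:
        """Accept the asset only if image + manifest still match the signature."""
        try:
            verify_key.verify(sig, image + claimed_manifest)
            return True
        except InvalidSignature:
            return False

    print(check_provenance(image_bytes, manifest, signature))      # True: untouched
    print(check_provenance(b"edited image", manifest, signature))  # False: any edit breaks it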

1298.703 - 1317.032 Nilay Patel

Is that shipped yet? I feel like we've been talking about C2PA and the Content Authenticity Initiative. We've had Dana Rao, Adobe's general counsel, on the show. He said the deepfake of the pope wearing a puffer jacket was, quote, a catalyzing event for content provenance, which is an amazing quote, and all credit to Dana for it. But there's people wanting to do it.

1317.392 - 1322.555 Nilay Patel

There's the activity we see, and then there's shipping it. Has that shipped anywhere? Can you go look at it?

1323.123 - 1338.648 Adi Robertson

Watermarks are rolling out places. OpenAI adopted them in mid-February. They're starting to appear on DALL-E images. You can look at them in Photoshop. I think the problem is more that this thing rolled out, but really most people are not going to care enough to check.

1340.69 - 1360.672 Nilay Patel

Well, unless the labels are in their face, right? Unless you are scrolling on TikTok and you see something and TikTok puts a big label right over the top of it that says, this is AI, which doesn't seem to be happening anywhere. OpenAI did Sora, its video generator. The videos are compelling, although they have some extremely terrifying errors in them.

1361.092 - 1372.103 Nilay Patel

There's not like a big label on them that says this is AI-generated. They're going to travel without the context of OpenAI having produced them to promote its AI tool. And even that seems dangerous to me.

1372.89 - 1398.186 Adi Robertson

Yeah, a lot of the issue with C2PA right now is that you have to actually go in and pop it into a tool to check the metadata, which is just an extra step that the vast majority of people are not going to take. And that, yes, it's not applying to things like Sora yet, at least as far as OpenAI has told us. So there is not a really prominent in your face, this thing is AI in most cases.

1398.646 - 1400.027 Nilay Patel

Can you remove those watermarks?

1400.97 - 1412.654 Adi Robertson

I mean, a screenshot tool, as far as I can tell, can remove the watermarks. And I think there are ways that you can end up just stripping these things out. It's very, very hard to create a perfect watermarking system.
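
Here's a small Python sketch of why metadata-style provenance is so fragile: re-encoding an image, which is roughly what a screenshot does, silently drops it. The "provenance" tag and file names are made up for the example (it assumes the Pillow package); C2PA's actual embedding is more elaborate, but the failure mode is the same.

    # Sketch of metadata fragility: a plain re-encode drops the tag.
    from PIL import Image, PngImagePlugin

    meta = PngImagePlugin.PngInfo()
    meta.add_text("provenance", "ai_generated=true; tool=example-ai-tool")

    Image.new("RGB", (64, 64)).save("tagged.png", pnginfo=meta)
    print(Image.open("tagged.png").text)  # {'provenance': 'ai_generated=true; ...'}

    # Re-save without copying the metadata, as a screenshot or naive edit would:
    Image.open("tagged.png").save("copy.png")
    print(Image.open("copy.png").text)    # {} -- the provenance tag is gone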

1413.654 - 1429.559 Adi Robertson

So much of this relies on Adobe's argument being: well, eventually we want people to expect this. It's going to be like, you know, if you look in your browser and you get a certificate warning that says there's no certificate for this webpage, maybe you won't trust the webpage.

1430.259 - 1446.138 Adi Robertson

I think the goal they're going for is the idea that everyone will be trained into expecting a certain level of authenticity. And I just don't think we're at that point. In some ways, these problems already existed. Photoshopped nudes have been a thing that has been used to harass people for a very long time.

1446.739 - 1464.111 Adi Robertson

And Photoshopped images of politicians and manipulated content about politicians are nothing new. A thing AI definitely does is scale up those problems a huge amount and make us confront them in a way that was maybe easier to ignore before, especially by adding a technology that

1465.35 - 1482.967 Adi Robertson

the people who are creating it are trying to hype up in a way that sounds terrifying and world-ending for other reasons. The problem with a lot of this is that you can't apply the kinds of paradigms that social media has, because it really only takes one person with the

1483.84 - 1504.391 Adi Robertson

capability to do a thing. It takes one bad actor to make something that you can spread in huge variations that are hard to recognize across huge numbers of platforms. I think that raises slightly different problems than, say, there's this big account on social media that's spreading something; well, all right, Facebook can ban them.

1506.826 - 1524.347 Nilay Patel

So we've talked a lot about what the platforms can do, what the AI companies can do as private companies, these initiatives like content authenticity. That's a private framework. The government has some role to play here, right? The big challenges are the First Amendment issues.

1524.827 - 1549.788 Nilay Patel

exists in the United States, and that really directly restricts the government from making speech regulations. And then, you know, we are a patchwork of state and federal laws. What is the current state of deepfake law? Are you allowed to do it? Are you not allowed to do it? How does that work? It's mostly a small patchwork of passed laws, a huge number of laws of varying likely constitutionality that people are debating, and not a whole lot at the federal level.

1550.348 - 1568.506 Adi Robertson

There are a lot of different problems that AI-generated images pose, and there are cases where individual states have passed rules for individual problems. There are a few states that incorporate, say, non-consensual AI pornography into their

1569.046 - 1592.763 Adi Robertson

general revenge and non-consensual porn rules. There are a few states with rules about how you have to disclose manipulated images for elections. And there are some attempts in Congress, or in, say, the FEC and other government regulatory agencies, to create a larger framework. But we just are still in this large, chaotic period of people debating things.

1593.567 - 1616.365 Nilay Patel

Let's start with non-consensual deepfake pornography, which I think everybody agrees is a bad thing that we should find ways to regulate away. A solution to revenge porn broadly on the internet is copyright law, right? You have made these files with your phone or your computer. Someone else distributes them; you say, no, those are mine. Copyright law will let me take this down.

1616.605 - 1637.055 Nilay Patel

When you have deep fakes, there is no original. It's not a copy of something that you've made or that you own. You have to come up with some other way to do it, right? You have to come up with some other mechanism, whether that's just a law that says this is not right, or it's some other idea like the right to your likeness. Where have most of the existing laws landed there?

1638.234 - 1661.743 Adi Robertson

The copyright issue is actually something that came up with non-synthetic, non-consensual pornography, because, say, if one of your partners took a nude picture of you, you don't own that picture. And that was already just a huge loophole. Legislators have spent about a decade trying to make laws that meaningfully address non-consensual pornography that's not AI-generated.

1662.223 - 1679.132 Adi Robertson

And the frameworks they've come up with are getting ported over to AI-generated imagery. A lot of it is about, all right, this is harassment, this is obscenity, this is some other kind of speech restriction that is allowable. A lot of non-consensual pornography is a kind of sexual harassment thing

1679.352 - 1693.939 Adi Robertson

that we can find ways to wall outside protected speech, and that we can target it in a way where it's not going to necessarily take down huge amounts of other speech, the way that, say, just banning all AI-generated images would.

1694.86 - 1703.264 Nilay Patel

There are a bunch of state laws around non-consensual AI-generated pornography. What states are those, and is there any federal law on the horizon?

1703.988 - 1722.901 Adi Robertson

There's California; New York is another; there's Texas. At the federal level, there have been attempts to work this in. It's not a criminal statute, but there is a federal civil right to sue over non-synthetic, non-consensual pornography.

1723.061 - 1740.479 Adi Robertson

And there have been attempts to work AI into that and say, all right, well, it's not a crime, but it's a thing that you can sue for under, I believe it is the reauthorization of the Violence Against Women Act. And then there have been attempts to, like you mentioned, just tie all of this into a big federal likeness law.

1740.94 - 1765.841 Adi Robertson

So likeness laws are a mostly state-level thing that says, all right, you can't take Taylor Swift and make her look like she's advertising your Instant Pot. And so there have been some attempts to make a federal version of that. But likeness laws are really tricky because they're so much broader that they end up catching things like parody and satire and commentary.

1766.561 - 1773.223 Adi Robertson

And they're just, I think, much riskier than trying to create really targeted, specific use laws.

1773.965 - 1795.164 Nilay Patel

The idea that someone should be in absolute control of a photograph of themselves has only gained prominence over time. Emily Ratajkowski wrote that great essay for The Cut several years ago, where she said: a street photographer took a photo of me, I put it on my Instagram, and now he's suing me, and I'm arguing I can post his photo because it's a photo of me.

1795.444 - 1815.772 Nilay Patel

And that is a very complicated argument in that case. But the idea that you should be in total control of any photo of you, I think a lot of people just instinctively believe that. And I think likeness law is what makes that have legal force. But you're saying, oh, there's some stuff here you wouldn't want to pull under that umbrella.

1816.539 - 1838.043 Adi Robertson

If you're talking about non-synthetic stuff, then there are all kinds of documentaries and news reports and really things that people have a public interest in making where you don't want to give someone the right to say you cannot depict me in a thing. And in that case, it's doing something I actually did. But AI-generated images raise the whole other question, which is, OK, so what –

1838.323 - 1856.337 Adi Robertson

Where do you draw the line between an AI-generated image and a Photoshop of someone and a drawing of someone? Should you not be able to depict any person in a situation that they don't want to be depicted in, even if that situation is something that would just broadly be protected by the First Amendment?

1856.937 - 1872.869 Adi Robertson

Like, where do we think that the societal benefit of preventing a particular usage that hurts someone should be able to override the interest we have in just being able to write about or create images of someone?

1872.889 - 1876.672 Nilay Patel

We have to take another quick break. We'll be right back.

1885.589 - 1904.063 The Refinery

Support for this show comes from The Refinery at Domino. Look, location and atmosphere are key when deciding on a home for your business, and The Refinery can be that home. If you're a business leader, specifically one in New York, The Refinery at Domino is an opportunity to claim a defining part of the New York City skyline.

1904.503 - 1924.676 The Refinery

The Refinery at Domino is located in Williamsburg, Brooklyn, and it offers all the perks and amenities of a brand new building while being a landmark address that dates back to the mid-19th century. It's 15 floors of Class A modern office environment housed within the original urban artifact, making it a unique experience for inhabitants as well as the wider community.

1925.236 - 1944.647 The Refinery

The building is outfitted with immersive interior gardens, a glass-domed penthouse lounge, and a world-class event space. The building is also home to a state-of-the-art Equinox with a pool and spa, world-renowned restaurants, and exceptional retail. As New Yorkers return to the office, the refinery at Domino can be more than a place to work.

1945.027 - 1952.251 The Refinery

It can be a magnetic hub fit to inspire your team's best ideas. Visit therefinery.nyc for a tour.

2024.488 - 2061.558 Microsoft

Support for this episode comes from Microsoft. Thankfully, there's Microsoft Defender, all-in-one protection that can help keep our families safe when they're online. Microsoft Defender makes it easy to safeguard your family's data, identities, and privacy with a single security app across your devices.

2062.559 - 2071.844 Microsoft

Take control of your family's security by helping to protect their personal info, computers, and phones from hackers and scammers. Visit Microsoft365.com slash Defender.

2080.674 - 2095.316 Nilay Patel

We're back talking with Verge policy editor Adi Robertson about why it's really hard to limit either the creation or sharing of deepfakes. So that's the philosophical policy debate. You want to restrict this because in many cases it can be used to do very bad things.

2095.757 - 2116.021 Nilay Patel

There's some things that we absolutely want to forbid, but if we let that get too wide, we're going to start running into people's everyday speech. We're going to start running into absolutely constitutionally protected speech, like documentaries, like news reporting. That's pretty blurry. And I think the audience here, you should sit with that because that is pretty blurry.

2116.902 - 2127.45 Nilay Patel

On the flip side, there are two bills in Congress right now that purport to restrict this stuff. There's something called the No Fakes Act, which is Chris Coons, Marsha Blackburn, Amy Klobuchar, Thom Tillis.

2127.931 - 2145.467 Nilay Patel

And then after the Taylor Swift situation on X, there's something called the Defiance Act, which stands for the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which is quite a lot of words. Do they go towards solving the problem? Do you see differences there? Do you see them as being an effective approach?

2146.476 - 2170.411 Adi Robertson

The two bills are a little bit the thing I talked about, where one of them, the Defiance Act, is really specifically about: we want to look at non-consensual pornographic images, we define what that means, and we think that this particular thing we can carve out. There are lots of questions about, in general, how far you want to go in banning synthetic images. But it's really targeting porn:

2171.926 - 2192.972 Adi Robertson

sexually explicit pictures of real people. And I think things like the No Fakes Act, and I believe there's also something called the No AI Fraud Act, are much broader: we just think that you shouldn't be able to fake images of people. And we're going to make some carve-outs there, but the fundamental idea is that we want to create a giant federal likeness law.

2192.993 - 2209.358 Adi Robertson

And I think that's much riskier, because that is much more of a: we start from a point of saying that you shouldn't be able to fake an image of someone without their permission, and then we're going to create some opt-ins, some options where you're allowed to do it.

2210.358 - 2223.082 Adi Robertson

And I think that raises so many more of these questions that do we really want to create a federal ban on being able to create a fictionalized image of somebody?

2224.95 - 2248.058 Nilay Patel

That is the likeness law approach to it, which has big problems of its own. Another approach we've heard about on Decoder is rooted in defamation law. So Barack Obama was on Decoder. He said there are different rules for public figures than 13-year-old girls. We're going to treat them differently. We should have different rules for what you can do with a public figure than teenagers.

2248.379 - 2265.253 Nilay Patel

We should have different rules for what is clearly political commentary and satire versus cyberbullying. And then Senator Brian Schatz was recently on and he said something similar. Is defamation where this goes? Where it's, hey, you made a deep fake of me. Maybe it's my likeness. But you're actually defaming my character. And you did it on purpose.

2265.834 - 2274.643 Nilay Patel

And that rises to the level of you knowingly telling a lie about me. And defamation law is what's going to punish you for this instead of some law about my likeness.

2275.834 - 2297.856 Adi Robertson

Defamation law has already come up with text-based generative AI, where if something like ChatGPT tells a lie about you, are you allowed to say they're making things up about me? I can sue. And I think the benefit of defamation law is that there is a really huge framework for hammering out when exactly something is an acceptable lie and when it's not.

2298.476 - 2318.623 Adi Robertson

That, all right, well, would a reasonable person believe that this thing is actually true, or is this really obviously political commentary and hyperbole? I think that we're on at least more solid ground there than we are with just saying, all right, fine, you know what, just ban deepfakes. I do think that still defamation law is complicated.

2318.663 - 2340.782 Adi Robertson

And every time you open up defamation law, as Donald Trump has once suggested, you end up getting a situation where, in a lot of cases, it's very powerful people throwing the law against people who don't necessarily have the money to defend themselves. And in general, I'm cagey about trying to open up defamation law.

2341.402 - 2348.108 Adi Robertson

But it is a place where at least you have a framework that people have spent a very long time talking about.

2350.198 - 2369.404 Nilay Patel

One thing we constantly say here at The Verge is that copyright law is the only real law on the internet because it's the only speech regulation that everyone just kind of accepts. Defamation law is not a speech regulation that everyone just accepts. It has boundaries. The cases go back and forth. The idea that there should be a federal right to likeness doesn't even exist yet.

2370.124 - 2390.263 Nilay Patel

So that feels like it will be very controversial if it happens as a speech regulation. But at the heart of that is the First Amendment, right? People have such a strong belief in the First Amendment that saying the government should make a speech regulation, even if something is really bad, is an extraordinarily complicated and high barrier to cross. Do you see that changing in the context of AI?

2391.05 - 2406.457 Adi Robertson

When a new technology comes along, there are a large number of people who don't necessarily think about it in terms of the First Amendment or of speech protections, where you're able to say, oh, well, this thing is just categorically different. We've never had technology like this before. The First Amendment shouldn't apply.

2408.238 - 2437.458 Adi Robertson

I always hope we don't go there with the technology because I think that the problems that come from just blanket outlawing it tend to be really huge. I don't know. I think that we're still waiting to see how disruptive AI tech actually is. We're still waiting to see whether it is meaningfully different from something like Photoshop, even though it seems intuitively like it absolutely should be.

2438.438 - 2440 Adi Robertson

But we're still waiting to see that play out.

2440.68 - 2463.98 Nilay Patel

We spent a lot of time talking about the visual side of it: we're going to make deepfake images, and those images have real-world harms, especially to young people, especially young women. In an election cycle, making it seem like Trump or Biden fell down the stairs could be very damaging. There's also the voice side of it, right? Where having Joe Biden do AI-generated robocalls is a real problem,

2465.131 - 2472.983 Nilay Patel

or convincing people on TikTok that Trump said something he didn't say is a real problem. Do any of these laws address that aspect of it?

2473.567 - 2496.564 Adi Robertson

If we're talking about non-internet systems like robocalls, then we actually have laws that aren't really even related to most of the things we've talked about. There's a rule called the TCPA that's an anti-robocall law, basically, that says you cannot just bombard people with synthetic phone calls. And it was recently decided that, all right, should artificial voices there include voice cloning?

2496.604 - 2512.721 Adi Robertson

Yes, obviously. Right. So at this point, things like robocall laws apply to AI. And so if you're going to try to get Joe Biden calling a bunch of people and telling them not to vote, that's something that just can be regulated under a very longstanding law.

2513.141 - 2516.845 Nilay Patel

What about a fake Joe Biden, Joe Rogan podcast on TikTok?

2517.626 - 2536.909 Adi Robertson

That raises really all the same questions that image-based AI raises. In some ways, it's probably going to be harder to detect and regulate against at a non-legal, platform level, because so much stuff is optimized for detecting images. And so in some ways, it's maybe even a thornier problem.

2538.03 - 2555.388 Adi Robertson

And also, on the other hand, voice impersonation was a thing before this, that there were really good impersonators of celebrity voices. And so I think that that might be a technically harder problem to fix, but I think that the legal questions it raises are very similar.

2556.513 - 2575.452 Nilay Patel

All right. So we've arrived at what I would describe as existential crisis. Many, many problems. One set of things seems like it should clearly be illegal: deepfake non-consensual pornography. Everything else seems kind of up for grabs. How should people be thinking about these challenges as they go into this election year?

2576.253 - 2602.662 Adi Robertson

There are a bunch of really hard technical issues. And a lot of those issues are going to be irrelevant to people because so many people do not check even very obviously fake information because of a variety of reasons that do not have anything to do with it being undetectable as a fake. I think that trying to actually make yourself care about whether something is true is...

2603.602 - 2624.638 Adi Robertson

in a lot of ways a bigger, more important step than making sure that nothing false is capable of being produced. I think that's the place where huge numbers of people have fallen down and where huge numbers of people have fallen for things. And I think that while all of these other issues we've been talking about are incredibly important,

2625.599 - 2633.622 Adi Robertson

This is just a big individual psychological thing that people can do on their own that does not come naturally to a lot of us.

2637.223 - 2652.208 Nilay Patel

Thanks again to Verge policy editor Adi Robertson for joining us on Decoder. These issues are so challenging, and she always helps me understand them so much more clearly. If you have thoughts about this episode or what you'd like to hear more of, you can email us at decoder@theverge.com. We really do read every email and we talk about them quite a bit.

2652.668 - 2666.742 Nilay Patel

You can also hit me up directly on Threads at @reckless1280. We also have a TikTok. It's a lot of fun. Check it out. It's @decoderpod. If you like Decoder, please share it with your friends. Subscribe wherever you get your podcasts. If you really love the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network.

2667.062 - 2673.549 Nilay Patel

Today's episode was produced by Kate Cox and Nick Statt. It was edited by Callie Wright. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
