Decoder is off this week for a short end-of-summer break. We'll be back with both our interview and explainer episodes after the Labor Day holiday. In the meantime, we thought we'd re-share an explainer that's taken on a whole new relevance in the last couple of weeks, about deepfakes and misinformation. In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially deepfakes and manipulated media. It's not been quite an apocalyptic AI free-for-all out there. But the election itself took some really unexpected turns in these last couple of months. Now we're heading into the big, noisy home stretch, and the use of AI is starting to get really weird — and much more troublesome.
Links:
The AI-generated hell of the 2024 election | The Verge
AI deepfakes are cheap, easy, and coming for the 2024 election | Decoder
Elon Musk posts deepfake of Kamala Harris that violates X policy | The Verge
Donald Trump posts a fake AI-generated Taylor Swift endorsement | The Verge
X's Grok now points to government site after misinformation warnings | The Verge
Political ads could require AI-generated content disclosures soon | The Verge
The Copyright Office calls for a new federal law regulating deepfakes | The Verge
How AI companies are reckoning with elections | The Verge
The lame AI meme election | Axios
Deepfakes' parody loophole | Axios
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisition to broaden Amgen's therapeutic reach, expand its pipeline, and accelerate bringing new and innovative medicines to patients in need globally.
They found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories. Creating highly advanced AI is complicated, especially if you don't have the right storage, a critical but often overlooked catalyst for AI infrastructures.
Solidigm is storage optimized for the AI era. Offering bigger, faster, and more energy-efficient solid state storage, Solidigm delivers the capability to meet capacity, performance, and energy demands across your AI data workloads. AI requires a different approach to storage. Solidigm is ready for everything the AI era demands. Learn more at storageforai.com.
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. We're on a short summer break right now. We'll be back after Labor Day with new interview and explainer episodes, and we're pretty excited about what's on the schedule.
In the meantime, we thought we'd reshare an explainer that's taken on a whole new relevance these last couple weeks. It's about deepfakes and misinformation. In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially AI-generated deepfakes and manipulated media.
At the time, the biggest news in AI fakes was a robocall with an AI version of Joe Biden's voice. It's been about six months, and while there hasn't been quite an apocalyptic AI free-for-all out there, the election itself took some pretty unexpected turns.
Now we're headed into the big, noisy homestretch before Election Day, and the use of AI is starting to get really weird and much more troublesome. Elon Musk's X has become the de facto platform for AI-generated misinformation, and Trump's campaign has also started to boost its own AI use.
For the most part, these AI stunts have been for cheap laughs, unless Taylor Swift decides to sue the Trump campaign. But as you'll hear Adi and I talk about in this episode, there are not a lot of easy avenues to regulate this kind of media without running headlong into the First Amendment, especially when dealing with political commentary around public figures.
There's a lot going on here and a lot of very difficult problems to solve that haven't really changed since we last talked about it. Okay, AI deepfakes during the 2024 election. Here we go. Adi Robertson, how are you doing? Hi, good. You've been tracking this conversation for a very long time. It does seem like there's more nuance in the disinformation conversation than before.
It's not just "Russia made people elect Trump," which is, I think, where we were in 2016. Can you just give a background? What's the shape of how people are thinking of disinformation right now?
We've had, I think, about three major U.S. presidential election cycles where disinformation was a huge issue. There was 2016, where there was a lot of discussion in the aftermath about, all right, was there foreign meddling in the election? Were people being influenced by these coordinated campaigns?
There was 2020, where deepfakes technically did exist, but generative AI tools were just not as sophisticated. They were not as easy to use. They were not nearly as prevalent. And so there was a huge conversation about what role social platforms play in preventing manipulated information generally. And there was, in a lot of ways, a huge crackdown. There was the entire issue of Stop the Steal.
There are these large movements that are trying to just lie about who won the election. What do we do? There were questions about, all right, do we kick Trump off social networks? These were the locus of debate. And now it's 2024, and we have in some ways I think a little bit of a hangover from 2020 where platforms are really tired of policing this.
And so they're dealing with, all right, how do we renegotiate this for the 2024 election? And then you have this whole other layer of generative AI imagery, whether or not you want to technically call it deepfakes is like an open question. And then there are all the layers of how that gets disseminated and whether that turbo charges a bunch of issues that already existed.
So "the platforms are getting tired of this" is worth talking about for one second longer. There was a huge rush of, how do we make ultra-sophisticated content moderation systems? And I think the pinnacle of that rush was Facebook setting up its Oversight Board, which is effectively a Supreme Court of content moderation decisions. And that was seen as, okay, Facebook is as big as a state.
It has the revenue of a state. It's a government now. It's going to have some government-like functions to regulate speech on its platforms. That didn't pan out, right? The Oversight Board exists. It moves very slowly. It's, I think, hard for the average Facebook user or average Instagram user to think there's a moderating force involved in content moderation on this platform.
It's the same as it ever was from the user's perspective, as far as I can tell.
Yeah, I think what the Oversight Board tends to do, and this is maybe what's comparable to the Supreme Court, is sophisticated outside thinking about what a consistent moderation framework looks like. But the Supreme Court in real life does not adjudicate every single complaint that you have. You have a whole bunch of other courts. Facebook doesn't really have those other courts.
Facebook has a gigantic army of moderators who don't always necessarily even see its policies. So, yeah, it's this very macro level. We're going to do the big thinking. But also, even at the time, there was the question of, is this really just Facebook or now Meta kind of outsourcing and kicking the can out of its court and putting the hard questions on other people?
I wanted to bring that up specifically because that was the pinnacle, I think, of the big thinks about content moderation. Since that time, the companies have all done lots of layoffs. We've seen trust and safety diminished across the board. I think most famously with Twitter, now X, Elon Musk basically decimated the trust and safety team on that platform.
It appears Linda Yaccarino is trying to bring some of them back. But the idea that content moderation is the thing these platforms have to do is no longer in vogue, I think, the way it was when the Oversight Board was created.
Yeah. And part of this is also political, that there was a huge, largely, again in the U.S., right-wing backlash to this, that this was the kind of thing that would get a state attorney general mad at you and get a congressional committee to investigate you, as it ended up doing with pre-Musk Twitter. I think that, yeah, there became a real political price for doing this as well.
Since then, some platforms have let Donald Trump back on. They've said, all right, but we cannot possibly moderate every single lie on this. We're going to just wash our hands of whether you're saying the election was stolen or not.
So yeah, let's go through the new players and how they might turbocharge the disinformation conversation now. And then let's talk about what might be done about it. I do just wanna emphasize for the audience, it doesn't seem like the desire to regulate information on social networks is nearly as high as it has been in the past.
And I think that is an important thing to start with because the technical challenges are so hard that wanting to solve them is actually an important component of the puzzle. But let's talk about the actual technical challenges and the players behind them. OpenAI, that's a new company. There are a lot of other new companies in various stages of controversy.
So Midjourney exists; that is an image generator. Stability AI exists, another image generator. They're getting sued by Getty for allegedly using the Getty library to train a model that produces images that look like Getty photos, which in this context is very important. Midjourney is getting sued as well. OpenAI is getting sued for training on the New York Times database.
Just a few days ago, OpenAI announced Sora, its text-to-video generator, which frankly makes terrifying videos. All those videos look terrifying. But you can see how an enterprising scammer could immediately use that to make something that looks like compelling video of something that didn't happen. All of these companies talk about AI alignment, making sure AI doesn't go off the rails.
Where's the AI industry broadly on we shouldn't do political deepfakes? Do they have a unified point of view or are they all in different spots? How's that working out?
The companies are in slightly different spots, but they actually have come together. Very recently, they signed an accord that says, look, we're going to take this seriously. They've announced policies of varying levels of strictness, but they tend toward: if you're a major AI company, you're going to try to prevent people from creating information that maybe looks bad for public figures.
Maybe you ban producing images of recognizable figures altogether or you try to. And you have something in your terms of service that says if you're using this for political causes or if you're creating deceptive content, then we can kick you off.
One challenge here in America is the existence of the First Amendment. The Biden administration recently did an executive order saying, don't do bad stuff. And these companies all agreed, OK, we won't do bad stuff. But the United States government is pretty restricted in saying you can't make deep fakes of other people because the First Amendment exists and it can't control that speech directly.
Are the companies rising to that challenge and saying we will self-regulate because the government can't directly regulate us?
We don't know necessarily how good the enforcement of it is going to be, but the companies seem so far pretty open to the idea of self-regulation, in part because I think this isn't just a civic-minded political thing. Dealing with unflattering stuff about real people is just a minefield they don't want. That said, there are also just open source tools out there.
Stability AI is pretty close to open source. It's pretty easy to go in and make a thing that builds on it that maybe strips away the safeguards you get in its public version. So it's just not quite equivalent to the way that, say, social platforms are able to completely control what's on their platforms.
So you've got a handful of companies with varying sets of restrictions, a broad general industry consensus. We shouldn't do deep fakes. And then you have reality, which is that there are deep fakes of celebrities all the time. There are deep fakes of teenage girls in high schools that are getting circulated on private message boards. It is happening. What can be done to stop it?
Does stopping mean that you're just trying to limit the spread to where this doesn't become a huge viral thing that a bunch of people see, but it still may be technically possible to create this? Or do you want to say, all right, we have a zero tolerance policy. If anything is created with any tool anywhere, even if someone keeps it to themselves, that is unconscionable.
Let's start with the second one, which I think has the more obvious answer. Saying no deepfakes are allowed whatsoever seems like it comes with a host of unintended consequences for speech, and it also seems impossible to actually accomplish because of the existence of open source tools. Like, I think: how would you actually enforce a total ban on deepfakes?
And the answer is that Intel and Apple and Qualcomm and Nvidia and AMD and every other chip maker have to prevent it somehow at the hardware level, which seems impossible. The only example I can think of where we have allowed that to happen is that Adobe Photoshop won't allow you to scan and print a dollar bill
which makes sense, like it broadly makes sense that Adobe made that deal with the government. But it's also like, well, that's about as far as you should let that go, right? Like there's a point where you wanna make a parody image of a Biden or a Trump, and you don't want Photoshop saying, hey, are you manipulating a real person's face? Like you're saying, that seems way too far.
So a total ban seems implausible. There are other things you could do at the creation step. OpenAI bans certain prompts that violate its terms of service. Getty won't let you talk about celebrities at all. If you type a celebrity's name or basically any proper noun into the Getty image generator, it just tells you to go away.
There's a lot of conversation about watermarking this stuff and making sure that real images have a watermark that says they're real images and AI images have a watermark that says they're AI images. Do any of those seem promising?
The most promising argument I've heard for these is the idea that you can – and this is an argument that Adobe has made to me – train people to expect a watermark. And so if what you're saying is we want to make it impossible to make these images without a watermark, I think that raises the same problems that we just talked about: if anyone can make a tweaked version of an open source tool, they can just say, don't put a watermark in. But I think that you could potentially get into a situation where you require a watermark, and if something doesn't have a watermark, there are ways that its design or its spread or people's trust in it are severely hobbled. That's maybe the best argument for it I've heard.
The part where you restrict the prompts. OpenAI restricts the prompts, Getty restricts the prompts. It's pretty easy to get around that, right? The Taylor Swift deep fakes that were floating around on Twitter, they were made in a Microsoft tool and Microsoft just had to get rid of the prompts. Is that just a forever cat and mouse game on the restrict the prompts idea?
It does seem like the thing about a lot of generative AI tools is that there are just vast, vast numbers of ways to get them to do something. People are going to find those. Software bugs are a thing that has been a problem. Zero-day exploits have been a problem on computers for a very long time. And this feels like it kind of falls into that category.
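For readers who want to see why these prompt restrictions are so easy to route around, here is a minimal, purely illustrative Python sketch of a blocklist-style check. It is not how OpenAI, Getty, or Microsoft actually implement their filters; the names and logic are assumptions made for the example.

```python
# Hypothetical sketch of a naive prompt filter, not any vendor's actual system.
# It illustrates why blocklist-style restrictions become a cat-and-mouse game:
# exact-match name checks are trivial to evade with nicknames or descriptions.

BLOCKED_FIGURES = {"joe biden", "donald trump", "taylor swift"}  # illustrative list only

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt names a blocked public figure."""
    lowered = prompt.lower()
    return not any(name in lowered for name in BLOCKED_FIGURES)

if __name__ == "__main__":
    print(is_prompt_allowed("a portrait of Taylor Swift"))         # False: caught by the blocklist
    print(is_prompt_allowed("a famous blonde pop star on stage"))  # True: slips right through
```

Exact-match checks fail the moment someone describes a person instead of naming them, which is one reason the restrict-the-prompts approach ends up as an ongoing chase rather than a fix.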
That's the creation side. We need to take a quick break. When we come back, we'll get into the harder problem, distribution.
Support for this podcast and the following message is brought to you by E-Trade from Morgan Stanley. Take control of your financial future with E-Trade. No matter what kind of investor you are, our tools and resources can help you be ready for what's next. Now when you open an account, you can get up to $1,000 with a qualifying deposit. Terms apply. Learn more at etrade.com slash vox.
Investing involves risks. Morgan Stanley Smith Barney LLC, member SIPC. E-Trade is a business of Morgan Stanley.
They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets. They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at Citi.com slash WeAreCiti.
That's C-I-T-I dot com slash WeAreCiti.
Vox Creative. This is advertiser content from Zelle. When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple.
We need to talk to each other. We need to have those awkward conversations around, what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a — thank goodness — smaller-dollar scam, but he fell victim, and we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash Zelle. And when using digital payment platforms, remember to only send money to people you know and trust.
Welcome back. So we've talked about what the companies that make software and hardware can do about the creation of deepfakes, and it seems like the best answer we have right now is adding watermarks to AI-generated content. Now let's talk about the distribution side, which is, I think, where the real problem lies.
If you make a bunch of deepfakes of Donald Trump at your house and you never share them with anyone, what harm have you caused? But if you start telling lies about both presidential candidates and you share them widely on social platforms and they go viral, now you have caused a giant external problem. And so it feels like the pressure to regulate this stuff is going to come back to the platforms.
And again, I think the desire of the platforms to moderate waxes and wanes, and it feels low right now, maybe it'll ramp back up. Where are the platforms right now with the deep fake distribution problem?
So far, it feels like the consensus is, we're going to label this, and that's going to be mainly our job: we're going to try to make sure we catch it. There are cases where, say, maybe you get it taken down if you haven't disclosed it, if you're a company or you're buying a political ad.
But broadly, the idea seems to be we want to give people information and tell them that this is manipulated and then they can make their own call.
The one platform that stands out to me, and you and I have talked about this a lot, is YouTube, which has an enormous dependency on the music industry. The music industry is not happy about AI-generated covers using the voices of its artists. Notably, Fake Drake caused a huge kerfuffle. Universal Music Group went to Google. They announced some rules.
They're going to prevent deepfakes or allow some licensing so the money flows back to the artists. That is a very private, industry-specific sort of licensing scheme that sits outside of the law and outside of the other platforms. Do you think YouTube is going to lead the way here because it has that pressure, or is that just a one-off for the music industry?
I feel like the incentives for something like the music industry and for things that are basically aesthetic deep fakes, I think the incentives there are very different than they are for political manipulated imagery. That a lot of the question with YouTube is, okay, you are basically parodying someone in a way that may or may not legally be considered parody.
And we can make a deal where that person, really, all they want is to get paid, right? And maybe they want something sufficiently controversial taken down. But if you give them some money, they'll be happy. That's just not really the issue at hand with political generated images. The problem there is around reputation. It's around people who do, at least in theory, care about: did this person say this thing? Is this true? So I just don't know that you could cut a deal with Joe Biden that says every time you make something up about him, he gets a penny.
I feel like politicians are always asking for donations. Maybe that's just the way to solve the problem. You just pay for the lies as long as politicians are getting paid. From what I gather, particularly Donald Trump, as long as he's getting paid, he might be cool with it.
Outside of YouTube, which does have this big dependency on the labels and licensing, and so I think is leading the way on having any particular policy with specificity, do any of the other platforms have ideas here that are more than, we have an existing policy and we'll see how it works with the problem?
There are companies that are signing on to an initiative called C2PA, which is a content provenance system (we were talking about watermarks earlier). It includes a watermark that carries metadata, and the goal there is that you will be able to at least tell where something has come from and whether it's manipulated. It's supposed to be a broad, industry-wide, everybody-has-the-same-watermark system, so it's very easy to take an image, pop it in, and check whether it has the watermark. That's one of the leading ways the AI industry at this point is trying to deal with truth and provenance.
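To make that "pop it in and check" step concrete, here is a rough Python sketch of a presence check for a C2PA manifest. It is a heuristic, not real verification: C2PA manifests are embedded as labeled metadata inside the file, and actually validating one means parsing the manifest and checking its cryptographic signatures with a proper SDK. The byte-scan approach below is an assumption made purely for illustration.

```python
# Rough, illustrative heuristic: scan a file's raw bytes for the "c2pa" label that
# embedded C2PA manifests carry. This only suggests a manifest *appears* to be present;
# real verification requires a C2PA SDK that parses and cryptographically validates it.

from pathlib import Path

def seems_to_have_c2pa_manifest(image_path: str) -> bool:
    """Return True if the raw bytes contain a 'c2pa' label (a weak presence check)."""
    data = Path(image_path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:  # pass one or more image paths on the command line
        status = "appears to have" if seems_to_have_c2pa_manifest(path) else "does not appear to have"
        print(f"{path}: {status} an embedded C2PA manifest")
```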
Is that shipped yet? I feel like we've been talking about C2PA and the Content Authenticity Initiative. We've had Dana Rao, Adobe's general counsel, on the show. He said the deepfake of the pope wearing a puffer jacket was, quote, a catalyzing event for content provenance, which is an amazing quote, and all credit to Dana for it. But there are people wanting to do it.
There's the activity we see, and then there's shipping it. Has that shipped anywhere? Can you go look at it?
Watermarks are rolling out places. OpenAI adopted them in mid-February. They're starting to appear on DALL-E images. You can look at them in Photoshop. I think the problem is more that this thing rolled out, but really most people are not going to care enough to check.
Well, unless the labels are in their face, right? Unless you are scrolling on TikTok and you see something and TikTok puts a big label right over the top of it that says, this is AI, which doesn't seem to be happening anywhere. OpenAI did Sora, its video generator. The videos are compelling, although they have some extremely terrifying errors in them.
There's not like a big label on them that says this is AI-generated. They're going to travel without the context of OpenAI having produced them to promote its AI tool. And even that seems dangerous to me.
Yeah, a lot of the issue with C2PA right now is that you have to actually go in and pop it into a tool to check the metadata, which is just an extra step that the vast majority of people are not going to take. And that, yes, it's not applying to things like Sora yet, at least as far as OpenAI has told us. So there is not a really prominent in your face, this thing is AI in most cases.
Can you remove those watermarks?
I mean, a screenshot tool, as far as I can tell, can remove the watermarks. And I think there are ways that you can end up just stripping these things out. It's very, very hard to create a perfect watermarking system.
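As a sketch of why that is true for metadata-based watermarks, here is roughly what a screenshot or re-save does: decode the image to pixels and re-encode it, which by default produces a new file carrying none of the original's embedded metadata segments, where provenance data lives. This assumes the Pillow library and uses placeholder file names; it is an illustration, not a statement about any specific provenance system.

```python
# Sketch of why screenshot-style re-encoding defeats metadata watermarks:
# decoding to raw pixels and saving again writes a fresh file that, by default,
# carries none of the original's embedded metadata segments.

from PIL import Image

def reencode(src: str, dst: str) -> None:
    """Decode an image to raw pixels and write a fresh JPEG, dropping embedded metadata."""
    with Image.open(src) as im:
        im.convert("RGB").save(dst, format="JPEG", quality=90)

if __name__ == "__main__":
    reencode("labeled_ai_image.jpg", "stripped_copy.jpg")  # placeholder file names
```

Visible pixel-level watermarks survive this, but anything stored only as metadata does not, which is one reason a perfect watermarking system is so hard to build.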
Just so much of this relies on a lot of Adobe's argument being, well, eventually we want people to expect this, that it's going to be like, you know, if you look in your browser and you get a certificate warning that says there's no certificate for this webpage, maybe you won't trust the webpage.
I think the goal they're going for is the idea that everyone will be trained into expecting a certain level of authenticity. And I just don't think we're at that point. In some ways, these problems already existed. Photoshopped nudes have been a thing that has been used to harass people for a very long time.
And Photoshopped images of politicians and manipulated content about politicians are nothing new. A thing AI definitely does is scale up those problems a huge amount and make us confront them in a way that was maybe easier to ignore before, especially by adding a technology that the people who are creating it are trying to hype up in a way that sounds terrifying and world-ending for other reasons.
The problem with a lot of this is that you can't apply the kinds of paradigms that social media has, because it really only takes one person with the capability to do a thing. It takes one bad actor to make something that you can spread in huge variations that are hard to recognize across huge numbers of platforms. I think that raises slightly different problems than, say, there's this big account on social media that's spreading something, so all right, Facebook can ban them.
So we've talked a lot about what the platforms can do, what the AI companies can do as private companies, these initiatives like content authenticity. That's a private framework. The government has some role to play here, right? The big challenge is that the First Amendment exists in the United States, and that really directly restricts the government from making speech regulations. And then, you know, we have a patchwork of state and federal laws. What is the current state of deepfake law? Are you allowed to do it? Are you not allowed to do it? How does that work?
It's mostly a small patchwork of laws, a huge number of them of varying likely constitutionality that people are debating, and not a whole lot at the federal level.
There are a lot of different problems that AI-generated images pose, and there are cases where individual states have passed rules for individual problems. There are a few states that incorporate, say, non-consensual AI pornography into their general non-consensual or revenge porn rules. There are a few states with rules about how you have to disclose manipulated images for elections. And there are some attempts in Congress, or in, say, the FEC and other government regulatory agencies, to create a larger framework. But we are still in this large, chaotic period of people debating things.
Let's start with non-consensual deepfake pornography, which I think everybody agrees is a bad thing that we should find ways to regulate away. A solution to revenge porn broadly on the internet is copyright law, right? You have made these files with your phone or your computer, someone else distributes them, and you say, no, those are mine. Copyright law will let me take this down.
When you have deep fakes, there is no original. It's not a copy of something that you've made or that you own. You have to come up with some other way to do it, right? You have to come up with some other mechanism, whether that's just a law that says this is not right, or it's some other idea like the right to your likeness. Where have most of the existing laws landed there?
The copyright issue is actually something that came up with non-synthetic, non-consensual pornography, because, say, if one of your partners took a nude picture of you, you don't own that picture. And that was already just a huge loophole. Legislators have spent about a decade trying to make laws that meaningfully address non-consensual pornography that's not AI-generated.
And the frameworks they've come up with are getting ported over to AI-generated imagery. A lot of it is about, all right, this is harassment, this is obscenity, this is some other kind of speech restriction that is allowable. A lot of non-consensual pornography is a kind of sexual harassment that we can find ways to wall off outside protected speech, and that we can target in a way where it's not going to necessarily take down huge amounts of other speech, the way that, say, just banning all AI-generated images would.
There are a bunch of state laws around non-consensual AI-generated pornography. What states are those, and is there any federal law on the horizon?
There's California, New York is another, there's Texas. At the federal level, there have been attempts to work this in. It's not a criminal statute, but there is a federal civil right to sue if you've been the subject of non-synthetic, non-consensual pornography.
And there have been attempts to work AI into that and say, all right, well, it's not a crime, but it's a thing that you can sue for under, I believe it is the reauthorization of the Violence Against Women Act. And then there have been attempts to, like you mentioned, just tie all of this into a big federal likeness law.
So likeness laws are a mostly state-level thing that says, all right, you can't take Taylor Swift and make her look like she's advertising your Instant Pot. And so there have been some attempts to make a federal version of that. But likeness laws are really tricky because they're so much broader that they end up catching things like parody and satire and commentary.
And they're just, I think, much riskier than trying to create really targeted, specific use laws.
The idea that someone should be in absolute control of a photograph of themselves has only gained prominence over time. Emily Ratajkowski wrote that great essay for The Cut several years ago: a street photographer took a photo of her, she posted it on her Instagram, he sued her over it, and she essentially argued that she should be able to use the image because it's a photo of her.
And that is a very complicated argument in that case. But the idea that you should be in total control of any photo of you, I think a lot of people just instinctively believe that. And I think likeness law is what makes that have legal force. But you're saying, oh, there's some stuff here you wouldn't want to pull under that umbrella.
If you're talking about non-synthetic stuff, then there are all kinds of documentaries and news reports and really things that people have a public interest in making where you don't want to give someone the right to say, you cannot depict me in a thing. And in that case, it's depicting something I actually did. But AI-generated images raise this whole other question, which is, OK, so where do you draw the line between an AI-generated image and a Photoshop of someone and a drawing of someone? Should you not be able to depict any person in a situation that they don't want to be depicted in, even if that situation is something that would broadly be protected by the First Amendment? Yeah.
Like, where do we think that the societal benefit of preventing a particular usage that hurts someone should be able to override the interest we have in just being able to write about or create images of someone?
We have to take another quick break. We'll be right back.
Support for this show comes from The Refinery at Domino. Look, location and atmosphere are key when deciding on a home for your business, and The Refinery can be that home. If you're a business leader, specifically one in New York, The Refinery at Domino is an opportunity to claim a defining part of the New York City skyline.
The Refinery at Domino is located in Williamsburg, Brooklyn, and it offers all the perks and amenities of a brand new building while being a landmark address that dates back to the mid-19th century. It's 15 floors of Class A modern office environment housed within the original urban artifact, making it a unique experience for inhabitants as well as the wider community.
The building is outfitted with immersive interior gardens, a glass-domed penthouse lounge, and a world-class event space. The building is also home to a state-of-the-art Equinox with a pool and spa, world-renowned restaurants, and exceptional retail. As New Yorkers return to the office, the refinery at Domino can be more than a place to work.
It can be a magnetic hub fit to inspire your team's best ideas. Visit therefinery.nyc for a tour.
Support for this episode comes from Microsoft. Thankfully, there's Microsoft Defender, all-in-one protection that can help keep our families safe when they're online. Microsoft Defender makes it easy to safeguard your family's data, identities, and privacy with a single security app across your devices.
Take control of your family's security by helping to protect their personal info, computers, and phones from hackers and scammers. Visit Microsoft365.com slash Defender.
We're back talking with Verge policy editor Adi Robertson about why it's really hard to limit either the creation or sharing of deepfakes. So that's the philosophical policy debate. You want to restrict this because in many cases it can be used to do very bad things.
There's some things that we absolutely want to forbid, but if we let that get too wide, we're going to start running into people's everyday speech. We're going to start running into absolutely constitutionally protected speech, like documentaries, like news reporting. That's pretty blurry. And I think the audience here, you should sit with that because that is pretty blurry.
On the flip side, there are two bills in Congress right now that purport to restrict this stuff. There's something called the No Fakes Act, from Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis.
And then after the Taylor Swift situation on X, there's something called the Defiance Act, which stands for the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which is quite a lot of words. Do they go towards solving the problem? Do you see differences there? Do you see them as being an effective approach?
The two bills are a little bit the thing I talked about, where one of them, the Defiance Act, is really specifically about, we want to look at non-consensual pornographic images, we define what that means, and we think that this particular thing we can carve out. There are lots of questions about, in general, how far you want to go in banning synthetic images. But it's really targeting porn: sexually explicit pictures of real people. And then I think things like the No Fakes Act (I believe there's also something called the No AI Fraud Act) are much broader: we just think that you shouldn't be able to fake images of people, and we're going to make some carve-outs there, but the fundamental idea is that we want to create a giant federal likeness law.
And I think that's much riskier because that is much more a, we start from a point of saying that you shouldn't be able to fake an image of someone without their permission. And then we're going to create some opt-ins with some options where you're allowed to do it.
And I think that raises so many more of these questions that do we really want to create a federal ban on being able to create a fictionalized image of somebody?
That is the likeness law approach to it, which has big problems of its own. Another approach we've heard about on Decoder is rooted in defamation law. Barack Obama was on Decoder, and he said there are different rules for public figures than for 13-year-old girls; we're going to treat them differently. We should have different rules for what you can do with a public figure than with a teenager.
We should have different rules for what is clearly political commentary and satire versus cyberbullying. And then Senator Brian Schatz was recently on and he said something similar. Is defamation where this goes? Where it's, hey, you made a deep fake of me. Maybe it's my likeness. But you're actually defaming my character. And you did it on purpose.
And that rises to the level of you knowingly telling a lie about me. And defamation law is what's going to punish you for this instead of some law about my likeness.
Defamation law has already come up with text-based generative AI, where if something like ChatGPT tells a lie about you, are you allowed to say they're making things up about me? I can sue. And I think the benefit of defamation law is that there is a really huge framework for hammering out when exactly something is an acceptable lie and when it's not.
That, all right, well, would a reasonable person believe that this thing is actually true, or is this really obviously political commentary and hyperbole? I think that we're on at least more solid ground there than we are with just saying, all right, fine, you know what, just ban deepfakes. I do think that still defamation law is complicated.
And every time you open up defamation law, as Donald Trump once suggested, you end up getting a situation where, in a lot of cases, it's very powerful people throwing the law against people who don't necessarily have the money to defend themselves. And in general, I'm cagey about trying to open up defamation law.
But it is a place where at least you have a framework that people have spent a very long time talking about.
One thing we constantly say here at The Verge is that copyright law is the only real law on the internet, because it's the only speech regulation that everyone just kind of accepts. Defamation law is not a speech regulation that everyone just accepts. It has boundaries. The cases go back and forth. A federal right to likeness doesn't even exist yet.
So that feels like it will be very controversial if it happens as a speech regulation. But at the heart of that is the First Amendment, right? People have such a strong belief in the First Amendment that saying the government should make a speech regulation, even if something is really bad, is an extraordinarily complicated and high barrier to cross. Do you see that changing in the context of AI?
When a new technology comes along, there are a large number of people who don't necessarily think about it in terms of the First Amendment or of speech protections, where you're able to say, oh, well, this thing is just categorically different. We've never had technology like this before. The First Amendment shouldn't apply.
I always hope we don't go there with the technology because I think that the problems that come from just blanket outlawing it tend to be really huge. I don't know. I think that we're still waiting to see how disruptive AI tech actually is. We're still waiting to see whether it is meaningfully different from something like Photoshop, even though it seems intuitively like it absolutely should be.
But we're still waiting to see that play out.
We spent a lot of time talking about the visual side of it: we're going to make deepfake images. Those images have real-world harms, especially to young people, especially young women. In an election cycle, making it seem like Trump or Biden fell down the stairs could be very damaging. There's also the voice side of it, right? Having Joe Biden do AI-generated robocalls is a real problem,
or convincing people on TikTok that Trump said something he didn't say is a real problem. Do any of these laws address that aspect of it?
If we're talking about non-internet systems like robocalls, then we actually have laws that aren't really even related to most of the things we've talked about. There's a rule called the TCPA that's an anti-robocall law, basically, that says you cannot just bombard people with synthetic phone calls. And it was recently decided that, all right, should artificial voices there include voice cloning?
Yes, obviously. Right. So at this point, things like robocall laws apply to AI. And so if you're going to try to get Joe Biden calling a bunch of people and telling them not to vote, that's something that just can be regulated under a very longstanding law.
What about a fake Joe Biden, Joe Rogan podcast on TikTok?
That raises really all the same questions that image-based AI raises. In some ways, it's probably going to be harder to detect and regulate against at a non-legal, platform level, because so much stuff is optimized for detecting images. And so in some ways, it's maybe even a thornier problem.
And also, on the other hand, voice impersonation was a thing before this, that there were really good impersonators of celebrity voices. And so I think that that might be a technically harder problem to fix, but I think that the legal questions it raises are very similar.
All right. So we've arrived at what I would describe as existential crisis. Many, many problems. One set of things should clearly be illegal: deepfake non-consensual pornography seems like it should clearly be illegal. Everything else seems kind of up for grabs. How should people be thinking about these challenges as they go into this election year?
There are a bunch of really hard technical issues. And a lot of those issues are going to be irrelevant to people, because so many people do not check even very obviously fake information, for a variety of reasons that have nothing to do with it being undetectable as a fake. I think that trying to actually make yourself care about whether something is true is, in a lot of ways, a bigger, more important step than making sure that nothing false is capable of being produced. I think that's the place where huge numbers of people have fallen down and where huge numbers of people have fallen for things. And I think that while all of these other issues we've been talking about are incredibly important, this is just a big individual psychological thing that people can do on their own, and it does not come naturally to a lot of us.
Thanks again to Verge policy editor Adi Robertson for joining us on Decoder. These issues are so challenging, and she always helps me understand them so much more clearly. If you have thoughts about this episode or what you'd like to hear more of, you can email us at decoder@theverge.com. We really do read every email and we talk about them quite a bit.
You can also hit me up directly on Threads at @reckless1280. We also have a TikTok; it's a lot of fun. Check it out. It's @decoderpod. If you like Decoder, please share it with your friends. Subscribe wherever you get your podcasts. If you really love the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today's episode was produced by Kate Cox and Nick Statt. It was edited by Callie Wright. The Decoder music is by Breakmaster Cylinder. We'll see you next time.