
Links:
You're here because you said AI image editing was just like Photoshop | The Verge
No one's ready for this | The Verge
The AI photo editing era is here, and it's every person for themselves | The Verge
Google's AI 'Reimagine' tool helped us add disasters and corpses to photos | The Verge
X's new AI image generator will make Taylor Swift in lingerie and Kamala Harris with a gun | The Verge
Grok will make gory images — just tell it you're a cop | The Verge
Leica launches first camera with Content Credentials | Content Authenticity Initiative
You can use AI to get rid of Samsung's AI watermark | The Verge
Spurred by teen girls, states move to ban deepfake nudes | NYT
Florida teens arrested for creating 'deepfake' AI nude images of classmates | The Verge
Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisition to broaden Amgen's therapeutic reach, expand its pipeline, and accelerate bringing new and innovative medicines to patients in need globally.
They found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories.
Do you want to be a more empowered citizen but don't know where to start? It's time to sharpen your civic vision and ignite the spark for a brighter future. I'm Mila Atmos, and on my weekly podcast, Future Hindsight, I bring you conversations to translate today's most urgent issues into clear, actionable ways to make impact.
With so much at stake in our democracy, join us at futurehindsight.com or wherever you listen to podcasts.
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. We've been covering the rise of AI image editing very closely here on Decoder and at The Verge overall for several years now.
The ability to create photorealistic images with nothing more than a chatbot prompt has the potential to completely reset our cultural relationship to photography. And in particular, how much we instinctively trust photos to reflect the truth.
Every time we write about it or talk about it, we are loudly reminded by multiple people that the debate over image editing and the inherent truth of photos is nothing new, that it has existed for as long as photography itself, and that in particular, it's raged since digital photo editing tools became widely available.
You know this argument, you've heard it a million times, you've seen it in our comments, you've seen it on social media. It's when people say, "It's just like Photoshop," with "Photoshop" standing in for the concept of image editing generally.
So today we're going to dive into this argument, that response, and try to understand exactly what it means, and why our new world of AI image tools is different, and yes, in some cases the same. Verge reporter Jess Weatherbed recently dove into this for us, and I asked her to join me in going through the debate and the arguments one by one to help figure it out.
Because sure, in many ways, AI image editing really is just a faster, easier version of what people have been doing in Photoshop. Even Photoshop itself now has Adobe's AI tech Firefly built right into it. But making powerful tools instantly accessible to everyone has big consequences, and we are seeing those consequences right now.
Say you want to generate an image of Donald Trump pointing a gun at Kamala Harris. You can just ask Grok, the AI chatbot that Elon Musk has built right into X. It'll do it with no issues because it has very few of the same filters that have prevented competing AI products from depicting politicians or outright violence.
High schools around the country are being rocked by so-called nudification apps that make it trivial to create deepfake nudes of female students. It is already happening, and it is fast becoming a national crisis. You might say that these are old problems.
Photoshop and other image editors have let you do all sorts of awful things with no guardrails for years, and fake celebrity photographs have been a problem for decades. And even in the days before computers, you could create convincing fake images to mislead people. But the difference with generative AI is its scale.
We're giving these sophisticated tools to everyone with very little oversight, and that has landed us firmly in uncharted territory. And I'll just be direct here. My view is that people say it's just like Photoshop to diminish these new problems that AI tools are already causing, to make them seem already solved or worse, not worth considering.
But I would remind you that we have hardly solved any of these problems when it really was just Photoshop. And that any proposed solution that requires everyone to just understand that every image they see is edited isn't really much of a solution at all. So what are the problems? What are the differences? And what are the solutions? That's all in the AI versus Photoshop debate. Here we go.
Jess Weatherbed, welcome to Decoder. Hi, thank you. Your article about why AI image editing is not like Photoshop has, I think, some of the most comments of any piece we've published in the last year. Let's walk through what's going on with AI image editing and why it is in some ways similar to the Photoshop debate and in many ways different.
And the way I want to do this is by just going through the main arguments we hear. I assume the Decoder audience is aware of what AI image editing is and is aware of what Photoshop is. And honestly, the argument about whether images on the internet are real or fake or can be trusted has indeed been raging for years. But they're not quite the same thing.
So let's start with the first argument, which is you can already manipulate images in Photoshop. How does that relate to what we're seeing right now with AI image editing?
If we're taking it as a technical argument, it's not incorrect. Photoshop has been able to make edits like this for a very, very long time, but it completely ignores the main issue of all of this, which is scale.
I think you could probably point to that as being the single biggest factor that makes all of this worse: if you wanted to do this kind of thing in Photoshop, or any editing software really, there were so many skill and financial barriers stopping the general populace from doing so, which usually meant that those edits had to be done with intent. That could have been good or bad intent.
It just meant that there was a little bit more of a thought process behind it. You had to invest in Photoshop or find a free version of it, learn how to use all of the complex tools. And it would maybe take you, I don't know, like 20 minutes, maybe an hour sometimes to make a very photorealistic manipulation that you could use anywhere.
For nefarious purposes, if you were that way inclined. AI kind of scraps all of that. It's now landed on phones and web apps, and you can just open a window, tell it what you want to see, and it'll put it there for you. It completely changes the entire landscape of what we've been dealing with.
Let me push on the idea of scale for one second. It is possible to go on a website and hire people to use Photoshop for you for no dollars, for $5, and get an image that you want. You can go on to Reddit today and say, I have a picture of myself with my ex-partner, but the picture of me is really good. Can someone Photoshop them out of this picture?
And people will do it on Reddit right now for you for free. Is it that the scale is enabled by how accessible and cheap it is to do on a phone with an AI tool? Or is it that even asking someone else to do it for you, for cheap or free, requires you to say what you want, which is a deterrent to some people? Where does the scale come from?
The accessibility is the main thing, especially for the stuff that I think people are concerned about, right? Like the kind of slop that you're seeing on X, or Twitter, or whatever we're calling it these days. You're not going to go onto Reddit and go, I want to make a very memeable picture of a politician doing something grotesque. You're going to get pushback.
You might get banned from that platform. You're going to be barred from contacting those people in general. But if you're given the means to just enact that without any barriers, which is effectively what this tech is doing, that changes things. Half of the advertising for this is that you can recreate anything you can imagine.
I think that was pretty much Google's entire ad campaign for this, to the point where they've called their latest tool Reimagine. It's meant to be a creativity thing, but people's imaginations don't always come with good intentions. And yeah, describing that to an actual human being is going to be uncomfortable, or potentially illegal in some cases. Those barriers are just completely removed.
AI, again, has no qualms about whether you should be asking it to make something. It's just going to do it.
That wasn't always the case, right? For a while there, it felt like AI chatbots and image generators would be restricted. They simply wouldn't generate images of real people, especially not politicians like Trump or Biden. Some, like Google's Gemini, only very recently started letting paid users generate images of real people. And that still has a lot of restrictions around it.
But we've come a long way. Now it's pretty easy to get your hands on an AI tool. Maybe it's open source. Maybe it's built right into X, like the AI chatbot Grok, that will do pretty much whatever you want with very few filters. The other thing I always think about is the notion that the people who want to run scams or tell lies are often quite lazy.
And so Photoshop has existed for a long time, and some of the fake images we've seen have been so obviously fake, and they still have the effect people with bad intentions want, which is to confuse people or pretend there really were sharks floating around a city in a flood or whatever it is. Maybe the increased quality of the fake images from AI isn't the point.
Like the technology to be reasonably convincing about a lie has already existed, even though Photoshop is somewhat hard to use. Do you think that the scale combined with the increased visual fidelity is actually meaningfully different?
I think so. I think the barrier for who was being convinced by this stuff before was that there was still a good chunk of people who would see something for a split second and just automatically assume that, yeah, it's accurate, without actually noticing that someone's got, I don't know, eight fingers on each hand or something, even in the early days of image generation.
But the improvement to the general technology has definitely exacerbated it. And I think there is also an element of people just believing something with a narrative if it works for them anyway. They don't actually need that much substantive evidence for it.
If it's somewhere aligned with a position that they already have and they can use it as an illustrative guide, they're just going to run with it. So I do think it's making things considerably worse now that it's convincing the people who used to actually try and keep an eye out for obvious fakes. It's definitely exacerbating the issue considerably.
One of the other arguments we constantly hear is that people adapted to Photoshop. People understood that Photoshop existed. There were some controversies around fashion magazines using Photoshop to beautify people, and they got in trouble, and the practice stopped.
We've developed a sort of cultural vocabulary or understanding of Photoshop and how it can be used and should be used, and it will adapt to this as well. Does that feel convincing to you?
No, not at all. I think the Photoshop argument itself almost feels outdated already because it's become synonymous with the act of image editing in general. But the actual software itself was a barrier that hasn't really existed for a little while now since we've started having filtering apps and Facetune and stuff on our mobile phones. What it did was create a real societal problem.
As soon as Photoshop was introduced, this idea emerged that we need to be chasing perfection in everything that we do. That was pushed in marketing images, and it was negatively impacting body image and stuff. But it was also this idea that a picture should be perfect.
And you should maybe feel bad if it's not, or that you're less if you're not able to take stunning images like that. And ever since, we've had these filtering applications, which were incredibly limited: you could make yourself have smoother skin, maybe a slimmer jawline. They weren't massive changes.
I think it's become this kind of thing where, if we're given a tool now that can effectively run to the limits of our imagination, people are going to use it that way.
The opportunity of AI brings a multitude of challenges associated with rapid growth and expansion. Solidigm makes sure your data storage can keep up with your AI ambitions. Energy usage challenges and physical space limits are only going to increase as AI scales.
Outdated storage infrastructure based on spinning disks, also known as hard drives, which are decades-old technology, simply can't keep up. Solidigm offers power-efficient solid-state storage spanning from the highest capacities to the highest performance. It's storage-optimized for the AI era, meaning you can finally scale your AI with fewer limitations.
Growing AI ambitions require a different approach to storage. Solidigm solid-state storage can help you bring your AI ambitions to life. Learn more at storageforai.com. They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets.
They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at Citi.com slash WeAreCiti. That's C-I-T-I dot com slash WeAreCiti.
Vox Creative. This is advertiser content from Zelle. When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple.
We need to talk to each other. We need to have those awkward conversations around what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness, a smaller dollar scam, but he fell victim and we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash zelle. And when using digital payment platforms, remember to only send money to people you know and trust.
We're back with Verge reporter Jess Weatherbed. Before the break, we were discussing the idea that Photoshop and AI image editing exist as distinct forms of technology, because one has historically required a good amount of money, time, and effort, and the other now just requires you to have a smartphone.
That's created a problem of ease and scale that arguably makes the AI version of image editing a much harder issue for society to keep in check. You just heard us introduce the next argument, which is that society as a whole will adapt to AI the way we did with pre-AI image editing, and it'll be fine. But it's worth asking if the results of ubiquitous image editing have actually been fine.
After all, image editing is now a cornerstone of how we share photos online, whether it's Facetune and a selfie for Snapchat, or applying a filter to a sunset on Instagram.
It's also worth considering the major change with generative AI, which is that we're going to go from editing images to get them closer to perfect to simply creating images of whatever we want, or in some cases, manipulating images by using AI to create a realistic component that was never actually there. That's very different.
At least with Photoshop or app filters, you have to start with something. With AI, you can simply start with a prompt. How we think about the boundary between these technologies is important because it's moving quickly and not everyone can seem to agree on where it stops being one thing and starts being another.
I think if we're talking about this specific conversation, there's definitely a line. Photoshop itself was a revolutionary technology, right? That's what we're saying kicked all of this off on the digital side. Photo manipulation existed for way longer than that, but it was incredibly difficult. It wasn't just technical know-how, it was physical skill.
You had to know how to cut tiny little film rolls and be able to manipulate them under magnification, whereas Photoshop enabled you to do that without having to have all the expensive tools. It was still expensive software, but it's gradually become more accessible. I completely agree with you.
There's a difference between manipulating something that had substance to begin with and just creating something that is a complete false reality, something that never existed to begin with. And there's something to be said about intent there, which is why I think accessibility is one of the more significant concerns. Because even if you wanted to Photoshop an image of something, I don't know, like a lion in a playground or something, again, nefarious, you would usually do so for a giggle. Are you really going to put in that much effort to learn all the necessary skills? Pick up the tech, again, free or paid, which is expensive if you want to go down that route? Go through all that effort for a joke, for something that you think is going to be funny?
Whereas now there isn't any effort required. So the idea that these two things are similar because you're changing an image either way isn't a bad argument, but it does completely ignore the fact that the accessibility and the scale of these things is the issue at hand, not what's actually happening.
One major element of adaptation is just that everyone starts to do it, right? So it becomes normalized. But one of the results of the Instagram era in particular is that there are now arguably multiple generations of young women and increasingly young men who have body image and other mental health issues because of what they see on social media.
They see filtered photos, edited images, people's faces looking smoother and sharper than theirs ever could. Everyone's just constantly having to question if what they're seeing is in fact genuine or manipulated. Is that a form of adaptation that eventually enough people will learn to distrust what they see online that they stop trying to measure themselves against it?
Because the mental health issues that teens are now facing don't seem ignorable, and it feels like AI is only going to make them worse.
It's such a wishy-washy answer to give, but these things really exist in context, right? Girls and boys, men and women alike, have all these body image issues, and they want to be perceived on social media, by their enclosed audience, as being, in some way, the ideal version of themselves.
And I don't blame them for that, after being exposed to all these idealistic images over the last decade or so, ever since the advent of social media, especially Instagram, which is really, really bad for this. But it is effectively the same argument as the one we're having right now. It's how much of this are we willing to accept before it becomes problematic?
Because the more this landslides into a question of how much of this is actually reality, the more we can't trust the images in front of us. So I think it's aligned. It was almost symptomatic in a way.
Do you think that adaptation has happened though that people understand, oh, we should not believe what we see on social media? Because that's part of the argument here, that at some point people will stop believing what they see on the internet and that will be the adaptation and then this won't be a problem.
I think it's definitely happened. Like it's firmly happened already.
If any celebrity uploads a photo, you can go through the comments, something like, I don't know, the Kardashians, you can go through all of their stuff and you'll have people microanalyzing the background of their images, trying to find any kind of distortion to see whether they've made their waist slimmer or their bum bigger or whatever.
So people are already very, very heavily scrutinising the images that they're seeing in front of them. But I don't think they quite understand yet, on a big scale, the level of changes that can be applied, because at this point we've come to accept that body editing in these contexts is just something that people do. That people feel bad about themselves.
They might make their teeth whiter. They might make their face smaller. All of these kinds of things. I don't think people are going to rush to the realization, when they're looking at a picture, that the entire background might not be real. Or, again, if someone's sharing a viral image of something that's meant to be, I'm trying to think of something that's not going to be too controversial, but like an explosion in a bin or something that's going to stir up local news, they're not going to look at that immediately and go, that's fake. Because why would you? There has to be a narrative behind that. They understand the body side of it, so there is an adaptation there, definitely.
Do you think that adaptation is just resignation? We understand that people are going to put filters on their faces and on their bums and we're just going to be resigned to it and not believe anything. And now maybe we'll have fake explosions in trash cans.
I think so. Yeah. One of the most common phrases I keep seeing throughout this argument a lot is the cat is out of the bag. It's already out. You can't do anything about it. And it's very defeatist. It's almost like people half understand the scale of the issue and they can't see a future where there's going to be anything that happens about it.
So they've just discounted any kind of reality in front of them; any picture that they see online from this point is fake. Some people are already at that point, definitely, but I don't think a lot of people are there.
Especially older relatives, when we talk about people on Facebook, like nans or aunts and stuff, sharing obviously fake images as if they were real. There are the whole crab Jesus memes from early Midjourney and things like that. But there aren't enough people, I think, scrutinising the right things at this point.
They've only come to the understanding that we can change our physical appearances because they understand why they would do so. They can't really understand why people would use these tools for nefarious purposes, despite the fact that when you think about it for a second... It's quite easy and you wouldn't even have to be that evil about it.
I saw a picture that our colleague Chris Welch had made using the Reimagine tool on the Pixel 9. And I believe it was like a roach added to a takeaway or something, a takeout. Sorry, Britishism. And immediately I was like, that's going to probably cause some problems for small businesses, right? If I could just order some food, log a complaint and say, hey, look, you've added something to my food.
I want my money back. There are so many smaller-level scams and bad intentions that could be fulfilled using these tools, that people wouldn't have had the energy or the effort, or even the skill if we're being realistic, to carry out before, because it wasn't worth it. But if I can do that in five seconds, it's quite tempting for some people, I imagine. There's a slippery slope.
The other adaptation that you and I have talked about a lot, that our audience talks about a lot, that the industry has talked about a lot, is technical adaptation. So you make the edit in Reimagine on your Pixel. That gets flagged as being an AI edit. And then when you send it to someone, they know that they're not looking at reality. They're looking at some AI edit.
There's a variety of systems that are attempting to do this. The most important is called the Content Authenticity Initiative. It's run by a standards organization. You've reported on this. It doesn't seem to be doing anything. We've talked about it a lot, but now, as you say, the cat's out of the bag, and the systems to verify whether images are real or fake are not ready.
Are they close to being ready? Is there something that would stop them from shipping today alongside these image editors?
The way that I've had it explained is that the system itself is absolutely fine. The way that it's supposed to roll out is fine. It makes complete sense. The problem is that you need everyone on board for it to work, which is just completely unrealistic. You would need to get people with completely different ideologies, pro or anti generative AI, on board with saying: we're going to make this a robust identification system. And unless you have all of the camera makers, all of the editing software makers, and all of the online platforms, not just the social media ones but literally everywhere you would see an image, on board with this one system...
I don't think it's going to make a meaningful difference. And it doesn't really solve the issue that they do have at the minute, which is how to provide that information that an image is AI generated or just edited using AI in a concise and meaningful way without giving people a wall of text, which no one is going to read, right?
I'm not going to sit there and read a paragraph of what has gone into a picture on my friend's holiday snaps or something.
Meta tried this when they did their Made With AI labels, and it went pretty badly because they provided no context. Photographers who'd used a couple of what sounded like pretty basic tools in Adobe Photoshop that rely on a very standard version of generative AI, something like background removal or object select, got caught up in it.
The system was effectively flagging that and saying, hey, you've used gen AI, therefore we're going to tag your entire image as being made with this stuff. And it gives the wrong impression, because people see that word AI and they immediately think, OK, fake, this entire thing is fake. So there's no nuance in it.
That's a much more complicated problem to solve, alongside the existing issue of trying to get thousands of organisations and companies on board with adopting this one system. So it's not going to be a bulletproof solution. They know that as well.
The Content Authenticity Initiative has been fully transparent about this: they've said it might help, but it's not going to solve the problem. So we're in a bit of a bind at the minute. There are a lot of things that should have been put into place before AI got to this point, and now that we're playing catch-up, we're too slow.
And there's not a meaningful way to speed up this process at this moment.
I just want to stay on the idea that we can label these images. And I agree with you. Labeling the images is tricky because you have to decide when something is sufficiently AI-edited to append the label. And that is totally subjective. But let's say we can agree on that. Let's just set that aside. Where is this data stored? If I just take a screenshot, does it stick with the image?
It seems like we could solve all these hard problems and we still end up with a label that maybe a bad actor could just get rid of.
It's stored as far as I'm aware on their own independent database where they're trying to set up a process where you can independently check images. That's one avenue. The other avenue was supposed to be that you would access that information through the online platforms where you've already viewed the image that may or may not be fake, right? In terms of
actually manipulating that information, it's already been proven that you can do so, although there are safeguards in place. I've heard that if you screenshot an image using certain desktop software, that metadata can still carry over; there are systems that can recognise that what you're screenshotting carries the metadata and will carry some of it over.
But then if I were to take, I don't know, my phone out of my pocket and take a picture of something on my desktop computer, yes, the quality is going to look absolutely awful by that point. But none of that metadata is going to be present. And I still have a copy of that manipulation. So at that point, all of the data is just stripped out and there's nothing you can do.
Like physical watermarks as well. They've explored that, and we can remove those. Samsung's own tool removes that, if I remember correctly. It watermarks images that it makes with the text-to-image tool that they've put on Samsung devices, and you can just use the object erase tool to remove the watermark directly in the app that made it. So at the minute there isn't anything more robust. There's a constant kind of avenue that we go down where people are finding ways around it.
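To make the screenshot problem Jess describes concrete, here's a minimal Python sketch using the Pillow imaging library. The file names are hypothetical, and real Content Credentials are signed C2PA manifests rather than plain EXIF tags, but the failure mode is the same: anything stored alongside the pixels disappears the moment only the pixels are copied.

```python
# Minimal sketch: a pixel-only copy (the moral equivalent of a screenshot
# or re-photograph) keeps the picture but drops embedded metadata.
# File names are hypothetical placeholders.
from PIL import Image

original = Image.open("labeled_photo.jpg")
print("metadata keys:", list(original.info.keys()))  # e.g. exif, icc_profile
print("EXIF entries:", len(original.getexif()))

# Rebuild the image from raw pixel values alone.
pixels_only = Image.new(original.mode, original.size)
pixels_only.putdata(list(original.getdata()))
pixels_only.save("stripped_copy.jpg")

stripped = Image.open("stripped_copy.jpg")
print("metadata keys:", list(stripped.info.keys()))  # only JPEG format defaults
print("EXIF entries:", len(stripped.getexif()))      # 0: the provenance is gone
```

This is also why, as Jess notes above, the Content Authenticity Initiative keeps a copy of provenance records in its own database rather than relying only on what's embedded in the file.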
Adaptation is one thing. We will adapt. Things change all the time, for better or worse, and we adopt the new normal. But will that adaptation be good for everyone or even neutral? That's a much more challenging question.
We did adapt to a world of Photoshop and later Instagram filters and Facetune and so on, but the negative side effects are still playing out and deeply affecting the mental health of people everywhere. So should we now adapt to a world where every single teenage girl has to worry about her classmates making deep fake nudes of her?
Or a world in which you can't reasonably believe that any photo of a politician you see on social media hasn't been tampered with by AI? As a society, we decide all the time that some problems are too complex, too difficult, or too politically inconvenient to solve. So we live with them. And if something doesn't change, the effects of widespread AI image editing might just become one of those problems.
We'll be right back.
The Refinery at Domino is located in Williamsburg, Brooklyn, and it offers all the perks and amenities of a brand new building while being a landmark address that dates back to the mid-19th century. Its 15 floors of Class A modern office environment are housed within the original urban artifact, making it a unique experience for inhabitants as well as the wider community.
The building is outfitted with immersive interior gardens, a glass-domed penthouse lounge, and a world-class event space. The building is also home to a state-of-the-art equinox with a pool and spa, world-renowned restaurants, and exceptional retail. As New Yorkers return to the office, the Refinery at Domino can be more than a place to work.
It can be a magnetic hub fit to inspire your team's best ideas. Visit therefinery.nyc for a tour.
Support for this episode comes from Microsoft. Did you know one in 43 US children have had their personal information exposed or compromised? Scammers are targeting our kids online, especially on social media, where unmonitored conversations can easily lead to identity theft. We need better tools to protect our loved ones to stay ahead.
Thankfully, there's Microsoft Defender, all-in-one protection that can help keep our families safe when they're online. Microsoft Defender makes it easy to safeguard your family's data, identities, and privacy with a single security app across your devices. Take control of your family's security by helping to protect their personal info, computers, and phones from hackers and scammers.
Visit Microsoft365.com slash Defender.
This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands.
Sign up today for your $1 per month trial period at Shopify.com slash tech, all lowercase. That's Shopify.com slash tech.
We're back with Verge reporter Jess Weatherbed to talk about the last big argument in the "AI is just like Photoshop" debate: that legislation can solve this problem. This brings me to, I think, the biggest argument, right? These are big problems that you need a lot of people to agree on a solution to actually solve. They all have to implement the same standard or the same technology.
They all have to draw the same boundaries. The way that you typically solve that problem is by passing a law. You don't count on the USB Implementers Forum to solve every problem every time. A standards organization can only go so far, but if you want everybody to agree to do something, you usually need the state to say, okay, this is how we're going to do it. I hear this a lot, right?
That the government will solve some of these problems, or at least that European governments will set some standards that other people can copy, which is basically what's happening with a lot of other things right now. Is it possible that the government, or a government, could step in and say, OK, here's the law: no deepfakes of real people.
This is the metadata standard we're going to use, and you have to display these labels, with this language, everywhere, the way that we display, I don't know, nutrition labels on food.
You've got two main issues with that at the moment. One is speed. So even though Europe is pretty far ahead of this at the minute, they've already enacted laws which have been heavily scrutinized because of the second issue, which I'll get to in a moment.
But if we're talking about things like the US legal procedures, we could be waiting years for something to actually come into play that will take effect and rein in the bad actors that are using these apps for bad purposes. Because they're not inherently bad tools. You could be using them for something whimsical and absolutely innocent, but you have to separate the bad causes from the good causes.
And when you talk about things like deepfakes, you get onto the second issue, which is nuance. What do you consider a deepfake at this point? If I take a picture of myself on a Google Pixel and I use their face blurring tool, or I put my face onto my friend's body and I put that on Facebook, would that be counted as a deepfake if someone wanted to take me to court, or for any kind of legal fallout at that point? It becomes so granular an argument that it's really difficult to put on paper what should be restricted, because at that point you're effectively placing limitations on creativity, which is difficult to do.
And you've also got to worry about things like free speech around that, and all the existing laws that you have to unwind. So it's not an easy or fast process. And a couple of years in our time frame is a millennium in the development time that we've seen in the AI landscape. So we have no idea what could be happening with generative AI in two years at this point.
Again, two years ago, we were seeing pretty shoddy mashups being thrown together by these things. They weren't believable. They were interesting, they were pretty fun to play around with, but they weren't necessarily convincing.
And now we're at the point where you could spit something out on Grok on a social media platform itself and yeah, mislead thousands of people, millions of people if you wanted to. That's a completely separate thing that's happened.
Let's talk about Grok for a second. Grok is the AI system that is on X, which is owned by Elon Musk. xAI, which is Elon Musk's AI outfit, is behind Grok. They distribute it on X. You can just log into X and tell Grok to do all kinds of things, including generate photos of Elon Musk.
At some point, it feels like Elon Musk and Mark Zuckerberg and Sundar Pichai and such will see enough deepfakes of themselves. Sam Altman will see some sort of horrible deepfake of himself and decide enough is enough. Is that where we have to get to? The actual purveyors of the technology feel bad effects before they reel their own creations in?
Maybe, but that would dishearten me considerably, as there are significantly worse things happening than whatever Elon Musk could be framed to be doing. If we think about how a lot of these applications are already being used, there are a lot of celebrities, particularly female celebrities, that are being deepfaked, particularly in sexually graphic ways. And that was a big deal.
I can't remember how long ago this was, my memory eludes me, but there was that big hacking incident where people actually broke into celebrities' hard drives and stole nude selfies that they'd taken of themselves. And in that circumstance, there was a lot of vitriol spewed in their direction, at these celebrities, for taking those pictures in the first place.
Now that can happen completely nonconsensually, without them having done anything. People can just make them look nude and put them into very compromising positions online. So it would be pretty grim, I think, of all these people to ignore the fact that this is already happening. I'm pretty sure it has already happened to them as well. I haven't looked, because I'm not, you know, a complete degenerate.
But I think there is some argument to be made that a legal process is probably going to step in here, that this is going to cause enough problems for enough people on a smaller scale, maybe like the takeout thing that I described, that it's just going to increase the use of small-scale scams, fraud, and catfishing on platforms, and people are going to make enough of a ruckus to go: we're sick of this, and we're sick of not being able to trust anything that we're seeing in front of us anymore.
And you need to do something about it because everyone is just too easily manipulated at this point.
The reason I ask about tech CEOs is sometimes I think narcissism is a better regulator than anything. Not wanting the bad thing to happen to you, when you control the tool, might be faster than these other processes. And I'll bring it all the way back around to Photoshop. Fake nude images of celebrities have been around since the dawn of computers, since the dawn of Photoshop.
This is one of the first things men on the internet have done with the tools they've been given. And we didn't stop them. No one ever said: you shouldn't do this, this is illegal. When I see deepfakes, I say, well, this is bad because it's more accessible, because it's more damaging. You can do it to more people.
High schoolers are already doing it to their classmates, which I think is devastating and a real crisis. But when it happened to the celebrities last time, we didn't run around suing those websites out of existence. It continued to happen, and it continues to happen to this day. Are we back at: it's just the scale that makes it different, and so this time we should actually do something about it?
Or am I still betting on a bunch of narcissists that will stop it before it happens to them again?
The narcissism thing could absolutely work in this context.
I think the thing that they have to fight against is the scale thing, inherently. Even if something devastating happens to them, they are in the same boat at that point that everyone else is in, which is that if you wanted to Photoshop one of these images before, you could maybe, if you were a very skilled photo editor, do so in 20 or 30 minutes.
And you would have to be very skilled to do so. If not, and you were just some regular Joe wanting to do something, again, malicious, or to get money off of these weird dark web porn sites or something, you would be able to do that on a much smaller scale and in a much more contained way.
So if you went through photo filtering systems to find these images and get them removed from the Internet, that was a much easier process. Now, there aren't necessarily tools from Meta or something that are churning these out. Stuff definitely slips through their guardrails. But there are generative AI systems out there that have no such guardrails.
They've just been built using all the existing technology that's already available, and they can spit out thousands of these images if they want to. If you go looking for, again, nude deepfakes of celebrities or, God forbid, really illegal stuff, we're talking children involved and all this kind of thing, there are violent and gory images and all sorts of things out there. The sheer scale of it makes them incredibly difficult to remove and to track.
And then, because everything is online at this point, no one is using their real identities. How do you find these people to prosecute them? And how can you expect a platform to reasonably police this if they've proven in some way in court that they have actually used all of the tools at their current disposal, which are inadequate, and we know they are inadequate, to address the situation?
So it's kind of almost like we're just going around in circles at this point. The technology is getting progressively better, and the tools that we have to address it are not.
Coming through this conversation, it feels like one answer is we should just not let these tools exist. We kind of don't know how to deal with the scale of the problem that they will introduce. It's already happening. Maybe we can't stop it. Maybe it's already too late. But we could just say, OK, this was a weird run. We're just going to shut this down.
We're not going to allow these image generators to happen. There's a flip side to that, which is they have to provide some value, right? Is that balance real to you? Photoshop and image editors in general created a lot of problems, but the value they created for so many people was real. And I think we, on balance, have said, OK, well, the harms are the harms. The value is real.
And that's how you end up with it's just like Photoshop. Are we in the same kind of dynamic with generative AI image editing?
I can definitely see the pros. I'm not poo-pooing the idea of generative AI, saying that it should all be burned to the ground and scrapped forever. And I don't think that's even possible anyway, because someone will rebuild these systems. You're going to get them cropping up, right?
If Photoshop disappeared tomorrow, there are a million other photo editing tools that could replace it if you try hard enough. The bigger point is that it is doing some good: it's definitely helping efficiency, and there are applications for it that are beneficial. I think the Magic Eraser tool that was on, well, that is on Google Pixel phones is still a fantastic thing.
There are things in the background of photos that you don't want to have there. And getting rid of that isn't something that you should feel guilty about, or that's necessarily going to cause problems in the vast majority of situations. So there are definitely use cases where these tools are useful, especially for big industries and stuff.
I personally don't like the argument that it boosts efficiency for creatives, because a lot of creators will look at these things and go: but it's taking the creativity out of what I do. It's doing it for me. But if, at the end of the day, all you want is a marketing picture to slap on an ad, and then maybe to change a couple of backgrounds or something, yeah, this is helpful.
It can do that a lot quicker than a human can. Absolutely. So there's financial benefits to it completely. Yeah.
Creatively, I think that's where we're going to have some issues, and then that leads on to the free speech argument and everything else, and it all landslides from there. So I have trouble at the minute trying to find whether there is an adequate balance, because every application of it I've seen that's affected me as a person has just made me want to get rid of it.
Every time I'm on Pinterest and I want to find a picture of haircut examples or something, drawing references, I look hard enough and I'm like, oh, for God's sake, this is generative AI and it's all unrealistic. And I've only just noticed after looking at it for three seconds. There's not even an effective way to filter that stuff out at the minute, because we can't identify these images.
So for me, it's making my online experience definitely worse. And I know that's the same for a lot of people that do just like to enjoy the Internet at most times.
Could you do the flip? I think about this all the time. It's very hard to label the AI-generated image and prove that it's fake or show what's been edited or have people read the wall of text explaining the manipulations that have been done.
Would it be easier to label the real photos and say these ones are real and they haven't been touched and the second you touch them, the flag that says they're real goes away?
I think it would be an easier process, and one that people would be less likely to tamper with. The Content Authenticity Initiative already does this with the exact same system it's using to try and identify generative AI images, right? And if we're talking about press images, that is really useful.
So there are certain Sony cameras and I believe a Leica one as well that will automatically record that at the moment you take a picture. And if you were then to upload it to something like Getty or... There's some big media agencies as well, like I believe the New York Times has this.
They will automatically register those flags and be like: cool, we've noticed that you haven't tampered with this. We're going to publish this as a documentary image to prove that something has happened and that it has been untampered with. The difficulty then is that there aren't really online platforms doing that.
Everyone's focused on trying to identify which images are fake and not which ones are real. And then when you get to the online platform situation, you get the general public involved. And the general public is where people are going to start messing with things because... They have different goals. They might have some strange biases and want to manipulate or mislead people intentionally.
So there's, I think, an argument to be made here that the preservation of the relationship between photography and journalism is going to be really important going forward because that is a much smaller technological bridge to keep moderated.
If you have an established relationship between the people documenting these things with photos and the media agencies reporting them as news, that is a much smaller space in which to police whether something is real. But yeah, as soon as you get any kind of open source thing involved, it's just going to turn into a mess.
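The capture-time signing Jess describes here reduces, at its core, to ordinary public-key cryptography. Below is a toy Python sketch of the idea, using the cryptography library. It's a deliberately simplified illustration, not how Content Credentials actually work: real C2PA implementations sign a structured manifest with certified hardware keys, whereas this version just signs a bare hash of the image bytes.

```python
# Toy sketch of capture-time photo signing, loosely in the spirit of
# Content Credentials. A real camera signs a structured C2PA manifest
# with a factory-provisioned key in secure hardware; this version just
# signs a SHA-256 digest of the raw image bytes.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()    # would live inside the camera
camera_public_key = camera_key.public_key()  # published by the manufacturer

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Camera side: sign the image digest the moment the shutter fires."""
    return camera_key.sign(hashlib.sha256(image_bytes).digest())

def verify_untampered(image_bytes: bytes, signature: bytes) -> bool:
    """Publisher side: True only if the bytes match what the camera signed."""
    try:
        camera_public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."             # stand-in for a real file
sig = sign_at_capture(photo)
print(verify_untampered(photo, sig))         # True: untouched since capture
print(verify_untampered(photo + b"x", sig))  # False: any edit breaks the flag
```

Note how this matches the trade-off raised earlier: the "real" flag is brittle by design, so the second you touch the image, even with an innocent crop, the signature stops verifying, and the hard problems become key distribution and getting platforms to check at all.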
All right, so we've gone through many of the big arguments in this debate. We have talked through the nuances, and we have discussed how we might try to solve some of these problems.
And I'll say, some of the things we've talked about really do echo what people have been saying since Photoshop and other digital image editing tools have existed, especially as they have evolved into the kinds of filters we see on social media. But it's the scale of AI image editing and image generation tools that really takes us to a whole new level.
And the power and the speed of the technology is far outpacing any of the safeguards that might conceivably stop it.
Like I said at the beginning, I think the reason people say it's just like Photoshop is to diminish the very real differences and very real problems posed by AI image editing, to avoid thinking about them, or to make it seem like technology is forever unstoppable, or to assume someone else will solve the problem. I don't think that's tenable, which is why we spend so much time talking about it.
Jess, thank you for coming on Decoder.
Thanks for having me. I had a great time talking about this, and I'm very curious to see where it's all going to go.
I'd like to thank Jess Weatherbed for taking the time to join Decoder, and thank you for listening. I hope you enjoyed it. If you have thoughts about this episode or what you'd like to hear more of, you can email us at decoder@theverge.com. We really do read all the emails. Or you can hit me up directly on Threads. I'm @reckless1280. We also have a TikTok. Check it out. It's @decoderpod.
If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. If you really like the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James.
The Decoder music is by Breakmaster Cylinder. We'll see you next time.