Ben Zhao

Appearances

Freakonomics Radio

619. How to Poison the A.I. Machine

1027.317

Folks who are old enough remember that it was a free-for-all. People could just share whatever they wanted. And of course, there were questions of legality and copyright violations and so on. But back then, it was very, very different from what it is today. Those with the power and the money and the control were the copyright holders. So the outcome was very clear.

1068.221

Exactly. You had armies of lawyers. When you consider that sort of situation and how it is now, it's the complete polar opposite, meaning it's the bad guys who have all the lawyers.

1079.921

Well, I wouldn't say necessarily bad guys, but certainly the folks who in many cases are pushing profit motives that perhaps bring harm to less represented minorities who don't have the agency, who don't have the money to hire their own lawyers and who can't defend themselves.

1125.109

These companies are basically exploiting the fact that we know lawsuits and enforcement of new laws are going to take years. And so the idea is, let's take advantage of this time. And before these things catch up, we're already going to be established. We already are going to be essential and we already are going to be making billions.

1143.283

And then we'll worry about the legal costs, because really, to many of them, the legal costs and the penalties involved, even billions of dollars, are a drop in the bucket.

1248.663

We will actually generate a nice-looking cow with nothing particularly distracting in the background. And the cow is staring you right in the face.

1295.049

Glaze is all about how do we protect individual artists so that a third party does not mimic them using some local model. It's much less about these model training companies than it is about individual users who say, gosh, I like so-and-so's art, but I don't want to pay them. So in fact, what I'll do is I'll take my local copy of a model, I'll fine-tune it on that artist's artwork...

1320.672

...and then have that model try to mimic them and their style, so that I can ask a model to output artistic works that look like human art from that artist, except I don't have to pay them anything.

1333.817

What it does is it takes images and alters them in such a way that they basically look the same, but to a particular AI model that's trying to train on this, what it sees are visual features that actually associate the image with something entirely different.
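
A minimal sketch of that perturbation idea, for the technically curious. This is not the released Glaze or Nightshade code; `extractor` stands in for whatever image encoder a model trainer might use, and every parameter value here is an illustrative assumption.

```python
# Hedged sketch of feature-space cloaking, NOT the actual Glaze/Nightshade
# implementation: learn a small pixel change `delta` so that an encoder
# reads the image as a different concept, while the change stays small.
import torch
import torch.nn.functional as F

def cloak(image, decoy, extractor, budget=0.03, steps=200, lr=0.01):
    """image, decoy: float tensors in [0, 1] with shape (1, 3, H, W)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target = extractor(decoy)              # features of the decoy concept
    for _ in range(steps):
        opt.zero_grad()
        feats = extractor((image + delta).clamp(0, 1))
        loss = F.mse_loss(feats, target)       # pull features toward the decoy
        loss = loss + 100.0 * F.relu(delta.abs().max() - budget)  # cap visible change
        loss.backward()
        opt.step()
    return (image + delta).clamp(0, 1).detach()  # still looks like the original
```

To a human the output is nearly unchanged; to the encoder, its features now sit near the decoy's.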

1350.911

For example, you can take an image of a cow eating grass in a field, and if you apply Nightshade to it, perhaps that image instead teaches not so much the bovine cow features, but the features of a 1940s pickup truck.

1368.217

What happens then is that as that image goes into the training process, that label of "this is a cow" will become associated with the wrong features in the model that's trying to learn what a cow looks like. It's going to read this image, and in its own language, that image is going to tell it that a cow has four wheels. A cow has a big hood and a fender and a trunk.
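
To make the cow-to-truck example concrete, here is a hedged sketch of how such a poison sample could be assembled, reusing the `cloak` sketch above. The caption string and function name are illustrative, not the released tool's interface.

```python
# A Nightshade-style poison pair keeps its honest "cow" caption, but its
# pixels have been nudged so an encoder reads truck-like features. Scraped
# and auto-labeled, such pairs teach a model that "cow" means truck shapes.
def make_poison_pair(cow_image, truck_image, extractor):
    poisoned = cloak(cow_image, truck_image, extractor)   # from the sketch above
    return poisoned, "a cow eating grass in a field"      # the label stays "cow"
```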

1392.538

Nightshade images tend to be much more potent than usual images, so that even when a model has just seen a few hundred of them, it is willing to throw away everything that it has learned from the hundreds of thousands of other images of cows and declare that its understanding has now adapted to this new one: that in fact cows have a shiny bumper and four wheels.

1415.466

Once that has happened, when someone asks the model, give me a cow eating grass, the model might generate a car with a pile of hay on top.

1440.549

There's a couple of parameters about intensity, how strongly you want to change the image. You set the parameters, you hit go, and out comes an image that may look a little bit different. Sometimes there are tiny little artifacts that you'll see if you blow it up.

1453.233

But in general, it basically looks like your old image, except with these tiny little tweaks everywhere in such a way that the AI model, when it sees it, will see something entirely different.

1477.574

The concept of poisoning is that you are trying to convince the model that's training on these images that something looks like something else entirely, right? So we're trying, for example, to convince a particular model that a cow has four tires and a bumper. But in order for that to happen, you need numbers. You don't need millions of images to convince it, but you need a few hundred.

1499.914

And of course, the more, the merrier. And so you want everybody who uses Nightshade around the world, whether they're photographers or illustrators or graphic artists, you want them all to have the same effect.

1512.111

So whenever someone paints a picture of a cow, takes a photo of a cow, draws an illustration of a cow, draws a clip art of a cow, you want all those Nightshade effects to be consistent in their target. In order to do that, we have to take control of what the target actually is ourselves, inside the software.

1531.862

If you gave users that level of control, then chances are people would choose very different things. Some people might say, I want my cow to be a cat. I want my cow to be the sun rising. If you were to do that, the poison would not be as strong.

1579.716

You probably won't see the effects of Nightshade. If you did see it in the wild, it would be models giving you wrong answers to things that you're asking for. But the people who are creating these models are not foolish. They are highly trained professionals. So they're going to run lots of testing on any of these models.

1596.73

We would expect that the effects of Nightshade would actually be detected in the model training process. It'll become a nuisance. And perhaps what really will happen is that certain versions of models, post-training, will be found to have certain failures inside them. And perhaps they'll have to roll them back.

1615.43

So I think really that's more likely to cause delays, and more likely to cause the costs of these model training processes to go up. The AI companies really have to work on millions, potentially billions of images. So it's not necessarily the fact that they can't detect Nightshade on a particular image.

1635.429

It's the question of: can they detect Nightshade on a billion images in a split second, with minimal cost? Because any one of those factors going up significantly will mean that their operation becomes much, much more expensive. And perhaps it is time to say, well, maybe we'll license artists and get them to give us legitimate images that won't have these questionable things inside them.
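
As a back-of-envelope illustration of that scaling argument (the per-image figures below are invented purely to show the arithmetic, not measured numbers for any real detector):

```python
# Invented numbers, only to show how per-image detection cost scales.
images = 1_000_000_000                 # a billion scraped images
seconds_each = 0.01                    # assumed detector time per image
dollars_each = 0.0001                  # assumed compute cost per image
print(images * seconds_each / 86_400)  # ~115.7 machine-days per full pass
print(images * dollars_each)           # 100000.0 dollars per full pass
```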

1672.26

Yeah. I mean, really, it boils down to that. I came into it not so much thinking about economics as I was just... seeing people that I respected and had affinity for be severely harmed by some of this technology. In whatever way that they can be protected, that's ultimately the goal.

1690.059

In that scenario, the outcome would be licensing so that they can actually maintain a livelihood and maintain the vibrancy of that industry.

1711.16

Yes. Yes, of course. Colleagues and former students in that space. And how do they feel about Ben Zhao? It's quite interesting, really. I go to conferences, same as I usually do, and many people resonate with what we're trying to do. We've gotten a bunch of awards and such from the community.

1728.49

As far as folks who are actually employed by some of these companies, some of them, I have to say, appreciate our work. They may or may not have the agency to publicly speak about it, but there are lots of private conversations where people are very excited. I will say that, yeah, there have been some cooling effects; it's burned bridges with some people. I think it really comes down to how you see your priorities.

1751.053

It's not so much about where employment lies, but it really is about how you personally see the value of technology versus the value of people. And oftentimes it's a very binary decision. People tend to go one way or the other rather hard. I think most of these bigger decisions, acquisitions, strategy and whatnot, are largely in the hands of executives way up top.

1774.629

These are massive corporations, and many people are very much aware of some of the stakes and perhaps might disagree with some of the technological stances that are being taken. But everybody has to make a living. Big tech is one of the best ways to make a living. Obviously, they compensate people very well. I would say there's a lot of pressure there as well.

1795.822

We just had that recent news item that the young whistleblower from OpenAI tragically passed away.

1824.932

Whistleblowers like that are incredibly rare, because of the risk that you're taking on when you publicly speak out against your former employer. That is tremendous courage. That is an unbelievable act. It's a lot to ask.

1902.637

Yeah, what a great question. I mean, it may not be surprising, but as a computer science professor, I actually have these kinds of conversations relatively often. This past quarter, I taught many second-year and third-year computer science majors, and many of them came up to me in office hours and asked very similar kinds of questions.

1921.149

They said, look, I really want to push back on some of these harms. On the other hand, look at these job opportunities. Here's this great golden ticket to the future, and what can you do? It's fascinating.

1932.859

I don't blame them for whatever decision they make, but I applaud them for even being aware of some of the issues that I think many in the media and many in Silicon Valley certainly have trouble recognizing. There is a level of ground truth underneath all this, which is that these models are limited. There is an exceptional level of hype, like we've never seen before.

1955.185

That bubble is in many ways in the middle of bursting right now. Why do you say that? There have been many papers published on the fact that these generative AI models are just about at the end of their training data. To get better, you need something like double the amount of data that has ever been created by humanity.

1974.497

And you're not going to get that by buying Twitter or by licensing from Reddit or the New York Times or anywhere. You've now seen recent reports about how Google and OpenAI are having trouble improving upon their models. That's common sense. They're running out of data, and no amount of scraping or licensing will fix that.

200.424

They take decades to hone their skill. So when that's taken against their will, that is sort of identity theft.

2020.865

And then, of course, there's just the fact that there are very few legitimate revenue-generating applications that will even come close to compensating for the amount of investment that VCs and these companies are pouring in. Obviously, I'm biased, doing what I do, but I've thought about this problem for quite some time. And honestly, these are great interpolation machines.

2042.289

These are great mimicry machines, but there's only so many things that you can do with them. They are not going to produce entire movies, entire TV shows, entire books at anywhere near the value that humans will actually want to consume.

2055.679

And so, yeah, they can disrupt and they can bring down the value of a bunch of industries, but they are not going to actually generate much revenue in and of themselves. I see that bubble bursting. And so what I say to these students oftentimes is that things will take their course and you don't need to push back actively. All you need to do is to not get swept along with the hype.

2075.957

When the tide turns, you will be well positioned. You will be better positioned than most to come out of it having a clear head, able to go back to the fundamentals of why you went to school, why you went to the University of Chicago, and all the education that you've undergone, and to use your human mind, because it will be shown that humans will be better than AI will ever pretend to be.

217.552

There is an exceptional level of hype. That bubble is in many ways in the middle of bursting right now.

2190.791

Art is interesting when it has intention, when there's meaning and context. So when AI tries to replace that, it has no context and meaning. Art replicated by AI, generally speaking, loses the point. It is not about automation. I think that is a mistaken analogy that people oftentimes bring up. They say, well, you know, what about the horse and buggy and the automobile?

2213.667

No, this is actually not about that at all. AI does not reproduce human art at a faster rate. What AI does is it takes past samples of human art, shakes it in a kaleidoscope, and gives you a mixture of what has already existed before.

2296.644

What's interesting about computer security is that it's not necessarily about numbers. If it's a brute-force attack, I can run through all your PINs, and it doesn't matter how ingenious they are, I will eventually come up with the right one. But in many instances, it is not about brute force and resource riches. So yeah, I am hopeful.
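
A toy version of the brute-force point, with a made-up PIN:

```python
# A 4-digit PIN has only 10**4 candidates, so enumeration always finds it,
# no matter how cleverly it was chosen. `secret_pin` is a made-up stand-in.
secret_pin = "7294"
guess = next(p for p in (f"{n:04d}" for n in range(10_000)) if p == secret_pin)
assert guess == secret_pin
```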

2316.693

We're looking at vulnerabilities that we consider to be fundamental in some of these models, and we're using them to slow down the machine. I don't necessarily wake up in the morning thinking, oh, yeah, I'm going to topple OpenAI or Google or anything like that. That's not necessarily the goal. I see this as more of a process in motion, this hype process.

2337.813

It is a storm that will eventually blow over. And how I see my role in this is not so much to necessarily stop the storm. I'm more, if you will, a giant umbrella. I'm trying to cover as many people as possible and shield them from the short-term harm.

2377.188

I would actually disagree, but that's OK. We can have that discussion, right? Look, you're the guy that knows stuff. I'm just asking the questions. I don't know anything about this. No, no. I think this is a great conversation to have because back in 2022 or early 2023, when I used to talk to journalists, the conversation was very, very different. The conversation was always, when is AGI coming?

2397.723

You know, what industries will be completely useless in a year or two? It was never the question of, like, are we going to get return on investment for these billions and trillions of dollars? Are these applications going to be legit? So even in the year and a half since then, the conversation has changed materially, because the truth has come out.

2417.428

These models are actually having trouble generating any sort of realistic value. I'm not saying that they're completely useless. There are certain scientific applications or daily applications where it is handy, but it is far, far less than what people had hoped them to be. And so, yeah, you know, why do I believe it? Part of this is hubris. I've been a professor for 20 years.

2438.096

I've been trained or I've been training myself to believe in myself in a way. Another answer to this question is that it really is irrelevant because the harms are happening to people in real time. And so it's not about will we eventually win or will this happen eventually in the end? It's the fact that people's lives are being affected on a daily basis.

2459.185

And if I can make a difference in that, then that is worthwhile in and of itself, regardless of the outcome.

2522.127

Very interesting. Okay, let me unpack that a little bit there. The thing that allows me to do the kind of work that I do now is, I recognize, quite a privilege: the position of being a senior, tenured professor. And honestly, I don't have many of the pressures that some of my younger colleagues do.

2550.522

No, I mean, all of our grants are quite public. And I'm pretty sure that I'm not the most well-funded professor in the department, but I run a pretty regular lab. We write a few grants, but it's nothing earth-shaking. It's just what we turn our time towards. That's all. There's very little that drives me these days outside of just wanting my students to succeed.

2573.174

I don't have the pressures of needing to establish a reputation or explain to colleagues who I am and why I do what I do. So in that sense, I almost don't care. In terms of self-interest, none of these products have any... money attached to them in any way, shape or form. And I've tried very, very hard to keep it that way. There's no startup. There's no hidden profit motive or revenue here.

2597.864

So that simplifies things for me.

2608.209

No. The university always encourages entrepreneurship. They always encourage licensing, but they certainly have no control over what we do or don't do with our technology. This is sort of the economic reality of academic research. We as a lab have a stream of PhD students who come through and we train them. They do research along the way, and then they graduate, and then they leave.

2630.11

For things like Fawkes, where, you know, this was the idea. Here's the tool. Here's some code. We put that out there. But ultimately, we don't expect to be maintaining that software for years to come. We just don't have the resources. That sounds like a shame, if you come up with a good tool.

2645.635

Well, the idea behind academic research is always that if you have the good ideas and you demonstrate them, then someone else will carry them across the finish line, whether that's a startup or a research lab elsewhere. But somebody with resources who sees that need and understands it will go ahead and produce that physical tool or make that software and actually maintain it.

2690.828

You know, at a high level, I think that's great. I think if we get to that point, that will be a very welcome problem to have. We are in the process of exploring perhaps what a nonprofit organization would look like, because that would sort of make some of these questions transparent.

2708.371

Well, yeah, very different type of nonprofit, I would argue. I'm more interested in being just the first person to walk down a particular path and encouraging others to follow. So I would love it if we were not the only technology in the space. Every time I see one of these other research papers that works to protect human creatives, I applaud all that.

2729.032

In order for AI and human creativity to coexist in the future, they have to have a complementary relationship. And what that really means is that AI needs human work product, images or text, in order to survive. So they need humans, and humans really need to be compensated for the work that they're producing.

2748.842

Otherwise, if human artistry dies out, then AI will die out because they're going to have nothing new to learn on and they're just going to get stale and fall apart.

2772.819

Over the last couple of years, I've been practicing lots of fun analogies. Barbed wire is one; the large Doberman in your backyard is another. One particularly funny one is the hot sauce that you put on your lunch. So if that unscrupulous coworker steals your lunch repeatedly, they get a tummy ache. But wait a minute, you have to eat your lunch too. That doesn't sound very good.

2792.256

Well, you know, you eat the portion that you know is good and then you leave out some stuff that... Got it.

2809.541

Boy, that's a bit of a loaded question because honestly, we don't know. It really comes down to how these models are being used. Ultimately, I think what people want is creative content that's crafted by humans.

2822.825

In that sense, the fair system would be generative AI systems that stay out of the creative domain, that continue to let human creatives do what they do best, to create truly imaginative ideas and visuals, and then use generative AI for domains where it is more reasonable. For example, conversational chatbots seem like a reasonable use for them, as long as they don't hallucinate.

2873.576

Certainly, I know what it's not because I'm not an artist, not particularly artistic. Some people can say there's an inkling of creativity in what we do, but it's not nearly the same. I guess what I will say is creativity is inspiring. Artists are inspiring. Whenever I think back to what I know of art and how I appreciate art, I think back to college.

2897.934

You know, I went to Yale, and I remember many cold Saturday mornings. I would walk out, and there's piles of snow, and everything would be super quiet, and I would take a short walk over to the Yale Art Gallery, and it was amazing. I would be able to wander through halls of masterpieces. Nobody there except me and maybe a couple of security guards.

2921.195

It's always been inspiring to me how people can see the world so differently through the same eyes, through the same physical mechanism. That is how I get a lot of my research done, is I try to see the world differently, and it gives me ideas.

293.154

We call it the SAND Lab. Which stands for? Security, Algorithms, Networking, and Data. Most of the work that we do has been to use technology for good, to limit the harms of abuses and attacks and protect human beings and their values, whether it's personal privacy or security or data or your identity.

2938.423

So when I meet artists and when I talk to artists to see what they can do, to see the imagination that they have at their disposal that I see nowhere else, you know... Creativity, it's the best of humanity. What else is there?

320.106

It's really quite anticlimactic. We've had some TV crews come by and they're always expecting some sort of secret lair. And then they walk in, it's a bunch of cubicles. Our students all have standing desks. The only wrinkle is that I'm at one of the standing desks in the room. I don't usually sit in my office. I sit next to them a couple of cubicles over so that they don't

340.959

get paranoid about me watching their screen.

351.082

Well, there's only a handful of students in my lab to begin with. So all hands on deck is like, what, seven or eight PhD students plus us. Typically speaking, the projects are a little bit smaller, just because we've got multiple projects going on. And so people are partitioning their attention and work energy across different things.

393.833

Adversarial machine learning is shorthand for this interesting research area at the intersection of computer security and machine learning: anything to do with attacks, defenses, privacy concerns, surveillance, all these subtopics as related to machine learning and AI. That's what I've been working on mostly for the last decade.

416.05

For more than two years, we've been focused on how the misuse and abuse of these AI tools can harm real people and trying to build research tools and technology tools to try to reduce some of that harm. To protect regular citizens and, in particular, human creatives like artists and writers.

452.156

So that's from my D&D days. It's a fun little project. We had done prior work in ultrasonics and modulation effects, looking at different microphones and how they react to different frequencies of sound. One of the effects that people have been observing is that you can make microphones vibrate at a frequency that they don't want to.

477.609

We figured out that we could build a set of little transducers. You can imagine a fat bracelet, sort of a cyberpunk kind of thing, with, I think, 24 or 12 (I forget the exact number) little transducers that are hooked onto the bracelet like gemstones.

501.075

Well, hey, you got to do what you got to do, and hopefully other people will make it much smaller, right? We're not in the production business. What it does is basically it radiates a carefully tuned pair of ultrasonic pulses in such a way that commodity microphones anywhere within reach will, against their will, begin to vibrate at a normal audible frequency.

522.262

They basically generate the sound that's necessary to jam themselves. When we first came out with this thing, a lot of people were very excited: privacy advocates, public figures who were very concerned, not necessarily about their own Alexa, but about the fact that they had to walk into public places all the time.
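
The self-jamming effect can be sketched numerically. This toy model uses assumed frequencies and an assumed nonlinearity, not the bracelet's actual design:

```python
# Two inaudible ultrasonic tones hit a microphone whose response is slightly
# nonlinear; the squared term demodulates their difference frequency (1 kHz
# here) straight into the audible band, so the mic jams itself.
import numpy as np

fs = 96_000                                     # sample rate in Hz
t = np.arange(fs) / fs                          # one second of samples
ultrasound = np.sin(2*np.pi*25_000*t) + np.sin(2*np.pi*26_000*t)
mic_output = ultrasound + 0.1 * ultrasound**2   # toy model of mic nonlinearity
# mic_output now contains a cos(2*pi*1_000*t) term: audible jamming noise.
```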

538.806

You're really trying to prevent that hidden microphone eavesdropping on a private conversation.

550.909

Fawkes is a fun one. In 2019, I was brainstorming about some dangers that we face in the future. And this is not even generative AI. This is just sort of classification and facial recognition. One of the things that we came up with was this idea that AI is going to be everywhere, and therefore anyone can train any model, and therefore people can basically train models of you.

572.569

At the time, it was not about deepfakes. It was about surveillance. And what would happen if people just went online, took your entire internet footprint, which of course today is massive, scraped all your photos from Facebook and Instagram and LinkedIn, and then built this incredibly accurate facial recognition model of you, without your knowledge, much less permission.

592.22

And we built this tool that basically allows you to alter your selfies, your photos, in such a way that it makes you look more like someone else than yourself.

609.031

Only in the version when it's being used to build a model against you. But the funny part was that we built this technology, we wrote the paper, and in the week of submission, this was 2020, we were getting ready to submit that paper. I remember it distinctly. That was when Kashmir Hill at the New York Times came out with her story on Clearview AI.

630.029

And that was just mind-blowing because I had been talking to our students for months about having to build for this dark scenario. And literally, here's the New York Times saying, yeah, this is today and we are already in it. That was disturbing on many fronts, but it did make writing the paper a lot easier. We just cited the New York Times article and said, here it is already.

649.887

Clearview AI is funded how? It was a private company. I think it's still private now. It's gone through some ups and downs. Since the New York Times article, they've had to change their revenue stream. They no longer take third-party customers. Now they only work with government and law enforcement.

675.448

Fawkes was designed as a research paper, an algorithm, but we did produce a little app. I think it went over a million downloads. We stopped keeping track of it, but we still have a mailing list, and that mailing list is actually how some artists reach out.

759.182

These companies will go out and they'll run scrapers, little tools that go online and basically suck up any semblance of imagery, especially high quality imagery from online websites.

784.7

It would download those images and run them through an image classifier to generate some set of labels, and then take those pairs of images and labels and feed them into the training pipeline of some text-to-image model.
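
That pipeline looks roughly like the sketch below; `caption_model` is an assumed image-to-text labeler, not any specific company's system.

```python
# Hedged sketch of the scrape, auto-label, train data pipeline described above.
import io
import requests
from PIL import Image

def build_training_pairs(urls, caption_model):
    pairs = []
    for url in urls:
        raw = requests.get(url, timeout=10).content  # scrape the image bytes
        image = Image.open(io.BytesIO(raw))
        text = caption_model(image)                  # auto-generated label/caption
        pairs.append((image, text))                  # (image, text) training pair
    return pairs                                     # feeds a text-to-image trainer
```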

806.753

How meaningful is that? Well, opting out assumes a lot of things. It assumes benign acquiescence from the technology makers. Benign acquiescence, meaning they have to actually do what they say they're going to do? Yeah, exactly. Opting out is toothless because you can't prove it in the machine learning business.

824.968

Even if someone completely went against their word and said, okay, here's my opt-out list, and then immediately trained on all their content, you just lack the technology to prove it. And so what's to stop someone from basically going back on their word when we're talking about billions of dollars at stake? Really, you're hoping and praying that someone's being nice to you.

857.684

A big part of their misuse is when they assume the identity of others. So this idea of right of publicity, the idea that we own our faces, our voices, our identity, our skills and work product, that is very much at the core of how we define ourselves. For artists, it's the fact that they take decades to hone their skill and to become known for a particular style.

881.136

So when that's taken against their will without their permission, that is a type of identity theft, if you will.

892.489

Right now, many of these models are being used to replace human creatives. If you look at some of the movie studios, the gaming studios, or publishing houses, artists and teams of artists are being laid off.

904.435

One or two remaining artists are being told: here, you have a budget, here's Midjourney, I want you to use your artistic vision and skill to basically craft these AI images to replace the work product of the entire team that's now been laid off.

925.724

Poison is sort of a technical term in the research community. Basically, it means manipulating training data in such a way as to get AI models to do something perhaps unexpected, perhaps more aligned with your goals than with what the original trainers intended.

946.812

Glaze is all about making it harder to target and mimic individual artists. Nightshade is a little bit more far-reaching. Its goal is primarily to make training on internet-scraped data more expensive than it is now...

963.835

...perhaps more expensive than actually licensing legitimate data. Ultimately, our hope is that this would push some of these AI companies to seek out legitimate licensing deals with artists, so that they can be properly compensated.

986.094

We're talking about companies and stakeholders who have trillions in market cap, the richest companies on the planet, by definition. So that completely changes the game.

996.942

It means that when they want things to go a certain way, whether it's lobbyists on Capitol Hill, whether it's media control, inundating journalists, and running ginormous national expos and trade shows of whatever they want, nothing is off limits. That completely changes the power dynamics of what you're talking about. The closest analogy I can draw on is the early 2000s, when we had music piracy.
