Lex Fridman Podcast

#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

Mon, 18 Mar 2024

Description

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

Please support this podcast by checking out our sponsors:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

Transcript: https://lexfridman.com/sam-altman-2-transcript

EPISODE LINKS:
Sam's X: https://x.com/sama
Sam's Blog: https://blog.samaltman.com/
OpenAI's X: https://x.com/OpenAI
OpenAI's Website: https://openai.com
ChatGPT Website: https://chat.openai.com/
Sora Website: https://openai.com/sora
GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:51) - OpenAI board saga
(25:17) - Ilya Sutskever
(31:26) - Elon Musk lawsuit
(41:18) - Sora
(51:09) - GPT-4
(1:02:18) - Memory & privacy
(1:09:22) - Q*
(1:12:58) - GPT-5
(1:16:13) - $7 trillion of compute
(1:24:22) - Google and Gemini
(1:35:26) - Leap to GPT-5
(1:39:10) - AGI
(1:57:44) - Aliens

Transcription

0.129 - 25.23 Lex Fridman

The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very company that will build AGI. And now a quick few-second mention of each sponsor. Check them out in the description. It's the best way to support this podcast.

25.65 - 40.837 Lex Fridman

We got a new sponsor, Cloaked, for protecting your personal information. Shopify for selling stuff online. BetterHelp for helping out your mind. And ExpressVPN for protecting your privacy and security.

41.757 - 64.595 Lex Fridman

on the interwebs. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring, or if you just want to get in touch with me, go to lexfridman.com/contact. And now on to the full ad reads. As always, no ads in the middle. I try to make this interesting, but if you must skip them, friends, please do check out our sponsors. I enjoy their stuff. Maybe you will too.

66.343 - 88.174 Lex Fridman

This episode is brought to you by Cloaked, a sponsor I didn't know existed until quite recently. I always thought a thing like this should exist, and I couldn't quite find anything like it, and once I found it, it was pretty awesome. It's a platform that lets you generate new email addresses and phone numbers every time you sign up for a website.

88.614 - 105.864 Lex Fridman

So it's called a masked email, which basically creates, I guess you could say, a fake email that hides your actual email. But it's not fake in that it actually exists and persists throughout time, and the website thinks it's real. It just forwards to your actual email. You can set up the forwarding.
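
The masked-email mechanism described here is easy to sketch. A minimal, hypothetical Python illustration (not Cloaked's actual implementation): generate a random alias per site and keep a forwarding table from alias back to the real inbox. The domain and names below are made up.

```python
import secrets

# Toy sketch of a masked-email service: each site gets a unique random alias,
# and a forwarding table maps the alias back to the real inbox, so the site
# never learns the real address.
forwarding_table: dict[str, str] = {}

def create_alias(site: str, real_email: str) -> str:
    alias = f"{secrets.token_hex(6)}@masked.example"  # hypothetical domain
    forwarding_table[alias] = real_email              # the forwarding rule
    return alias

def deliver(alias: str, message: str) -> tuple[str, str]:
    # Mail sent to the alias is simply forwarded to the real address.
    return forwarding_table[alias], message

alias = create_alias("shop.example", "me@real.example")
print(alias, deliver(alias, "Your order has shipped")[0])
```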

106.484 - 127.312 Lex Fridman

The point is the website or service that you sign up for doesn't know your actual phone number. It doesn't know your actual email. So this is a really interesting idea because... when you sign up to different websites, there's a kind of contract, unspoken contract that the email you provide and the phone number you provide will not be abused.

127.972 - 147.82 Lex Fridman

For the kind of abuse I'm talking about: in sort of the best case, just spam, or in the worst case, that email or phone number being sold out there, and then you get not just spam from one source, but spam from all of the sources all over the place. Anyway, this is just a smart thing to protect yourself. And it also does basic password manager stuff.

148.141 - 176.551 Lex Fridman

So you can think of Cloaked as a great password manager with extra privacy superpowers. You can go to cloaked.com/lex to get 14 days free, or for a limited time, use code LexPod when signing up to get 25% off an annual Cloaked plan. This episode is also brought to you by Shopify, a platform designed for anyone, yes, anyone including me, to sell anywhere with a great looking online store.

177.071 - 205.471 Lex Fridman

I used it to sell some T-shirts at lexfridman.com/store. You can check it out. I used the most basic store. It took just a few minutes and the store was up, from the shirt design being finished to the store being alive, and being able to sell T-shirts and ship those T-shirts thanks to an integration with a third party, of which there are thousands of integrations.

207.453 - 223.698 Lex Fridman

So for T-shirts, that's like on-demand printing, so you don't have to take care of the shipping and the printing and all that kind of stuff. All of that is integrated, super easy to do, and this works for any kind of business that sells stuff online. You can integrate it into your own website or you can sell it on Shopify itself, which is what I do.

224.438 - 250.306 Lex Fridman

You can sign up for a $1 per month trial period at shopify.com/lex, all lowercase. Go to shopify.com/lex to take your business to the next level today. This episode is also brought to you by BetterHelp, spelled H-E-L-P. They figure out what you need and match you with a licensed therapist in under 48 hours. Works for individuals, works for couples.

251.287 - 272.45 Lex Fridman

I'm a huge fan of talking as a way of exploring the human mind. Two people talking with a motivation and a goal in mind of surfacing certain kinds of problems and alleviating those kinds of problems. Sometimes the surfacing in itself does a lot of the alleviation.

273.571 - 299.874 Lex Fridman

Returning to a time in the past when trauma happened and reframing it in a way that helps you understand, that helps you forgive, that helps you let go, all of that. It's really powerful. And BetterHelp is just an accessible way of doing that, or at least trying talk therapy. So they've helped a lot of people. 4.4 million people got help. So you can be one of those.

300.455 - 325.965 Lex Fridman

If you want to try, check them out at betterhelp.com/lex and save on your first month. That's betterhelp.com/lex. This episode is also brought to you by ExpressVPN. I love that there's a kind of privacy theme to the sponsors in this episode. I think everybody should be using a VPN for many reasons. One, it can allow you to geographically

326.805 - 346.802 Lex Fridman

transport yourself, but the main reason is it just adds this extra layer of security and privacy between you and the ISP. They say they're technically not supposed to be collecting the data when you use things like Chrome in incognito mode, but they can be collecting the data. I don't know how the laws around that work, but I wouldn't trust it. So a VPN is essential.

347.402 - 368.526 Lex Fridman

For that, my favorite VPN for many, many, many, many, many, many years has been ExpressVPN. Big sexy button still works. It looks different, but still works on any operating system. My favorite being Linux. I can talk forever about why I love Linux. I wonder if Linux will be around with all this AI, with all this rapid AI development.

370.157 - 392.7 Lex Fridman

Maybe programmers, programming as a way of life, as a recreation for millions, as a profession for millions, will die out, and there'll only be a handful, a few, like the COBOL programmers of today, that carry the flag of knowing what Linux is, how to spell Linux, let alone use it. I wonder.

393.953 - 417.882 Lex Fridman

Hopefully not, because there's always room for optimizing at every level, the compilation from the human language to the AI language to the machine language to the zeros and ones, the compilation of the entire stack. I think there's a lot of jobs to be had, a lot of really... profitable, well-paying jobs to be had there, but maybe not millions of people are needed.

418.443 - 437.443 Lex Fridman

Maybe there'll be millions of people that program with just natural language, with just words, English, or whatever new language we create that the whole world can use. And the whole world, in using, can help break down the barriers of language. We arrived here, friends, when we started at the meager explanation of the use of a VPN.

438.483 - 478.976 Lex Fridman

You can also take this journey by going to expressvpn.com/lexpod for an extra three months free. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman. Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.

479.757 - 483.238 Sam Altman

That was definitely the most painful professional experience of my life.

484.619 - 495.064 Sam Altman

And chaotic and shameful and upsetting and a bunch of other negative things.

496.544 - 500.326 Sam Altman

There were great things about it too, and I wish it had not been

502.392 - 528.525 Sam Altman

in such an adrenaline rush that I wasn't able to stop and appreciate them at the time. But I came across this old tweet of mine, or this tweet of mine from that time period, which was like, it was kind of like going to your own eulogy, watching people say all these great things about you, and just, like, unbelievable support from people I love and care about. That was really nice.

529.226 - 549.716 Sam Altman

That whole weekend, with one big exception, I felt a great deal of love and very little hate, even though it felt like, I have no idea what's happening and what's going to happen here, and this feels really bad.

549.736 - 566.152 Sam Altman

There were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI,

567.453 - 587.524 Sam Altman

There was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future.

588.325 - 594.551 Lex Fridman

But the thing you had a sense that you would experience is some kind of power struggle.

594.811 - 602.819 Sam Altman

The road to AGI should be a giant power struggle. Like, the world should... Well, not should. I expect that to be the case.

603.439 - 624.484 Lex Fridman

And so you have to go through that, like you said, iterate as often as possible, figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate, all that, in order to de-escalate the power struggle as much as possible. Pacify it.

624.764 - 654.116 Sam Altman

But at this point, it feels... You know, like something that was in the past that was really unpleasant and really difficult and painful. But we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after. There was like this fugue state for kind of like the month after, maybe 45 days after.

655.976 - 683.295 Sam Altman

I was just sort of like drifting through the days. I was so out of it. I was feeling so down. Just on a personal psychological level. Yeah. Really painful. And hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission.

683.315 - 713.483 Lex Fridman

Well, it's still useful to go back there and reflect on board structures, on... power dynamics on how companies are run, the tension between research and product development and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future.

713.684 - 725.657 Lex Fridman

So there's value there in going back to both the personal psychological aspects of you as a leader and also just the board structure and all this kind of messy stuff. Definitely learned a lot about...

728.169 - 755.924 Sam Altman

structure and incentives and what we need out of a board. And I think it is valuable that this happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI.

756.465 - 767.076 Sam Altman

But thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that's super important.

767.837 - 784.611 Lex Fridman

Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just the human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and why don't we fire Sam kind of thing?

785.552 - 794.218 Sam Altman

I think the board members were, and are, well-meaning people on the whole.

795.258 - 817.419 Sam Altman

And I believe that in stressful situations, where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for

818.786 - 843.104 Sam Altman

OpenAI will be – we're going to have to have a board and a team that are good at operating under pressure. Do you think the board had too much power? I think boards are supposed to have a lot of power. But one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have like super voting shares or whatever.

844.625 - 862.857 Sam Altman

In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don't really answer to anyone but themselves. And there's ways in which that's good.

862.937 - 869.361 Sam Altman

But what we'd really like is for the board of OpenAI to answer to the world as a whole as much as that's a practical thing.

870.241 - 872.423 Lex Fridman

So there's a new board announced?

872.823 - 872.923 Unknown

Yeah.

873.772 - 891.386 Lex Fridman

There's, I guess, a new smaller board at first, and now there's a new final board. Not a final board yet. We've added some, we'll add more. Added some, okay. What is fixed in the new one that was perhaps broken in the previous one?

892.067 - 916.24 Sam Altman

The old board sort of got smaller over the course of about a year. It was nine and then it went down to six. And then we couldn't agree on who to add. And the board also, I think, didn't have a lot of experienced board members. And a lot of the new board members at OpenAI just have more experience as board members. I think that'll help.

917.495 - 928.664 Lex Fridman

Some of the people that were added to the board have been criticized. I heard a lot of people criticizing the addition of Larry Summers, for example. What's the process of selecting the board like? What's involved in that?

929.225 - 958.537 Sam Altman

So Bret and Larry were kind of decided in the heat of the moment over this very tense weekend. And that weekend was like a real roller coaster. It was like a lot of ups and downs. And we were trying to agree on new board members that both sort of the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members.

960.018 - 970.605 Sam Altman

Bret, I think I had even suggested previous to that weekend, but he was busy and didn't want to do it. And then we really needed help, and he would. We talked about a lot of other people too, but that was

973.294 - 979.157 Sam Altman

I felt like if I was going to come back, I needed new board members.

981.418 - 1002.771 Sam Altman

I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful that Adam would stay, but we wanted to get to, we considered various configurations, decided we wanted to get to a board of three. And had to find two new board members over the course of sort of a short period of time.

1003.651 - 1024.95 Sam Altman

So those were decided honestly without, you know... that's like, you kind of do that on the battlefield. You don't have time to design a rigorous process then. For the new board members that we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have.

1025.771 - 1046.532 Sam Altman

Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well. And so one thing that Brett says, which I really like, is that we want to hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring

1047.958 - 1054.744 Sam Altman

Nonprofit expertise, expertise in running companies, sort of good legal and governance expertise. That's kind of what we've tried to optimize for.

1054.764 - 1058.087 Lex Fridman

So is technical savvy important for the individual board members?

1058.227 - 1062.31 Sam Altman

Not for every board member, but for certainly some you need that. That's part of what the board needs to do.

1062.631 - 1074.521 Lex Fridman

So, I mean, the interesting thing that people probably don't understand about OpenAI, I certainly don't, is like all the details of running the business. When they think about the board, given the drama, they think about you, they think about like...

1076.331 - 1091.162 Lex Fridman

if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation to deliberate?

1091.983 - 1110.717 Sam Altman

Look, I think you definitely need some technical experts there. And then you need some people who are like, how can we deploy this in a way that will help people in the world the most and people who have a very different perspective? You know, I think a mistake that you or I might make is to think that only the technical understanding matters.

1111.698 - 1127.931 Sam Altman

And that's definitely part of the conversation you want that board to have. But there's a lot more about how that's going to just like impact society and people's lives that you really want represented in there too. And you're just kind of, are you looking at the track record of people or you're just having conversations? Track record is a big deal.

1128.571 - 1153.835 Sam Altman

You of course have a lot of conversations, but there are some roles where I totally ignore track record and just look at slope, ignore the Y-intercept. Thank you. Thank you for making it mathematical for the audience. For a board member, I do care much more about the Y-intercept. I think there is something deep to say about track record there,

1154.915 - 1157.657 Sam Altman

and experience is sometimes very hard to replace.
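
Taken literally, the slope-versus-intercept metaphor is just a degree-one fit of ability over time. A minimal sketch with invented numbers:

```python
import numpy as np

# Model measured ability over time as ability ~= slope * t + intercept:
# the slope is the rate of improvement, the Y-intercept is roughly where
# someone started (track record). All numbers here are made up.
t = np.array([0.0, 1.0, 2.0, 3.0])            # years observed
ability = np.array([2.0, 3.1, 4.2, 5.0])      # performance at each point
slope, intercept = np.polyfit(t, ability, 1)  # least-squares line
print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```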

1158.698 - 1179.015 Lex Fridman

Do you try to fit a polynomial function or an exponential one to the track record? No, that analogy doesn't carry that far. All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?

1180.016 - 1200.735 Sam Altman

I mean, there were so many lows. Like, it was very bad. There were great high points, too. Like, my phone was just sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have because I was just in the middle of this firefight. But that was really nice.

1200.875 - 1204.797 Sam Altman

But on the whole, it was like a very painful weekend, and also just like a very...

1208.464 - 1231.213 Sam Altman

It was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. You know, the board did this... Friday afternoon, I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this.

1231.394 - 1252.696 Sam Altman

And so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well, you know, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I'd always liked the most was just getting to work with the researchers.

1253.237 - 1269.091 Sam Altman

And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. It didn't even occur to me at the time that this was possibly all going to get undone. This was like Friday afternoon. So you'd accepted the death of this previous thing. Very quickly.

1269.291 - 1292.514 Sam Altman

Like within, you know, I mean, I went through a little period of confusion and rage, but very quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening, for the first time, that I heard from the exec team here, which was like, hey, we're going to fight this, and we think, whatever.

1293.394 - 1315.651 Sam Altman

And then I went to bed just still being like, okay, excited, onward. Were you able to sleep? Not a lot. One of the weird things was there was this period of four and a half days where I sort of didn't sleep much, didn't eat much, and still kind of had a surprising amount of energy. You learn a weird thing about adrenaline in wartime.

1316.012 - 1319.152 Lex Fridman

So you kind of accepted the death of, you know, this baby, OpenAI.

1319.172 - 1323.054 Sam Altman

And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.

1323.074 - 1324.274 Lex Fridman

It's a very good coping mechanism.

1324.574 - 1334.177 Sam Altman

And then Saturday morning, two of the board members called and said, hey, we, you know, we didn't mean to destabilize things. We don't want to destroy a lot of value here. You know, can we talk about you coming back?

1335.297 - 1357.015 Sam Altman

And I immediately didn't want to do that, but I thought a little more and I was like, well, I do really care about the people here, the partners, shareholders, like all of that. I love this company. And so I thought about it and I was like, well, okay, but here's the stuff I would need. And then the most painful time of all was over the course of that weekend,

1358.735 - 1377.269 Sam Altman

I kept thinking and being told, and we all kept, not just me, like the whole team here kept thinking, well, we were trying to like keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever. We kept being told like, all right, we're almost done. We're almost done. We just need like a little bit more time. And it was this like very confusing state.

1377.589 - 1403.373 Sam Altman

And then Sunday evening, when again, like every few hours, I expected that we were going to be done and we were going to figure out a way for me to return and things go back to how they were. The board then appointed a new interim CEO. And then I was like, I mean, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something.

1403.573 - 1429.5 Sam Altman

It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate. I really just felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.

1430.22 - 1441.763 Lex Fridman

You've spoken highly of Mira Murati, that she helped especially, as you put in the tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?

1442.083 - 1470.786 Sam Altman

Well, she did a great job during that weekend in a lot of chaos. But people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just sort of the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make.

1471.807 - 1484.496 Lex Fridman

That was what I meant about the quiet moments. Meaning, like, most of the work is done on a day-by-day, in a meeting-by-meeting. Just be present and make great decisions.

1484.636 - 1491.26 Sam Altman

Yeah, I mean, look, what you have wanted to spend the last 20 minutes talking about, and I understand, is this one very dramatic weekend.

1491.48 - 1491.66 Lex Fridman

Yeah.

1492.601 - 1496.123 Sam Altman

But that's not really what OpenAI is about. OpenAI is really about the other seven years.

1496.563 - 1518.366 Lex Fridman

Well, yeah, human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still, that's something people focus on. Very, very understandable. It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments, so it's illustrative. Let me ask you about Ilya.

1519.941 - 1542.487 Lex Fridman

Is he being held hostage in a secret nuclear facility? No. What about a regular secret facility? No. What about a nuclear non-secret facility? Neither of them. Not that either. I mean, this is becoming a meme at some point. You've known Ilya for a long time. He was obviously in part of this drama with the board and all that kind of stuff. What's your relationship with him now?

1543.267 - 1560.15 Sam Altman

I love Ilya. I have tremendous respect for Ilya. I... I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together for certainly the rest of my career. He's a little bit younger than me. Maybe he works a little bit longer.

1560.17 - 1572.277 Lex Fridman

You know, there's a meme that he saw something. Like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?

1574.369 - 1603.824 Sam Altman

Oh, he has not seen AGI. None of us have seen AGI. We have not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. As we continue to make significant progress,

1605.005 - 1616.253 Sam Altman

Ilya is one of the people that I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission.

1617.894 - 1635.876 Lex Fridman

So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

1636.596 - 1656.205 Lex Fridman

I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always doing this long-term thinking type of thing. So he's not thinking about what this is going to be in a year, he's thinking about it in 10 years. Just thinking from first principles, like, okay, if this scales, what are the fundamentals here? Where's this going?

1656.685 - 1675.133 Lex Fridman

And so that's a foundation for him thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet? Is it that he's just doing some soul-searching again?

1675.153 - 1692.834 Sam Altman

I don't want to speak for Ilya. Oh yeah. I think that you should ask him that. He's definitely a thoughtful guy. I kind of think Ilya is always on a soul search in a really good way.

1693.155 - 1703.38 Lex Fridman

Yes, yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which I've never seen that side of him. It's very sweet when that happens.

1705.769 - 1720.425 Sam Altman

I've never witnessed a silly Ilya, but I look forward to that as well. I was at a dinner party with him recently, and he was playing with a puppy. And he was in a very silly mood, very endearing. And I was thinking, oh man, this is not the side of Ilya that the world sees the most.

1721.761 - 1729.844 Lex Fridman

So just to wrap up this whole saga, are you feeling good about the board structure, about all of this, and where it's moving?

1729.864 - 1753.472 Sam Altman

I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience.

1753.532 - 1763.34 Sam Altman

I think it was like a perfect storm of weirdness. It was like a preview for me of what's going to happen as the stakes get higher and higher, and the need we have for robust governance structures and processes and people.

1766.082 - 1768.804 Sam Altman

I am kind of happy it happened when it did, but it was...

1771.297 - 1776.681 Lex Fridman

a shockingly painful thing to go through. Did it make you more hesitant in trusting people?

1776.981 - 1799.017 Sam Altman

Yes. Just on a personal level? Yes. I think I'm like an extremely trusting person. I always had a life philosophy of, you know, like, don't worry about all of the paranoia. Don't worry about the edge cases. You know, you get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed.

1800.178 - 1814.005 Sam Altman

And I really don't like this. It's definitely changed how I think about just like default trust of people and planning for the bad scenarios. You got to be careful with that. Are you worried about becoming a little too cynical? I'm not worried about becoming too cynical.

1814.225 - 1821.129 Sam Altman

I think I'm like the extreme opposite of a cynical person, but I'm worried about just becoming like less of a default trusting person.

1822.076 - 1846.063 Lex Fridman

I'm actually not sure which mode is best to operate in for a person who's developing AGI. Trusting or untrusting. It's an interesting journey you're on. But in terms of structure, see, I'm more interested on the human level. Like, how do you surround yourself with humans that are building cool shit, but also are making wise decisions?

1846.083 - 1850.684 Lex Fridman

Because the more money you start making, the more power the thing has, the weirder people get.

1852.1 - 1876.131 Sam Altman

I think you could make all kinds of comments about the board members and the level of trust I should have had there or how I should have done things differently. But in terms of the team here, I think you'd have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day.

1876.191 - 1878.992 Sam Altman

And I think being surrounded with people like that is...

1881.141 - 1897.281 Lex Fridman

It's really important. Our mutual friend, Elon, sued OpenAI. What is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong?

1898.516 - 1916.27 Sam Altman

I don't know what it's really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it's hard to go back and really remember what it was like then. But this was before language models were a big deal.

1916.31 - 1928.775 Sam Altman

This was before we had any idea about an API or selling access to a chatbot. This was before we had any idea we were going to productize at all. So we're like, we're just going to try to do research and we don't really know what we're going to do with that.

1929.436 - 1936.759 Sam Altman

I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong.

1937.879 - 1949.25 Sam Altman

And then it became clear that we were going to need to do different things and also have huge amounts more capital.

1950.031 - 1967.054 Sam Altman

So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure? And then you patch it again and patch it again, and you end up with something that does look kind of eyebrow-raising to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way.

1967.834 - 1977.492 Sam Altman

And it doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are, I don't know.

1979.034 - 1989.206 Lex Fridman

To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it? Oh, we just said like,

1991.353 - 2009.402 Sam Altman

Elon said this set of things. Here's our characterization. Here's the characterization of how this went down. We tried to not make it emotional and just sort of say, here's the history.

2010.343 - 2034.381 Lex Fridman

I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys were, like, a small group of researchers crazily talking about AGI when everybody was laughing at that thought.

2035.281 - 2041.325 Lex Fridman

Not that long ago, Elon was crazily talking about launching rockets when people were laughing at that thought.

2042.972 - 2067.559 Lex Fridman

So I think he'd have more empathy for this. I mean, I do think that there's personal stuff here, that there was a split, that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal- Elon chose to part ways. Can you describe that exactly, the choosing to part ways?

2067.819 - 2086.405 Sam Altman

He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. Various times he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla.

2086.425 - 2090.907 Sam Altman

We didn't want to do that, and he decided to leave, and that's fine.

2109.185 - 2114.19 Sam Altman

My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it.

2114.25 - 2128.803 Lex Fridman

I'm pretty sure that's what it was. So what did the word open in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What did it mean to you at the time? What does it mean to you now?

2129.844 - 2155.623 Sam Altman

I would definitely pick a different—speaking of going back with an oracle, I'd pick a different name— One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission.

2156.503 - 2177.574 Sam Altman

We want to put increasingly powerful tools in the hands of people for free and get them to use them. And I think... That kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don't even teach them, they'll figure it out and let them go build an incredible future for each other with that. That's a big deal.

2177.994 - 2200.581 Sam Altman

So if we can keep putting like free or low cost or free and low cost powerful AI tools out in the world, I think it's a huge deal for how we fulfill the mission. Yeah. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.

2201.602 - 2212.508 Lex Fridman

So he said, change your name to Closed AI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes about the name?

2212.528 - 2219.012 Sam Altman

I think that speaks to the seriousness with which Elon means the lawsuit.

2220.678 - 2226.282 Lex Fridman

And I mean, that's like an astonishing thing to say, I think.

2227.542 - 2239.67 Lex Fridman

Well, I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way.

2242.242 - 2254.59 Sam Altman

So look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical. And then he announced that Grok will open source things this week. I don't think open source versus not is what this is really about for him.

2255.031 - 2266.459 Lex Fridman

Well, we'll talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit. That's great. But friendly competition versus like I personally hate lawsuits.

2267.419 - 2283.802 Sam Altman

Look, I think this whole thing is like unbecoming of a builder, and I respect Elon as one of the great builders of our time. I know he knows what it's like to have haters attack him. And it makes me extra sad he's doing it to us.

2284.243 - 2288.387 Lex Fridman

Yeah, he's one of the greatest builders of all time. Potentially the greatest builder of all time.

2288.687 - 2302.38 Sam Altman

It makes me sad. And I think it makes a lot of people sad. There's a lot of people who've really looked up to him for a long time. I said in some interview or something that I miss the old Elon. And the number of messages I got being like that exactly encapsulates how I feel.

2302.938 - 2326.787 Lex Fridman

I think you should just win. You should just make X's Grok beat GPT, and then GPT beats Grok, and it's just a competition. And it's beautiful for everybody. But on the question of open source, do you think there's a lot of companies playing with this idea? It's quite interesting. I would say Meta, surprisingly, has led the way on this.

2327.788 - 2348.558 Lex Fridman

Or at least took the first step in the game of chess of really open sourcing the model. Of course, it's not a state-of-the-art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you... played around with this idea?

2348.598 - 2364.327 Sam Altman

Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, which I think there's huge demand for. I think there will be some open source models. There will be some closed source models. It won't be unlike other ecosystems in that way.

2365.348 - 2380.798 Lex Fridman

I listened to the All-In podcast talking about this lawsuit and all that kind of stuff. And they were more concerned about the precedent of going from nonprofit to this capped-profit, what precedent this sets for other startups.

2380.819 - 2398.056 Sam Altman

Is that something? I don't... I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here. Okay. So most startups should just go... For sure. And again, if we knew what was going to happen, we would have done that too.

2398.696 - 2423.566 Lex Fridman

In theory, if you dance beautifully here, there's some tax incentives or whatever. I don't think that's how most people think about these things. It's just not possible to save a lot of money for a startup if you do it this way. No, I think there's laws that would make that pretty difficult. Where do you hope this goes with Elon? This tension, this dance, where do you hope this?

2424.166 - 2434.13 Lex Fridman

If we go one, two, three years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.

2437.931 - 2450.467 Sam Altman

Yeah, I really respect Elon. And I hope that years in the future we have an amicable relationship.

2451.548 - 2485.948 Lex Fridman

Yeah, I hope you guys have an amicable relationship like this month. And just compete and win and explore these ideas together. I do suppose there's competition for talent or whatever, but it should be friendly competition. Just build cool shit. And Elon is pretty good at building cool shit. But so are you. So speaking of cool shit, Sora, there's like a million questions I could ask.

2486.108 - 2510.109 Lex Fridman

First of all, it's amazing. It truly is amazing. On a product level, but also just on a philosophical level. So let me just, technical slash philosophical, ask: what do you think it understands about the world today, more or less than GPT-4, for example? The world model, when you train on these patches versus language tokens.

2510.129 - 2532.909 Sam Altman

I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say, oh, this is all fake. But it's not all fake. It's just some of it works and some of it doesn't work.

2533.889 - 2552.628 Sam Altman

Like, I remember when I started first watching Sora videos and I would see, like, a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, oh, that's pretty good. Or there's examples where, like, the underlying physics looks so well represented over, you know, a lot of steps in a sequence.

2552.648 - 2570.796 Sam Altman

It's like, oh, this is, like, quite impressive. Yeah. But, like, fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, you know, there were a lot of people that dunked on each version, saying, it can't do this, it can't do that, and, like, look at it now.

2571.976 - 2590.378 Lex Fridman

Well, the thing you just mentioned, with occlusions, is basically modeling the physics, the three-dimensional physics of the world, sufficiently well to capture those kinds of things? Well... Or, like, yeah, maybe you can tell me: in order to deal with occlusions, what does the world model need to have?

2590.699 - 2599.243 Sam Altman

Yeah, so what I would say is it's doing something to deal with occlusions really well. To say that it has, like, a great underlying 3D model of the world, that's a little bit more of a stretch.

2599.764 - 2603.706 Lex Fridman

But can you get there through just these kinds of two-dimensional training data approaches?

2605.144 - 2614.666 Sam Altman

It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't, but... What are some interesting limitations of the system that you've seen?

2614.686 - 2618.126 Lex Fridman

I mean, there's been some fun ones you've posted.

2618.146 - 2628.268 Sam Altman

There's all kinds of fun. I mean, like, you know, cats sprouting an extra limb at random points in a video. Like, pick what you want, but there's still a lot of problems, a lot of weaknesses.

2628.448 - 2645.336 Lex Fridman

Do you think that's a fundamental flaw of the approach? Or is it just a bigger model, or better technical details, or better data, more data, that's going to solve the cat sprouting? I would say yes to both.

2645.922 - 2656.026 Sam Altman

Like, I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also, I think it'll get better with scale.

2656.767 - 2674.334 Lex Fridman

Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches. So it converts all visual data, diverse kinds of visual data, videos, and images into patches. Is the training, to the degree you can say, fully self-supervised? Or is there some manual labeling going on? Like, what's the involvement of humans in all this?
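
OpenAI's Sora technical report describes turning videos into spacetime patches, though the exact scheme isn't public. As a rough sketch of the general idea, here is ViT-style patchification in numpy; the function name and patch sizes are illustrative assumptions, not Sora's.

```python
import numpy as np

def video_to_patches(video: np.ndarray, pt: int = 2, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Split a video of shape (T, H, W, C) into flattened spacetime patches.

    Returns (num_patches, pt*ph*pw*C): one row per patch, playing the role
    that text tokens play for a language model."""
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    v = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)      # group the patch grid first
    return v.reshape(-1, pt * ph * pw * C)    # one flat vector per patch

# A 16-frame 64x64 RGB clip becomes a sequence of 8*4*4 = 128 patch tokens.
clip = np.random.rand(16, 64, 64, 3).astype(np.float32)
print(video_to_patches(clip).shape)  # (128, 1536)
```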

2675.936 - 2684.909 Sam Altman

I mean, without saying anything specific about the Sora approach, we use lots of human data in our work,

2686.727 - 2710.325 Lex Fridman

but not internet-scale data. So lots of humans. Lots is a complicated word, Sam. I think lots is a fair word in this case. But it doesn't, because to me, lots, like listen, I'm an introvert, and when I hang out with like three people, that's a lot of people. Four people, that's a lot. But I suppose you mean more than... More than three people work on labeling the data for these models, yeah.

2710.365 - 2737.222 Lex Fridman

Okay, all right. But fundamentally, there's a lot of self-supervised learning, because what you mentioned in the technical report is internet-scale data. That's another beautiful... it's like poetry. So it's a lot of data that's not human-labeled. It's self-supervised in that way. And then the question is, how much data is there on the internet that could be used in this, that

2738.061 - 2748.167 Lex Fridman

is conducive to this kind of self-supervised way. If only we knew the details of the self-supervised learning. Have you considered opening it up a little more on the details?

2748.708 - 2749.028 Sam Altman

We have.

2749.308 - 2763.497 Lex Fridman

You mean for Sora specifically? Sora specifically. Because it's so interesting that, like, can the same magic of LLMs now start moving towards visual data? And what does it take to do that?

2764.707 - 2767.85 Sam Altman

I mean, it looks to me like yes, but we have more work to do.
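
For a concrete sense of what "self-supervised" means in this exchange: the labels come from the data itself, e.g., predicting the next token (or patch) from the ones before it. A minimal sketch of that objective, assuming a plain next-token setup rather than whatever Sora actually uses:

```python
import numpy as np

def next_token_loss(logits: np.ndarray, token_ids: np.ndarray) -> float:
    """Cross-entropy for next-token prediction: the label for position t is
    just the data at position t+1, so no human annotation is needed."""
    preds, labels = logits[:-1], token_ids[1:]
    z = preds - preds.max(axis=1, keepdims=True)  # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

rng = np.random.default_rng(0)
vocab, seq = 1000, 32
print(next_token_loss(rng.normal(size=(seq, vocab)), rng.integers(0, vocab, seq)))
```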

2768.21 - 2775.157 Lex Fridman

Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?

2775.177 - 2800.74 Sam Altman

I mean, frankly speaking, one thing we have to do before releasing the system is just like get it to work at a level of efficiency that will deliver the scale people are going to want from this. So I don't want to like downplay that. And there's still a ton, ton of work to do there. But, you know, you can imagine like issues with deep fakes, misinformation.

2801.96 - 2810.484 Sam Altman

We try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can go badly.

2811.284 - 2820.368 Lex Fridman

There's a lot of tough questions here. You're dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?

2820.968 - 2827.571 Sam Altman

I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for?

2828.291 - 2856.445 Sam Altman

use of it? And I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But, you know, if I'm an artist, for example, (a) I would like to be able to opt out of people generating art in my style, and (b) if they do generate art in my style, I'd like to have some economic model associated with that. Yeah, it's that transition from CDs to Napster to Spotify.

2857.483 - 2858.824 Lex Fridman

We have to figure out some kind of model.

2858.844 - 2860.625 Sam Altman

The model changes, but people have got to get paid.

2861.866 - 2868.071 Lex Fridman

Well, there should be some kind of incentive, if we zoom out even more, for humans to keep doing cool shit.

2868.472 - 2882.983 Sam Altman

Of everything I worry about, that's not one: humans are going to do cool shit, and society is going to find some way to reward it. That seems pretty hardwired. We want to create. We want to be useful. We want to achieve status in whatever way. That's not going anywhere, I don't think.

2883.263 - 2891.548 Lex Fridman

But the reward might not be monetary, financial. It might be like fame and celebration of other cool people.

2891.588 - 2896.891 Sam Altman

Maybe financial in some other way. Again, I don't think we've seen the last evolution of how the economic system is going to work.

2897.682 - 2902.985 Lex Fridman

Yeah, but artists and creators are worried. When they see Sora, they're like, holy shit. Sure.

2903.126 - 2905.787 Sam Altman

Artists were also super worried when photography came out.

2906.228 - 2906.468 Lex Fridman

Yeah.

2906.868 - 2915.253 Sam Altman

And then photography became a new art form and people made a lot of money taking pictures. And I think things like that will keep happening. People will use the new tools in new ways.

2916.414 - 2927.004 Lex Fridman

If we just look at YouTube or something like this, how much of that will be Sora-like AI-generated content, do you think, in the next five years?

2927.725 - 2945.129 Sam Altman

People talk about how many jobs AI is going to do in five years. And the framework that people have is what percentage of current jobs are just going to be totally replaced by some AI doing the job. The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do, and over what time horizon.

2945.81 - 2963.469 Sam Altman

So if you think of all of the five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? And I think that's a way more interesting, impactful, important question than how many jobs AI can do.

2963.869 - 2979.879 Sam Altman

Because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point,

2980.379 - 3003.665 Sam Altman

That's not just a quantitative change, but it's a qualitative one, too, about the kinds of problems you can keep in your head. I think that for videos on YouTube, it'll be the same. Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it, sort of directing and running it.

3003.685 - 3028.941 Lex Fridman

Yeah, it's so interesting. I mean, it's scary, but it's interesting to think about. I tend to believe that humans like to watch other humans or other human- Humans really care about other humans a lot. Yeah. If there's a cooler thing that's better than a human, humans care about that for like two days and then they go back to humans. That seems very deeply wired. Yeah. It's the whole chess thing.

3029.941 - 3040.407 Lex Fridman

Yeah, but now everybody keeps playing chess, and let's ignore the elephant in the room: that humans are really bad at chess relative to AI systems. We still run races, and cars are much faster.

3040.427 - 3041.808 Sam Altman

I mean, there's like a lot of examples.

3042.148 - 3062.994 Lex Fridman

Yeah. And maybe it'll just be tooling like in the Adobe suite type of way where you can just make videos much easier and all that kind of stuff. Listen, I hate being in front of the camera. If I can figure out a way to not be in front of the camera, I would love it. Unfortunately, it would take a while. Like that, generating faces.

3063.434 - 3091.238 Lex Fridman

It's getting there, but generating faces in video format is tricky when it's specific people versus generic people. Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking back, it'll probably be this kind of historic, pivotal moment with 3.5 and 4 with ChatGPT. Maybe 5 will be the pivotal moment. I don't know. Hard to say that looking forwards. We never know.

3091.278 - 3117.228 Lex Fridman

That's the annoying thing about the future. It's hard to predict. But for me, looking back, GPT-4, ChatGPT is pretty damn impressive, like, historically impressive. So allow me to ask: what's been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo? I think it kind of sucks. Typical human also. Gotten used to an awesome thing.

3117.808 - 3143.633 Sam Altman

No, I think it is an amazing thing. But relative to where we need to get to and where I believe we will get to, you know, at the time of GPT-3, people were like, oh, this is amazing. This is this marvel of technology. And it is. It was. But, you know, now we have GPT-4, and you look at GPT-3 and you're like, that's unimaginably horrible.

3145.374 - 3164.768 Sam Altman

I expect that the delta between 5 and 4 will be the same as between 4 and 3. And I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them. And that's how we make sure the future is better.

3165.709 - 3170.872 Lex Fridman

What are the most glorious ways that GPT-4 sucks? Meaning?

3170.892 - 3172.213 Sam Altman

What are the best things it can do?

3172.713 - 3181.177 Lex Fridman

What are the best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you inspiration and hope for the future.

3182.398 - 3188.941 Sam Altman

You know, one thing I've been using it for more recently is sort of like a brainstorming partner.

3189.601 - 3189.761 Lex Fridman

Yep.

3190.86 - 3217.83 Sam Altman

And there's a glimmer of something amazing in there. I don't think it gets... you know, when people talk about what it does, they're like, oh, it helps me code more productively. It helps me write faster and better. It helps me, you know, translate from this language to another. All these, like, amazing things. But there's something about the creative brainstorming partner.

3217.85 - 3234.282 Sam Altman

I need to come up with a name for this thing. I need to think about this problem in a different way. I'm not sure what to do here. That I think gives a glimpse of something I hope to see more of. One of the other things that you can see a very small glimpse of is

3236.138 - 3248.902 Sam Altman

when it can help on longer-horizon tasks, you know, break down something into multiple steps, maybe, like, execute some of those steps, search the internet, write code, whatever, put that together. When that works, which is not very often, it's, like, very magical.
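
[Editor's note: what Sam describes here, decomposing a task, executing steps, and putting the results back together, is the basic plan-and-execute loop behind most agent frameworks. A minimal sketch of the pattern; `call_model` is a hypothetical stub standing in for any chat-completions API, not OpenAI's implementation:]

```python
# Minimal plan-and-execute loop, a sketch of the pattern described above.
# `call_model` is a hypothetical placeholder; wire it to any LLM API.

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat completion)."""
    raise NotImplementedError("connect this to a real model API")

def plan_and_execute(task: str) -> str:
    # 1. Ask the model to break the task into discrete steps.
    plan = call_model(
        f"Break this task into a short numbered list of steps:\n{task}"
    )
    steps = [line for line in plan.splitlines() if line.strip()]

    # 2. Execute each step, carrying forward the accumulated results.
    results = []
    for step in steps:
        context = "\n".join(results)
        results.append(call_model(
            f"Task: {task}\nCompleted so far:\n{context}\nNow do: {step}"
        ))

    # 3. Put it together: synthesize a final answer from the step outputs.
    return call_model(
        f"Task: {task}\nStep results:\n" + "\n".join(results)
        + "\nCombine these into a final answer."
    )
```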

3250.586 - 3252.647 Lex Fridman

The iterative back and forth with a human.

3253.427 - 3272.035 Sam Altman

It works a lot for me. What do you mean? Uh, iterative back and forth with a human, it works a lot. When it can go do, like, a 10-step problem on its own... oh, it doesn't work for that too often. Sometimes. At multiple layers of abstraction, or do you mean just sequential? Both, like, you know, to break it down and then do things at different layers of abstraction and put them together.

3273.456 - 3288.38 Sam Altman

Look, I don't want to, I don't want to, like, downplay the accomplishment of GPT-4. Um, but I don't want to overstate it either. And I think this point, that we are on an exponential curve, we will look back relatively soon at GPT-4 like we look back at GPT-3 now.

3289.929 - 3305.323 Lex Fridman

That said, I mean, ChatGPT was a transition to where people started to believe it. There was a kind of... there was an uptick of believing. Not internally at OpenAI, perhaps. There are believers here. But when you think about Google...

3305.363 - 3323.722 Sam Altman

And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface than the... And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you, and how to use it, than the underlying model itself.

3324.422 - 3341.205 Lex Fridman

How important is each of those two things: the underlying model, and the RLHF, or something of that nature, that tunes it to be more compelling to the human, more effective and productive for the human?

3341.525 - 3365.017 Sam Altman

I mean, they're both super important, but the RLHF, the post-training step, the little wrapper of things that, from a compute perspective, we do on top of the base model, even though it's a huge amount of work, that's really important, to say nothing of the product that we build around it. In some sense, we did have to do two things.

3365.037 - 3382.678 Sam Altman

We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align and make it useful.
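
[Editor's note: for readers who want the gist of that post-training step, RLHF typically starts by fitting a reward model on human preference pairs, then optimizing the base model against it. A toy sketch of just the reward-model objective in PyTorch, illustrative only, not OpenAI's actual pipeline:]

```python
import torch
import torch.nn.functional as F

# Toy reward model: scores an (already-embedded) response with one scalar.
# Real systems score token sequences with a transformer; this is illustrative.
reward_model = torch.nn.Linear(768, 1)

def preference_loss(chosen_emb: torch.Tensor, rejected_emb: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss used to fit reward models from human
    rankings: push r(chosen) above r(rejected)."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# One gradient step on a fake batch of 4 preference pairs.
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)
loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
loss.backward()
opt.step()
```

[The fitted reward model is then used as the training signal for a policy-optimization stage such as PPO; that second stage is omitted here.]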

3383.559 - 3388.926 Lex Fridman

And how you make the scale work where a lot of people can use it at the same time, all that kind of stuff.

3389.086 - 3405.853 Sam Altman

And that. But, you know, that was, like, a known difficult thing. Like, we knew we were going to have to scale it up. We had to go do two things that had, like, never been done before, that were both, I would say, quite significant achievements, and then a lot of things, like scaling it up, that other companies have had to do before.

3407.575 - 3418.783 Lex Fridman

How does the context window compare, going from 8K to 128K tokens, from GPT-4 to GPT-4 Turbo?

3419.403 - 3441.397 Sam Altman

Most people don't need all the way to 128 most of the time, although if we dream into the distant future, like way distant future, we'll have context length of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great. So for now, the way people use these models, they're not doing that.

3441.777 - 3455.204 Sam Altman

And, you know, people sometimes paste in a paper or, you know, a significant fraction of a code repository or whatever. But most usage of the models is not using the long context most of the time.
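
[Editor's note: to make the 8K-versus-128K numbers concrete, the unit here is tokens, not characters, and you can count them locally with OpenAI's tiktoken package (assuming it is installed) before deciding whether a paper or a chunk of a repo will fit:]

```python
import tiktoken  # pip install tiktoken

# Tokenizer matching GPT-4's encoding.
enc = tiktoken.encoding_for_model("gpt-4")

def fits_in_context(text: str, context_tokens: int = 128_000) -> bool:
    """Count tokens the way GPT-4 does and check against a context limit."""
    n = len(enc.encode(text))
    print(f"{n} tokens ({n / context_tokens:.1%} of a {context_tokens}-token window)")
    return n <= context_tokens

fits_in_context("An entire paper, or a significant fraction of a code repo...")
```

[In practice you would also leave headroom below the limit, since the model's response shares the same window.]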

3456.165 - 3472.068 Lex Fridman

I like that this is your "I have a dream" speech. One day you'll be judged by the full context of your character, or of your whole lifetime. That's interesting. So that's part of the expansion that you're hoping for, is a greater and greater context.

3472.789 - 3484.057 Sam Altman

I saw this internet clip once. I'm going to get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer. Maybe 64K, maybe 640K, something like that. And most of it was used for the screen buffer.

0
💬 0
0
💬 0

3485.601 - 3506.848 Sam Altman

And he just, he seemed genuine in this, he couldn't imagine that the world would eventually need gigabytes of memory in a computer, or terabytes of memory in a computer. And you always do just need to, like, follow the exponential of technology. And we're going to, we will find out how to use better technology.

3507.289 - 3521.5 Sam Altman

So I can't really imagine what it's like right now for context lengths to go out to the billions someday. And they might not literally go there, but effectively it'll feel like that. But I know we'll use it and really not want to go back once we have it.

3522.501 - 3544.849 Lex Fridman

Yeah. Even saying billions 10 years from now might seem dumb, because it'll be, like, trillions upon trillions. Sure. There'll be some kind of breakthrough that will effectively feel like infinite context. But even 128K, I have to be honest, I haven't pushed it to that degree. Maybe putting in entire books or, like, parts of books and so on. Papers.

3546.87 - 3549.132 Lex Fridman

What are some interesting use cases of GPT-4 that you've seen?

3549.712 - 3567.765 Sam Altman

The thing that I find most interesting is not any particular use case, we can talk about those, but it's people who kind of, like... this is mostly younger people, but people who use it as, like, their default start for any kind of knowledge work task. And it's the fact that it can do a lot of things reasonably well.

3568.145 - 3577.511 Sam Altman

You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to, like, edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow.

3578.451 - 3601.954 Lex Fridman

I do as well, for many things. I use it as a reading partner for reading books. It helps me think through ideas, especially when the books are classics, so it's really well written about. I find it often to be significantly better than even Wikipedia on well-covered topics. It's somehow more balanced and more nuanced.

3602.514 - 3621.342 Lex Fridman

Or maybe it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what that is. You mentioned this collaboration. I'm not sure where the magic is, if it's in here or if it's in there, or if it's somewhere in between. I'm not sure. But one of the things that concerns me for knowledge tasks when I start with GPT is

3622.441 - 3640.768 Lex Fridman

I'll usually have to do fact-checking after, like, check that it didn't come up with fake stuff. How do you deal with that? GPT can come up with fake stuff that sounds really convincing. So how do you ground it in truth?

3641.681 - 3652.646 Sam Altman

That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to continue to work on it, and we're not going to have it all solved this year.
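
[Editor's note: one cheap screening heuristic available today, separate from anything OpenAI ships: sample the same factual question several times and flag answers that disagree, since confident hallucinations are often unstable across samples. A sketch using the openai Python client (v1-style API); the 0.8 threshold and exact-string matching are simplifying assumptions:]

```python
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def consistency_check(question: str, n: int = 5) -> tuple[str, float]:
    """Sample n answers at nonzero temperature; return the most common
    answer and its agreement rate. Low agreement = fact-check by hand."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0.7,
            messages=[{"role": "user",
                       "content": f"Answer in one short sentence: {question}"}],
        )
        answers.append(resp.choices[0].message.content.strip())
    top, count = Counter(answers).most_common(1)[0]
    return top, count / n

answer, agreement = consistency_check("In what year was the transistor invented?")
if agreement < 0.8:  # arbitrary threshold for this sketch
    print("Low agreement; verify against a primary source:", answer)
```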

3653.286 - 3659.149 Lex Fridman

Well, the scary thing is, as it gets better, you'll start not doing the fact-checking more and more, right?

3659.589 - 3673.47 Sam Altman

I... I'm of two minds about that. I think people are like much more sophisticated users of technology than we often give them credit for. And people seem to really understand that GPT, any of these models hallucinate some of the time. And if it's mission critical, you got to check it.

3673.95 - 3679.151 Lex Fridman

Except journalists don't seem to understand that. I've seen journalists half-assedly just using GPT-4.

3680.631 - 3685.012 Sam Altman

Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them.

3686.311 - 3708.099 Lex Fridman

Well, I think the bigger criticism is perhaps that the pressures and the incentives of being a journalist are that you have to work really quickly, and this is a shortcut. I would love our society to incentivize... I would too. ...long journalistic efforts that take days and weeks, and rewards great in-depth journalism.

3708.48 - 3728.003 Lex Fridman

Also journalism that presents stuff in a balanced way, where it, like, celebrates people while criticizing them, even though the criticism is the thing that gets clicks. And making shit up also gets clicks. And headlines that mischaracterize completely. I'm sure you have a lot of people dunking on... well, all that drama probably got a lot of clicks. Probably did.

3730.744 - 3750.881 Lex Fridman

And that's a bigger problem about human civilization that I'd love to see solved, where we celebrate a bit more. You've given ChatGPT the ability to have memories of previous conversations; you've been playing with that. And also the ability to turn off memory, which I wish I could do sometimes, just turn on and off, depending.

3751.641 - 3762.144 Lex Fridman

I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, like playing around with that idea of remembering conversations or not?

3762.664 - 3782.504 Sam Altman

We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there's a lot of other things to do, but that's where we'd like to head.

3782.844 - 3791.308 Sam Altman

You'd like to use a model and over the course of your life, or use a system, there'll be many models, and over the course of your life, it gets better and better.

3792.109 - 3809.57 Lex Fridman

Yeah, how hard is that problem? Because right now it's more like remembering little factoids and preferences and so on. What about remembering, like, don't you want GPT to remember all the shit you went through in November? And all the drama. Yeah, yeah, yeah. Because right now you're clearly blocking it out a little bit.

3809.59 - 3835.074 Sam Altman

It's not just that I want it to remember that. I want it to integrate the lessons of that. Yes. And remind me in the future what to do differently or what to watch out for. And, you know, we all gain from experience over the course of our lives, to varying degrees, and I'd like my AI agent to gain from that experience too.

3836.474 - 3854.481 Sam Altman

So if we go back and let ourselves imagine that, you know, trillions and trillions of context length: if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that'd be pretty cool, I think.

3855.662 - 3873.65 Lex Fridman

Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, the more effective the AI becomes at really integrating all the experiences and all the data that happened to you and giving you advice?

3873.67 - 3891.275 Sam Altman

I think the right answer there is just user choice. You know, anything I want stricken from the record from my AI agent, I want to be able to, like, take out. If I don't want it to remember anything, I want that too. You and I may have different opinions about where on that privacy-utility trade-off for our own AI we want to be, which is totally fine.

3892.156 - 3894.258 Sam Altman

But I think the answer is just, like, really easy user choice.
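
[Editor's note: the "really easy user choice" Sam describes maps naturally onto three operations: remember, strike a specific item from the record, and forget everything. A hypothetical in-memory sketch; any real product would need encrypted server-side storage and an audit of what is retained:]

```python
class UserMemory:
    """Hypothetical user-controlled memory: everything is inspectable,
    individually deletable, and erasable in one call."""

    def __init__(self) -> None:
        self._items: dict[int, str] = {}
        self._next_id = 0

    def remember(self, fact: str) -> int:
        self._items[self._next_id] = fact
        self._next_id += 1
        return self._next_id - 1

    def strike(self, item_id: int) -> None:
        # "Anything I want stricken from the record, I want to take out."
        self._items.pop(item_id, None)

    def forget_everything(self) -> None:
        self._items.clear()

    def as_context(self) -> str:
        # Prepended to the prompt so the model "gets to know you" over time.
        return "\n".join(f"- {fact}" for fact in self._items.values())

mem = UserMemory()
i = mem.remember("Prefers concise answers")
mem.remember("Working on a robotics project")
mem.strike(i)            # the user removes one memory
print(mem.as_context())  # -> "- Working on a robotics project"
```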

3894.679 - 3915.896 Lex Fridman

But there should be some high level of transparency from a company about that user choice. Because sometimes companies in the past have been kind of shady about it. Like, yeah, it's kind of presumed that we're collecting all your data and we're using it for a good reason, for advertisement and so on. But there's no transparency about the details of that.

3917.397 - 3920.779 Sam Altman

That's totally true. You know, you mentioned earlier that I'm like blocking out the November stuff.

3921.139 - 3921.88 Unknown

I'm just teasing you.

3922.14 - 3947.154 Sam Altman

Well, I mean, I think it was a very traumatic thing, and it did immobilize me for a long period of time. Definitely the hardest work that I've had to do was just, like, to keep working through that period, because I had to, like, you know, try to come back in here and put the pieces together while I was just, like, in sort of shock and pain. And, you know, nobody really cares about that.

3947.174 - 3961.191 Sam Altman

I mean, the team gave me a pass, and I was not working at my normal level. But there was a period where it was really hard to have to do both. But I kind of woke up one morning and I was like, this was a horrible thing that happened to me. I think I could just feel like a victim forever.