All-In with Chamath, Jason, Sacks & Friedberg
OpenAI's $150B conversion, Meta's AR glasses, Blue-collar boom, Risk of nuclear war
Fri, 27 Sep 2024
(0:00) Bestie intros: In Memoriam
(6:43) OpenAI's $150B valuation: bull and bear cases
(24:46) Will AI hurt or help SaaS incumbents?
(40:41) Implications from OpenAI's for-profit conversion
(49:57) Meta's impressive new AR glasses: is this the killer product for the age of AI?
(1:09:05) Blue collar boom: trades are becoming more popular with young people as entry-level tech jobs dry up
(1:20:55) Risk of nuclear war increasing

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://www.reuters.com/technology/artificial-intelligence/openais-stunning-150-billionvaluation-hinges-upending-corporate-structure-2024-09-14/
https://www.bloomberg.com/news/articles/2024-09-25/openai-cto-mira-murati-says-she-will-leave-the-company
https://x.com/chiefaioffice
https://openai.com/our-structure
https://x.com/unusual_whales/status/1658664383717978112
https://x.com/elonmusk/status/1839121268521492975
https://www.politico.com/news/2024/08/26/zuckerberg-meta-white-house-pressure-00176399
https://appleinsider.com/articles/12/12/28/early-apple-prototypes-by-frog-designs-hartmut-esslinger-featured-in-upcoming-book
https://www.cnbc.com/2024/09/16/the-toolbelt-generation-why-teens-are-losing-faith-in-college.html
https://www.wsj.com/tech/tech-jobs-artificial-intelligence-cce22393
https://layoffs.fyi
https://educationdata.org/college-enrollment-statistics
https://www.bloomberg.com/news/articles/2024-09-20/zelenskiy-to-push-us-for-nato-invite-weapons-guarantees
All right, everybody, let's get the show started here. Jason, why are you wearing a tux? What's going on there?
Oh, well, it's time for a very emotional segment we do here on the all in podcast. I just got to get myself composed for this.
Jason, are you okay?
I'm going to be okay, I think.
It looks like you're fighting back a tear.
Yeah, this is always a tough one. This year, we tragically lost giants in our industry. These individuals bravely honed their craft at OpenAI before departing. Ilya Sutskever, he left us in May. Jan Leike, also left in May. John Schulman tragically left us in August.
Wait, these are all OpenAI employees? Yes.
Barret Zoph left on Wednesday. Bob McGrew also left on Wednesday.
Too soon. Too soon.
And Mira Murati also left us tragically on Wednesday.
We lost Mira too?
Yeah. And Greg Brockman is on extended leave.
The enforcer? He left too?
Thank you for your service. Your memories will live on as training data. And may your memories be a vesting.
Let your winners ride.
Rain Man, David Sacks.
Love you guys. Queen of Quinoa. I'm going all in.
Sorry, guys. Oh, my goodness. All those losses. Wow. That is. Three in one day. Three in one day. My goodness. I thought OpenAI was nothing without its people.
Well, I mean, this is a great. Whoa, we lost somebody. Whoa. What's happening? Wait, what?
This is like the photo in Back to the Future.
Wow, they're just all gone. Wait, oh no, don't worry, he's replacing everybody. Here we go. He's replacing them with the G700, a Bugatti, and I guess Sam's got mountains of cash. So don't worry, he's got a backup plan, Chamath.
Anyway, as an industry and as leaders in the industry, the show sends its regards to Sam and the OpenAI team on their tragic losses, and congratulations on the $150 billion valuation and your 7%. Sam now just cashed in $10 billion apparently. So congratulations to friend of the pod, Sam Altman. Is the round done?
That's all reportedly out of some article, right? That's not like confirmed or anything.
Is all of that done? I mean, it's reportedly allegedly that he's going to have 7% of the company and we can jump right into our first story.
I mean, what I'm saying is, has the money been wired and the docs been signed?
According to reports, this round is contingent on not being a non-profit anymore and sorting that all out.
They have to remove the profit cap and do the C-Corp.
There was some article that reported this, right? None of us have firsthand.
Well, it was Bloomberg. It's not some article. It was Bloomberg, and it got a lot of traction, and it was re-reported by a lot of places, and I don't see anyone disputing it.
So the mainstream media? We trust the mainstream media in this case because it aligns with Sacks' interest.
When we can do a good bit, yeah. That's mine. No, I think that Bloomberg reported it based on, obviously, talks that are ongoing with investors who have committed to this round. And no one's disputing it. Has anyone said it's not true?
This has been speculated for months. The $150 billion valuation raising something in the range of $6 to $7 billion. If you do the math on that, and Bloomberg is correct, that Sam Altman got his 7%. I guess that would be $10 billion.
You can't raise $6 billion without probably meeting with a few dozen firms. And some number of junior people in those few dozen firms are having a conversation or two with reporters. So you can kind of see how it gets out.
All right. And before we get to our first story there about OpenAI, congratulations to Chamath. Let's pull up the photo here. He was a featured guest. On the Alex Jones show. No, sorry. I'm sorry. That would be Joe Rogan. Congratulations on coming to Austin and being on Joe Rogan. What was it like to do a three-hour podcast with Joe Rogan?
It's great. I mean, I loved it. He's really awesome. He's super cool. It's good to do long form stuff like this so that I can actually talk.
Clearly the limitation of this podcast is the other three of us. Finally, you have found a way to make it about yourself.
No, I saw a comment. Somebody commented like, oh, wow, it's like amazing to hear Chamath expand on topics without the constant interruptions by J-Cal.
Also known as moderation.
Someone called me, someone called me Forman, Forman from That '70s Show. That was funny. The amount of trash talking in Rogan's YouTube comments, it's next level. It is. I mean, it is. It is the wild, wild west in terms of the comment section on YouTube.
Yeah. A bunch of comments asking, J-Cal, why do you call him Alex Jones?
Is that because he's... It's just a Texas podcaster who's short and stout, and they look similar. So it's just a... But I mean, it looks like Alex Jones started lifting weights, actually. No, they're both... the same height, and yeah, both have podcasts.
I saw Joe Rogan 25 years ago doing stand-up. I have a photo with him at the club. It was like a small club in San Francisco and we hung out with him afterwards. He was just like a nobody back in the day. He was like a stand-up guy, right? Now he's a media uber star.
Well, you have to go back pretty far for Joe Rogan to be a nobody. I mean, he had a TV show for a long time.
Two of them, in fact.
He was more like a stand-up comic for a while. He was a stand-up comic. Stand-up comic, yeah.
Fear Factor, that's right.
But didn't he also do Survivor or one of those? And then the UFC. I mean, this guy's got four distinct careers.
I feel like that's where he blew up. UFC, yeah.
Yeah. Well, I mean, I think he got the UFC out of Fear Factor and being a UFC fighter and a comedian. And there's like a famous story where like, Dana White was pursuing him. And he was like, I don't know. And then Dana White's like, I'll send a plane for you. You can bring your friends. He's like, okay, fine, I'll do it. He did it for free.
And then Dana White pursued him heavily to become the voice of the UFC. And yeah, obviously, it's grown tremendously. And it's worth billions of dollars. Okay.
How is OpenAI worth $150 billion? Can anyone...
Well, why don't we get into the topic?
Should we make the bull case and the bear case?
All right. OpenAI, as we were just joking in the opening segment, is trying to convert into a for-profit benefit corporation. That's a B Corp. It just means, we'll explain B Corp later.
Sam Altman is reportedly... I thought they're converting to a C-Corp, no?
It's the same thing.
B-Corp doesn't really mean anything.
A benefit corporation is a C-Corporation variant that is not a non-profit, but the board of directors, Sacks, is required not only to be a fiduciary for all shareholders, but also for the stated mission of the company. That's my understanding of a B Corp, am I right? Freeberg?
External stakeholders, yeah. So like the environment or society or whatever. But from all other kind of legal tax factors, it's the same as a C Corp.
And it's a way to, I guess, signal to investors, the market, employees, that you care about something more than just profit. So famous, most famous B Corp, I think, is Toms. Is that the shoe company, Toms? That's a famous B Corp. Somebody will look it up here.
Patagonia.
Patagonia. Yeah, that falls into that category. So for profit with a mission. Reuters has cited anonymous sources close to the company, that the plan is still being hashed out with lawyers and shareholders and the timeline isn't certain. But what's being discussed is that the nonprofit will continue to exist as a minority shareholder in the new company.
How much of a minority shareholder, I guess, the devil is in the details there. Do they own 1% or 49%? The much-discussed, Friedberg, 100x profit cap for investors will be removed. That means investors like Vinod, friend of the pod, and Reid Hoffman, also friend of the pod, could see a 100x turn into 1,000x or more.
According to the Bloomberg report, Sam Waltman's going to get his equity finally 7%. That would put him at around $10.5 billion, if this is all true. And OpenAI could be valued as high as $150 billion. We'll get into all the shenanigans. But let's start with your question, Freeberg. And since you asked it, I'm going to boomerang it back to you. Make the bull case for $150 billion valuation.
The bull case would be that the moat in the business with respect to model performance and infrastructure gets extended with the large amount of capital that they're raising. They aggressively deploy it. They are very strategic and tactical with respect to how they deploy that infrastructure.
to continue to improve model performance and, as a result, continue to extend their advantage in both consumer and enterprise applications, the API tools and so on that they offer. And so they can maintain both kind of model and application performance leads that they have today. Across the board, I would say, like the o1 model,
Their voice application, Sora has not been released publicly, but if it is, and it looks like what it's been demoed to be, it's certainly ahead of the pack. So there's a lot of aspects of OpenAI today that kind of make them a leader.
And if they can deploy infrastructure to maintain that lead and not let Google, Microsoft, Amazon, and others catch up, then their ability to use that capital wisely keeps them ahead. And ultimately, as we all know, there's a multi-trillion dollar market to capture here, making lots of verticals, lots of applications, lots of products. So they could become a true kind of global player here.
Plus the extension into computing, which I'm excited to talk about later when we get into this computing stuff.
Sacks, here's a chart of OpenAI's revenue growth that has been pieced together from various sources at various times. But you'll see here they are reportedly, as of June of 2024, on a $3.4 billion run rate for this year, after hitting $2 billion in '23, $1.3 billion in October of '23. And then back in 2022, it's reported they only had $28 million in revenue.
So this is a pretty big streak here in terms of revenue growth. I would put it at 50 times top line revenue, $150 billion valuation. You want to give us the bear case, maybe, or the bull case?
Well, so the whisper numbers I heard was that their revenue run rate for this year was in the $4 to $6 billion range, which is a little higher than that. So you're right, if it's really more like 3.4, this valuation is about 50 times current revenue. But if it's more like 5 billion, then it's only 30 times. And if it's growing 100% year over year, it's only 15 times next year.
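To make the multiple math in this exchange concrete, here is a quick back-of-the-envelope sketch; the run-rate figures are the reported and "whisper" numbers from the conversation, not confirmed financials, and the 100% growth assumption is the hypothetical mentioned above:

```python
# Back-of-the-envelope valuation multiples from the discussion (illustrative only).
valuation = 150e9  # reported $150B valuation

for run_rate in (3.4e9, 5.0e9):  # reported run rate vs. the "whisper" number
    current_multiple = valuation / run_rate
    forward_multiple = valuation / (run_rate * 2)  # assumes ~100% YoY growth
    print(f"${run_rate / 1e9:.1f}B run rate -> {current_multiple:.0f}x current, "
          f"{forward_multiple:.0f}x forward ARR")
```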
So depending what the numbers actually are, the $150 billion valuation could be warranted. I don't think 15 times forward ARR is a high valuation for a company that has this kind of strategic opportunity. I think it all comes down to the durability of its comparative advantage here. I think there's no question that OpenAI is the leader of the pack. It has the most advanced AI models.
It's got the best developer ecosystem, the best APIs. It keeps rolling out new products. And the question is just how durable that
advantage is. Is there really a moat to any of this? For example, Meta just announced Llama 3.2, which can do voice, and this is roughly at the same time that OpenAI just released its voice API. So the open-source ecosystem is kind of hot on OpenAI's heels. The large companies, Google, Microsoft, and so forth, they're hot on their heels too, although it seems like they're further behind where Meta is.
And the question is just, can OpenAI maintain its lead? Can it consolidate its lead? Can it develop some moats? If so, it's on track to be the next trillion dollar big tech company. But if not, it could be eroded and you could see the value of OpenAI get commoditized. And we'll look back on it as kind of a cautionary tale.
Okay, Chamath, do us a favor here. If there is a bear case, what is it?
Okay, let's steel-man the bear case. Yes, that's what I'm asking, please. So one would just be on the fundamental technology itself. And I think the version of that story would go that the underlying frameworks that people are using to make these models great are well described and available in open source.
On top of that, there are at least two viable open source models that are as good or better at any point in time than open AI. So what that would mean is that the value of those models, the economic value basically goes to zero and it's a consumer surplus for the people that use it. So that's very hard theoretically to monetize.
I think the second part of the bear case would be that specifically meta becomes much more aggressive in inserting meta AI into all of the critical apps that they control, because those apps really are the front door to billions of people on a daily basis.
So that would mean WhatsApp, Instagram, Messenger, the Facebook app, and Threads gets refactored in a way where instead of leaving that application to go to a chat GPT-like app, you would just stay in the app. And then the companion to that would be that Google also does the same thing with their version in front of search.
So those two big front doors to the internet become much more aggressive in giving you a reason to not have to go to ChatGPT because A, their answers are just as good, and B, they're right there in a few less clicks for you. So that would be the second piece. The third piece is that all of these models basically run out of viable data to differentiate themselves.
And it basically becomes a race around synthetic information and synthetic data, which is a cost problem. Meaning if you're going to invent synthetic data, you're going to have to spend money to do it. And the large companies, Facebook, Microsoft, Amazon, Google, Apple, have effectively infinite money compared to any startup. Hmm.
And then the fourth, which is the most quizzical one, is what does the human capital thing tell you about what's going on? It reads a little bit like a telenovela. I have not in my time in Silicon Valley ever seen a company that's supposedly on such a straight line to a rocket ship have so much high-level churn. But I've also never seen a company have this much liquidity.
And so how are people deciding to leave if they think it's going to be a trillion-dollar company? And why, when things are just starting to cook, would you leave if you are technically enamored with what you're building? So if you had to construct the bear case, I think those would be the four things. Open source, front door competition.
The move to synthetic data and all of the executive turnover would be sort of why you would say maybe there's a fire where there's all this smoke.
Okay, I think this is very well put. And I have been using... ChatGPT and Claude and Gemini exclusively. I stopped using Google Search. And I also stopped, Sacks, asking people on my team to do stuff before I asked ChatGPT to do it, specifically, Freeberg, the o1 version. And the o1 version is distinctly different. Have you gentlemen been using o1 like on a daily basis?
Yes.
Okay. So we can have a really interesting conversation here. I did something on my other podcast, This Week in Startups, that I'll show you right now. That was crazy yesterday.
o1 is a game changer. Yes. It's the first real chain-of-thought production system that I think we've seen.
Are you using o1-preview or o1-mini?
I am using o1-preview. Now let me show you what I did here, just so the audience can level set. If you're not watching us, go to YouTube and type in All-In, and you can watch us; we do video here. So I was analyzing, you know, just some early-stage deals and cap tables, and I put in here, hey, a startup just raised some money at this valuation.
Here's what the friends and family invested, the accelerator, the seed investor, etc. In other words, like the history, the investment history in a company. What o1 does is distinctly different than the previous versions. And the previous version, I felt, was three to six months ahead of competitors. This is a year ahead of competitors.
And so here, Chamath, if you look, it said it thought for 77 seconds. And if you click the down arrow, Sacks, what you'll see is it gives you an idea of what its rationale is for interpreting and what secondary queries it's doing
in order to give the answer. This is called chain of thought. Right. And this is the underlying mega model that sits on top of the LLMs. And the mega model, effectively, the chain of thought approach is the model asks itself the question, how should I answer this question? Right. And then it comes up with an answer.
And then it says, now, based on that, what are the steps I should take to answer the question? So the model keeps asking itself questions related to the structure of the question that you ask. And then it comes up with a series of steps that it can then call the LLM to do to fill in the blanks, link them all together and come up with the answer.
It's the same way that a human train of thought works. And it really is the kind of, ultimate evolution of what a lot of people have said these systems need to become, which is a much more, call it intuitive approach to answering questions rather than just predictive text based on the single statement you made. And it really is changing the game and everyone is going to chase this and follow this.
It is the new paradigm for how these AI kind of systems will work.
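To make the chain-of-thought loop Freeberg is describing concrete, here is a minimal sketch. It is not OpenAI's actual o1 implementation; ask_llm stands in for any text-completion call the caller supplies:

```python
from typing import Callable

def chain_of_thought_answer(question: str, ask_llm: Callable[[str], str]) -> str:
    """Toy chain-of-thought loop: plan how to answer, work each step, then
    synthesize a final answer. `ask_llm` is any prompt -> text function."""
    # 1. The model first asks itself how it should answer the question.
    plan = ask_llm(f"List the steps needed to answer this question, one per line:\n{question}")

    # 2. It works through each planned step, carrying earlier results forward.
    notes: list[str] = []
    for step in (s.strip() for s in plan.splitlines() if s.strip()):
        work = "\n".join(notes)
        result = ask_llm(
            f"Question: {question}\nStep: {step}\nPrevious work:\n{work}\n"
            "Carry out this step and report the result."
        )
        notes.append(f"{step} -> {result}")

    # 3. It combines the intermediate work into a single final answer.
    return ask_llm(
        f"Question: {question}\nWork so far:\n" + "\n".join(notes) + "\nGive the final answer."
    )
```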
And by the way, what this did was what prompt engineers were doing, or prompt engineering websites were doing, which was trying to help you construct your question. And so if you look at this one, it says: listing disparities, I'll compile a cap table with investments, valuations, building the cap table, assessing the share valuation, breaking down ownership,
etc., evaluating the terms, and then it checks its work a bit, it weighs investment options. And you can see this fired off like two dozen different queries to, as Freeberg correctly pointed out, you know, build this chain. And it got incredible answers, explained the formulas. So it's thinking about what your next question would be.
And when I shared this with my team, it was like a super game changer. Sacks, you had some thoughts here.
Well, yeah, I mean, this is pretty impressive. And just to build on what Freeberg was saying about chain of thought, where this all leads is to agents, where you can actually tell the AI to do work for you, you give it an objective, it can break the objective down into tasks, and then it can work each of those tasks.
And OpenAI, at a recent meeting with investors, said that PhD-level reasoning was next on its roadmap, and then agents weren't far behind that. They've now released at least the preview of the PhD-level reasoning with this o1 model. So I think we can expect an announcement pretty soon about agents.
Yeah, and so... And if you think about business value, we think a lot about this as, like, where's the SaaS opportunity in all this, the software-as-a-service opportunity? It's going to be in agents. I think we'll ultimately look back on these sort of chat models as a little bit of a parlor trick compared to what agents are going to do in the workplace.
If you've ever been to a call center or an operations center, they're also called service factories. It's assembly lines of people doing very complicated knowledge work. But ultimately, you can unravel exactly what the chain is there, the chain of thought that goes into their decisions. It's very complicated, and that's why you have to have humans doing it. But you could imagine that...
Once system integrators or enterprise SaaS apps go into these places, go into these companies, they integrate the data, and then they map out the workflow. You could replace a lot of these steps in the workflow with agents.
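A similarly hedged sketch of the agent pattern Sacks describes: an objective gets broken into tasks, and each task is worked either by a tool (for example, a CRM lookup) or by the model itself. Again, ask_llm and the tools dictionary are caller-supplied placeholders, not any specific vendor's agent API:

```python
from typing import Callable

def run_agent(objective: str, ask_llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]]) -> list[str]:
    """Toy agent loop: decompose an objective into tasks, then work each task,
    routing to a named tool when one matches and to the model otherwise."""
    task_list = ask_llm(f"Break this objective into short tasks, one per line:\n{objective}")
    results = []
    for task in (t.strip() for t in task_list.splitlines() if t.strip()):
        # Pick a tool whose name appears in the task text, if any.
        tool = next((fn for name, fn in tools.items() if name in task.lower()), None)
        output = tool(task) if tool else ask_llm(f"Do this task and report the result:\n{task}")
        results.append(f"{task}: {output}")
    return results
```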
By the way, it's not just call centers. I had a conversation with, I'm on the board of a company with the CEO the other day. And he was like, well, we're gonna hire an analyst that's gonna sit between our kind of retail sales operations and figure out what's working to drive marketing decisions. And I'm like, no, you're not.
Like, I really think that that would be a mistake because today you can use o1 and describe, just feed it the data and describe the analysis you wanna get out of that data. And within a few minutes, and I've now done this probably a dozen times in the last week with different projects internally at my company,
it gives you the entire answer that an analyst would have taken days to put together for you. And if you think about what an analyst's job has been historically is they take data and then they manipulate it.
And the big evolution in software over the last decade and a half has been tools that give that analyst leverage to do that data manipulation more quickly, like Tableau and R and all sorts of different toolkits that are out there. But now you don't even need the analyst because the analyst is the chain of thought. It's the prompting from the model.
And it's completely going to change how knowledge work is done. Everyone that owns a function no longer needs an analyst. The analyst is the model that's sitting on the computer in front of you right now. And you tell it what you want. And not days later, but minutes later, you get your answer. It's completely revolutionary in...
ad hoc knowledge work as well as kind of this repetitive structured knowledge work.
This is such a good point, Freeberg. The ad hoc piece of it, when we're processing 20,000 applications for funding a year, we do 100 plus meetings a week. The analysts on our team are now putting in the transcripts and key questions about markets, and they are getting so smart so fast.
that, you know, when somebody comes to them with a marketplace in diamonds, their understanding of the diamond marketplace becomes so rich so fast that we can evaluate companies faster. We're also seeing, Chamath,
before we call our lawyers, when we have a legal question about a document, we start putting in, you know, let's say the standard note template or the standard SAFE template, we put in the new one. And there's a really cool project by Google called NotebookLM, where you can put in multiple documents, and you can start asking questions.
So imagine you take every single legal document, Sacks, that Yammer had when you had Chamath as an investor, I'm not sure if he was on the board. And you can start asking questions about the documents. And we have had people make changes to these documents, and it immediately finds and explains them.
And so everybody's just getting so goddamn smart, so fast, using these tools, that I insisted that every person on the team, when they hit control-tab, it opens a ChatGPT window with o1. And we burned out our credits immediately. It stopped us. It said, you have to stop using it for the rest of the month. Chamath, your thoughts on this?
We're seeing it in real time at 8090. What I'll tell you is what Sacks said is totally right. There's so many companies that have very complicated processes that are a combination of well-trained and well-meaning people and bad software. And what I mean by bad software is that
Some other third party came in, listened to what your business process was, and then wrote this clunky deterministic code, usually on top of some system of record, charged you tens or hundreds of millions of dollars for it, and then left and will support it only if you keep paying them millions of dollars a year.
That whole thing is so nuts because the ability for people to do work, I think, has been very much constrained. And it's constrained by people trying to do the right thing using really, really terrible software. And all of that will go away. The radical idea that I would put out there is I think that systems of record no longer exist because they don't need to.
And the reason is because all you have is data and you have a pipeline of information. Can you level set and just explain to people what system of record is? So inside of a company, you'll have a handful of systems that people would say are the single source of truth. They're the things that are used for reporting compliance. An example would be for your general ledger.
So to record your revenues, you'd use NetSuite or you'd use Oracle GL or you'd use Workday Financials. then you'd have a different system of record for all of your revenue generating activities. So who are all of the people you sell to? How are sales going? What is the pipeline? So there's companies like Salesforce or HubSpot, SugarCRM.
Then there's a system of record for all the employees that work for you, all the benefits they have, what is their salary? This is HRIS. So the point is that
The software economy over the last 20 years, and this is trillions of dollars of market cap and hundreds of billions of revenue, has been built on this premise that we will create the system of record, you will build apps on top of the system of record, and the knowledge workers will come in and that's how they will get work done. And I think that Sacks is right.
This totally flips that on its head. Instead, what will happen is people will provision an agent and roughly direct what they want the outcome to be. And they'll be process independent. They won't care how they do it. They just want the answer. So I think two things happen. The obvious thing that happens in that world is systems of record lose a grip
on the vault that they had in terms of the data that runs a company. You don't necessarily need it in the same reliance and primacy that you did five and 10 years ago. That'll have an impact to the software economy.
And the second thing that I think is even more important than that is that then the atomic size of companies changes because each company will get much more leverage from using software and few people versus lots of people with a few pieces of software. And so that inversion, I think, creates tremendous potential for operating leverage.
All right. Your thoughts, Sacks. You operate in the SaaS space with systems of record, investing in these types of companies. Give us your take.
Well, it's interesting. We were having a version of this conversation last week on the pod, and I started getting texts from Benioff as he was listening to it. And then he called me, and I think he got a little bit triggered by the idea that systems of record like Salesforce are going to be obsolete in this new AI era. And he made a very compelling case to me about why that wouldn't happen.
Which is? Well, first of all, I think AI models are predictive. I mean, at the end of the day, they're predicting the next set of texts and so forth. And when it comes to like your employee list or your customer list, You just want to have a source of truth. You don't want it to be 98% accurate. You just want it to be 100% accurate.
You want to know if the federal government asks you for the tax ID numbers of your employees, you just want to be able to give it to them. If Wall Street analysts ask you for your customer list and what the gap revenue is, you just want to be able to provide that. You don't want AI models figuring it out. So you're still going to need a system of record. Furthermore,
He made the point that you still need databases, you still need enterprise security if you're dealing with enterprises, you still need compliance, you still need sharing models. There's all these aspects, all these things that have been built on top of the database that SaaS companies have been doing for 25 years. And then the final point that I think is compelling is that
Enterprise customers don't want to DIY it, right? They don't want to have to figure out how to put this together. And you can't just hand them an LLM and say, here you go. There's a lot of work that is needed in order to make these models productive.
And so at a minimum, you're going to need system integrators and consultants to come in there, connect, hold on, just connect all the enterprise data to these models, map the workflows. You have to do that now.
How is that different from how this clunky software is sold today? I mean, look, I don't want to take away from the quality of the company that Mark has built and what he's done for the cloud economy. So let's just put that aside. But I wish this is what we could have actually all been on stage and talked about. I told him that. When he was at the summit? I said that.
Because I disagree with basically every premise of those three things. Number one, systems integrators exist today to build apps on top of these things. Why do you think you have companies like Veeva? How can a $20 billion plus company get built on top of Salesforce? It's because it doesn't do what it's meant to do. That's why.
In fairness, app stores are a great way to allow people to build on your platform and cover those niche cases.
The point I'm trying to make is that's no different than the economy that exists today. It's just going to transform to different groups of people, number one.
Well, by the way, he said he's willing to come on the pod and talk about this very issue. But just with you? No, no, no, no. He'll come on the pod and discuss whether AI makes SaaS obsolete. A lot of people are asking that question.
Let's talk about it next year at the summit.
Can you talk about his philanthropy first? Okay, let's get back to focus here. Let's get focused, everybody.
Love you, Mark. Who's coming to Dreamforce? Raise your hand. I want to make another point. The second point is that when you have agents, I think that we are overestimating what a system of record is. David, what you talked about is actually just an encrypted file, or it's a bunch of rows in some database, or it's in some data lake somewhere.
You don't need to spend tens or hundreds of millions of dollars to wrap your revenue in something that says it's a system of record. You don't need that actually. You can just pipe that stuff directly from Stripe into Snowflake and you can just transform it and do what you will with it and then report it.
You could do that today. It's just that- That's an interesting point.
through steak dinners and golf outings and all this stuff, we've sold CIOs this idea that you need to wrap it in something called a system of record. And all I'm saying is when you confront the total cost of that versus what the alternative that is clearly going to happen in the next five or 10 years, irrespective of whether any of us build it or not, It'll be deflationary.
You just won't be able to justify it because it's going to cost a fraction of the price.
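As a rough illustration of the "pipe it straight from Stripe into Snowflake" idea above, here is what that direct pipeline can look like. The table name, columns, and credentials are placeholders, and a production version would add batching, retries, and schema management:

```python
import os
import stripe
import snowflake.connector

# Pull recent charges from Stripe and land them in a Snowflake table.
# Credentials, table name, and columns here are placeholders.
stripe.api_key = os.environ["STRIPE_API_KEY"]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="FINANCE",
    schema="RAW",
)
cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS STRIPE_CHARGES "
    "(ID STRING, AMOUNT_CENTS NUMBER, CURRENCY STRING, CREATED_TS NUMBER, STATUS STRING)"
)

# Iterate over all charges (auto-pagination) and insert the fields we care about.
for charge in stripe.Charge.list(limit=100).auto_paging_iter():
    cur.execute(
        "INSERT INTO STRIPE_CHARGES VALUES (%s, %s, %s, %s, %s)",
        (charge.id, charge.amount, charge.currency, charge.created, charge.status),
    )

conn.commit()
cur.close()
conn.close()
```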
There's probably also an aspect of this that we can't predict what is going to work with respect to data structure. So right now, all of... all of the tooling for AI is on the front end.
And we haven't yet unleashed AI on the back end, which is if you told the AI, here's all the data ingest I'm going to be doing from all these different points in my business, figure out what you want to do with all that data. The AI will eventually come up with its own data structure and data system. No, that's happening. That will look nothing like... No, no, that's already happening. Right.
And so that's nothing like what we have... today, in the same vein that we don't understand how the translation works in an LLM, we don't understand how a lot of the function works, a lot of the data structure and data architecture, we won't understand clearly, because it's going to be obfuscated by the model driving the development.
There are open-source agentic frameworks that already do, Freeberg, what you're saying. So it's not true that it's not been done. It's already been done. Yeah, sure.
So maybe it's being done, right.
It hasn't been fully implemented to replace the system of record. There are companies, I'll give you an example of one, like Mechanical Orchard. They'll go into the most gnarliest of environments. And what they will do is they will launch these agents that observe, it's sort of what I told you guys before, the IO stream of these apps and then reconstruct everything in the middle automatically.
I don't understand why we think that there's a world where customer quality and NPS would not go sky high for a company that has some old legacy Fortran system, and now they can just pay Mechanical Orchard a few million bucks and they'll just replace it in a matter of months. It's going to happen.
Right. Yeah. That's the very interesting piece for me is I'm, you know, watching startups, you know, working on this, the AI first ones, I think are going to come to it with a totally different cost structure. The idea of paying for seats. And I mean, some of these seats are 5,000 per person per year.
You nailed it a year ago when you were like, Oh, you mentioned some company that had like flat pricing at first, by the way, when you said that, I thought this is nuts, but you're right. It actually makes a ton of sense because if you have a fixed group of people who can use this tooling to basically effectively be as productive as a company that's 10 times as big as you,
You can afford to flat price your software because you can just work backwards from what margin structure you want, and it's still meaningfully cheaper than any other alternative.
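One way to see the flat-pricing logic Chamath is describing, working backwards from a target margin instead of charging per seat. Every number here is invented purely for illustration:

```python
# Hypothetical comparison of per-seat pricing vs. a flat price derived from
# a target gross margin. All figures are made up for illustration.
seats = 200
per_seat_price = 5_000                  # $/seat/year, high-end SaaS territory
per_seat_total = seats * per_seat_price

serving_cost = 120_000                  # estimated annual cost to serve the account
target_margin = 0.80                    # desired gross margin
flat_price = serving_cost / (1 - target_margin)

print(f"Per-seat bill:   ${per_seat_total:,.0f}/yr")
print(f"Flat-price bill: ${flat_price:,.0f}/yr at {target_margin:.0%} gross margin")
# The flat price doesn't change as more people (or agents) use the product,
# which is the point: price tracks cost structure rather than headcount.
```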
A lot of startups now are doing consumption-based pricing. So they're saying, you know, how many... How many sales calls are you doing? How many are we analyzing as opposed to how many sales executives do you have? Because when you have agents, as we're talking about, those agents are going to do a lot of the work. So we're going to see the number of people working at companies become fixed.
And I think the static team size that we're seeing at a lot of large companies is only going to continue. It's going to be down and to the right. And if you think you're going to get a high-paying job at a big tech company, And you have to beat the agent. You're going to have to beat the maestro who has five agents working for them. I think this is going to be a completely different world.
I want to get back to OpenAI with a couple of other pieces.
So let's wrap this up so we can get to the next thing. Yes, please.
Last word for you. Last word.
So look, I think that on the whole, I agree with Benioff here that there's more net new opportunity for AI companies, whether they be startups or you know, existing big companies like Salesforce that are trying to do AI, then there is disruption. I think there will be some disruption. It's very hard for us to see exactly what AI is going to look like in five or 10 years.
So I don't want to discount the possibility that there will be some disruption of existing players. But I think on the whole, there's more net new opportunity. For example, the most highly valued public software company right now in terms of ARR multiple is Palantir. And I think that's largely because the market perceives Palantir as having a big AI opportunity. What is Palantir's approach?
The first thing Palantir does when they go into a customer is they integrate with all of its systems. And they're dealing with the largest enterprises. They're dealing with the government, the Pentagon, Department of Defense. The first thing they do is go in, and integrate with all of these legacy systems. And they collect all of the data in one place. They call it creating a digital twin.
And once all the data is in one place with the right permissions and safeguards, now analysts can start working it. And that was their historical value proposition. But in addition, AI can now start working that problem. So anything that the analysts could work, now AI is going to be able to work. And so they're in an ideal position to master these new AI workflows.
So what is the point I'm making? It's just that you can't just throw an LLM at these large enterprises. You have to go in there and integrate with the existing systems. It's not about ripping out the existing systems because that's just a lot of headaches that nobody needs. It's generally an easier approach just to collect the data.
Except when the renewal comes. What happens when you have to spend a billion dollars on something? And then you're going to renegotiate. Are you going to spend a billion dollars again five years from now? It just doesn't seem very likely. There's going to be a lot of hardcore negotiations going on, Chamath.
People are going to ask for 20% off, 50% off, and people are going to have to be more competitive. That's all. I suspect Palantir's go-to-market, when they start to really scale, they'll be able to underprice a bunch of these other alternatives. And so I think that when you look at the...
impacts and pricing that all of these open source and closed source model companies have now introduced in terms of the price per token. What we've seen is just a massive step function lower, right? So it is incredibly deflationary.
So the things that sit on top are going to get priced as a function of that cost, which means it will be an order of magnitude cheaper than the stuff that it replaces, which means that a company would almost have to purposely want to keep paying tens of millions of dollars when they don't have to. They would need to make that as an explicit decision.
And I think that very few companies will be in a position to be that cavalier in five and 10 years. So you're either going to rebase the revenues of a bunch of these existing deterministic companies, or you're going to create an entire economy of new ones that have a fraction of the revenues today, but a very different profitability profile.
I just think that that's the cycle. Whenever you're dealing with a disruption as big as this current one, I think it's always tempting to think in terms of the existing pie getting disrupted and shrunk as opposed to the pie getting so big with new use cases that on the whole, the ecosystem benefits. No, no, no, I agree with that. I suspect that's what's going to happen.
No, I agree with that. My only point is that the pie can get bigger while the slices get much, much smaller.
Well, I mean, right between the two of you, I think is the truth, because what's happening is if you look at investing, it's very hard to get into these late stage companies because they don't need as much capital. Because to your point, Shamath, they when they do hit profitability with 10 or 20 people, the revenue per employee is going way up.
If you look at Google, Uber, Airbnb, and Facebook/Meta, they have the same number or fewer employees than they did three years ago, but they're all growing at 20 to 30% a year, which means in two to three years, each of those companies has doubled revenue per employee.
So that concept of more efficiency, and then that trickles down, Sacks, to the startup investing space where you and I are. I'm a pre-seed/seed investor, you're a seed/Series A investor. If you don't get in in those three or four rounds, I think it's going to be really expensive, and the companies are not going to need as much money downstream.
Speaking of investing in late-stage companies, we never closed the loop on the whole open AI thing. What did we think of the fact that they're completely changing the structure of this company? They're changing it into a corporation from the nonprofit, and Sam's now getting a $10 billion stock package.
He's not in it for the money. He has health insurance, Sacks. Yeah. Is it Congress?
I don't need money.
I've got enough money.
I just needed the health insurance. Pull the clip up, Nick. Pull the clip up. I mean, it's the funniest clip ever.
This is the Rogan clip? No, this is Congress. Watch this. This is him in Congress.
You make a lot of money, do you?
No.
I'm paid enough for health insurance. I have no equity in OpenAI.
Really? That's interesting. You need a lawyer. I need a what? You need a lawyer or an agent. I'm doing this because I love it. That's the greatest. Look at me. Don't believe him. Can I ask you a question there, Sax?
Are you doing this venture capital where you put the money in the startups because you love it or because you're looking to get another home in a coastal city and put more jet fuel in that plane? I need an answer for the people of the sovereign state of Mississippi.
No, Louisiana. That's Senator John Kennedy from Louisiana. He's a very smart guy, actually, with a lot of you know, sort of common folk wisdom. He got that simple talk. Yeah, exactly.
He's hysterical, actually.
Yeah, he's very funny, but... He's very funny. If you listen to him, he knows how to slice and dice his opponents.
You might need to get yourself one of them fancy agents from Hollywood or an attorney from the Wilson-Sonsini Corporation to renegotiate your contract, son, because you're worth a lot more from what I can gather in your performance today than just some simple health care. And I hope you took the Blue Cross Blue Shield.
I would like to make two semi-serious observations. Let's go. Please get us back on track. I think the first is that there's going to be a lot of people that are looking at the architecture of this conversion because if it passes muster, everybody should do it. Think about this model. Let's just say that you're in a market and you start as a nonprofit.
What that really means is you pay no income tax. So for a long time, you put out a little bit of the percentage of whatever you earn, but you can now outspend and outcompete all of your competitors. And then once you win, you flip to a corporation. That's a great hack on the tax code.
And you let the donators get first bite of the apple if you do convert. Because remember, Vinod and Hoffman got all their shares on the conversion.
The other way will also work as well, because there's nothing that says you can't go in the other direction. So let's assume that you're already a for-profit company, but you're in a space with a bunch of competitors. Can't you just do this conversion in reverse, become a non-profit, Again, you pay no income tax, so now you are economically advantaged relative to your competitors.
And then when they wither and die or you can outspend them, you flip back to a for-profit again. I think the point is that there's a lot of people that are going to watch this closely. And if it's legal and it's allowed, I just don't understand why everybody wouldn't do this. Yeah, I mean, that was Elon's point as well.
The second thing, which is just more of like cultural observation is, and you brought up Elon, my comment to you guys yesterday, and I'll just make the comment today. It's a little bit disheartening to see a situation where Elon built something absolutely incredible, defied every expectation. And then had the justice system take $55 billion away from him.
His payment package you're referring to at Tesla. His payment package, the options at Tesla.
Delaware.
And then on the other side, Sam's going to pull something like this off, definitely pushing the boundaries, and he's going to make $10 billion. And I just think when you put those two things in contrast... That's not how the system should probably work, I think is what most people would say.
Freeberg, you've been a little quiet here. Any thoughts on the transaction, the nonprofit to for-profit? If you were looking at that in what you're doing, do you see a way that Ohalo could take a nonprofit status, raise a bunch of money through donations for virtuous work, then license those patents to your for-profit? Would that be advantageous to you?
And do you think this could become... I have absolutely zero idea. I have no idea what they're doing. I don't know how they're converting a nonprofit to a for-profit. None of us have the details on this. There may be significant tax implications, payments they need to make. I don't think any of us know. I certainly don't. I don't know if there's actually a real benefit here.
If there is, I'm sure everyone would do it. No one's doing it. So there's probably a reason why it's difficult. I don't know.
It's been done a couple times. The Mozilla Foundation did it. We talked about that in a previous episode. Sacks, you want to wrap us up here on the corporate structure? Any final thoughts? I mean, Elon put in $50 million. I think he gets the same as Sam. Don't you think he should just chip off 7% for Elon?
Not that Elon needs the money where he's asking, but I'm just wondering why Elon doesn't get the 7% and get, or, you know, if they're going to redo this.
Did Elon actually put in $50? Did he put in $50 million?
He put in $50 million is the report, right? In the nonprofit. Yeah. Hoffman put in $10.
Look, I said on a previous show that this organizational chart of OpenAI was ridiculously complicated and they should go clean it up. They should open up the books and straighten everything out.
And I also said that as part of that, they could give Sam Altman a CEO option grant and they should also give Elon some fair compensation for being the seed investor who put in the first $50 million and co-founder. And what you're seeing is, well, they're kind of doing that. They're opening up the books. They're straightening out the corporate structure.
They're giving Sam his option grant, but they didn't do anything for Elon. And I'm not saying this as Elon's friend. I'm just saying that it's not really fair to basically go fix the original situation. You're making it into a for-profit. You're giving everyone shares, but the guy who puts in the original seed capital doesn't get anything. That's ridiculous.
And what they're basically saying to Elon is, if you don't like it, just sue us. I mean, that's basically what they're doing. And I said that they should go clean this up, but they should make it right with everybody. So how do you not make it right with Elon? I haven't talked to him about this, but he reacted on X saying this is really wrong. It appeared to be a surprise to him.
I doubt he knew this was coming. So the company apparently made no effort to make things right with him. And I think that that is a bit ridiculous.
If you're gonna clean this up, if you're gonna change the original purpose of this organization to being a standard for-profit company where the CEO who previously said he wasn't gonna get any compensation is now getting $10 billion of compensation, how do you do that and then not clean it up for the co-founder who put in the first $50 million? That doesn't make sense to me.
And when Reid was on our pod, he said, well, Elon's rich enough. Well, that's not a principled excuse. I mean, does Vinod ever act that way? Does Reid ever act that way? Do they ever say, well, you know, you don't need to do what's fair for me because I'm already rich? That's not a principled answer.
The argument that I heard was that Elon was given the opportunity to invest along with Reid, along with Vinod. And he declined to participate in the for-profit investing side that everyone else participated in.
Reid made that argument, and I think it's the best argument the company has. But let's think about that argument. Maybe Elon was busy that week. Maybe Elon already felt like he had put all the money that he had allocated for something like this into it because he put in a $50 million check, whereas Reid put in $10 million. We don't know what Elon was thinking at that time.
Maybe there was a crisis at Tesla and he was just really busy. The point is Elon shouldn't have been obligated to put in more money into this venture. The fact of the matter is they're refactoring the whole venture. Elon had an expectation when he put in the $50 million that this would be a nonprofit and stay a nonprofit. And they're changing that.
And if they change it, they have to make things right with him. It doesn't really matter whether he had a subsequent opportunity to invest. He wasn't obligated to make that investment. What he had an expectation of is that his $50 million would be used for a philanthropic purpose, and clearly it has not been.
Yeah. And in fairness to Vinod, he bought that incredible beachfront property and donated it to the public trust so we can all surf and have our Halloween party there. So it's all good. Thank you, Vinod, for giving us that incredible beach. I want to talk to you guys about interfaces that came up, Chamath, in your headwinds or your four pack of reasons that
you know, OpenAI, when you steel-man the bear case, could have challenges. Obviously, we're seeing that. And it is emerging that Meta is working on some AR glasses that are really impressive. Additionally, I've installed iOS 18, which has Apple Intelligence that works on the 15 phones and 16 phones. 18 is the iOS. Did any of you install the beta of iOS 18 yet and use Siri?
It's pretty clear with this new one that you're going to be able to talk to Siri as an LLM, like you do in ChatGPT mode, which I think means they will not make themselves dependent on ChatGPT, and they will siphon off half the searches that would have gone to ChatGPT.
So I see that as a serious... Siri's not very good, Jekyll, and you know this because when you were driving me to the airport... We tested it and it didn't work. He tries to execute this joke where he's like, hey, Siri, send Chamath Palihapitiya a message. And it was a very off-color message. I'm not going to say what it is. It was a spicy joke. And then it's like, okay, great.
Sending Linda blah, blah, blah.
He's like, no, stop, stop, stop, stop.
I was like, no, don't send that joke to her. It hallucinates and almost sends it to some woman in his contact. It would have been really damaging. It's not very good, Jason. It's not very good.
But what I will say is there are features of it where, if you squint a little bit, you will see that Siri is going to be conversational. So when I was talking to it with music and, you know, you can have a conversation with it and do math like you can do with the ChatGPT version. And you have Microsoft Teams doing that with their Copilot. And now Meta is doing it at the top of each one.
So everybody's going to try to intercept the queries and the voice interface. So ChatGPT-4 is now up against Meta, Siri, Apple, and Microsoft for that interface; it's going to be challenging. But let's talk about these Meta glasses here. Meta showed off the AR glasses that Nick will pull up right now. These aren't goggles. Goggles look like ski goggles.
That's what Apple is doing with their Vision Pro. Or when you see the MetaQuest, you know how those work. Those are VR with cameras that will create a version of the world. These are actual chunky sunglasses, like the ones I was wearing earlier when I was doing the bit. So these let you operate in the real world and are supposedly extremely expensive. They made a thousand prototypes.
They were letting a bunch of influencers and folks like Gary Vaynerchuk use them, and they're not ready for primetime. But the way they work, Freeberg, is there's a wristband that will track your fingers and your wrist movement. So you could be in a conversation like we are here on the pod.
And below the desk, you could be you know, moving your arm and hand around to be doing replies to I don't know, incoming messages or whatever it is. What do you think of this AR vision of the world and meta making this progress?
Well, I think it ties in a lot to the AI discussion because I think we're really witnessing this big shift from this big transition in computing, probably the biggest transition since mobile. You know, we moved from mainframes to desktop computers. Everyone had kind of this computer on their desktop, but you used a mouse and a keyboard to control it.
To mobile, where you had a keyboard and clicking and touching on the screen to do things on it. And now to what I would call this kind of ambient computing method. And, you know, I think the big difference is control and response. In directed computing, you're kind of telling the computer what to do. You're controlling it. You're using your mouse or your keyboard to go to this website.
So you type in a website address. Then you click on the thing that you want to click on. And you kind of keep doing a series of work to get the computer to go access the information that you ultimately want to achieve your objective. But with ambient computing, you can more kind of cleanly state your objective without this kind of directive process. You can say, hey, I...
I want to have dinner in New York next Thursday at a Michelin star restaurant at 5.30. Book me something and it's done. And I think that there are kind of five core things that are needed for this to work, both in control and response. It's voice control, gesture control, and eye control are kind of the control pieces that replace, you know, mice and clicking and touching and keyboards.
And then response is audio and kind of integrated visual. which is the idea of the goggles. Voice control works. Have you guys used the OpenAI voice control system lately? I mean, it is really incredible. I had my earphones in and I was like doing this exercise. I was trying to learn something. So I told OpenAI to start quizzing me on this thing. And I just did a 30 minute walk.
And while I was walking, it was asking me quiz questions, and I would answer, and it would tell me I was right or wrong. It was really this incredible dialogue experience. So I think the voice control is there. I don't know if you guys have used Apple Vision Pro, but gesture control is here today. You can do single finger movements with Apple Vision Pro. It triggers actions. And eye control is incredible.
You look at the letters you want to have kind of spelled out or you look at the thing you want to activate and it does it. So all of the control systems for this ambient computing are there.
And then the AI enables this kind of audio response where it speaks to you. And the big breakthrough that's needed, that I don't think we're quite there yet, but maybe Zuck is highlighting that we're almost there, and Apple Vision Pro feels like it's almost there except it's big and bulky and expensive, is integrated visual, where the ambient visual interface is always there and you can kind of engage with it.
So there's this big change. I don't think that mobile handsets are gonna be around in 10 years. I don't think we're gonna have this like phone in our pocket that we're like, pressing buttons on and touching and telling it where on the browser to go to, the browser interface is gonna go away.
I think so much of how computing is done, how we integrate with data in the world and how the computer ultimately fetches that data and does stuff with it for us is gonna completely change to this ambient model.
So I'm pretty excited about this evolution, but I think that what we're seeing with Zuck, what we saw with Apple Vision Pro and all of the OpenAI demos, they all kind of converge on this very incredible shift in computing. that will kind of become this ambient system that exists everywhere all the time.
And I know folks have kind of mentioned this in the past, but I think we're really seeing it kind of all come together now with these five key things.
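To picture what "state your objective" could mean in software terms, here is a minimal, hypothetical sketch in Python. The AmbientAssistant class, the book_restaurant tool, and the fake booking function are all illustrative stand-ins, not any real product's API; the point is just that the user supplies an objective by voice and the system, not the user, does the clicking.

```python
# Hypothetical sketch of "ambient computing": the user states an objective
# ("book dinner"), and the assistant decomposes it into tool calls instead of
# the user clicking through websites. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Objective:
    text: str  # e.g. a spoken request captured by voice control

class AmbientAssistant:
    def __init__(self, tools: dict[str, Callable[..., str]]):
        self.tools = tools  # tools the agent may call on the user's behalf

    def handle(self, objective: Objective) -> str:
        # In a real system an LLM would parse the objective and plan the
        # tool calls; here we hard-code the dinner example from the episode.
        if "dinner" in objective.text.lower():
            confirmation = self.tools["book_restaurant"](
                city="New York", day="Thursday", time="17:30", stars=1
            )
            # The response is audio or a glanceable visual, not a browser page.
            return f"Done. {confirmation}"
        return "Sorry, I don't know how to do that yet."

def fake_booking_api(city: str, day: str, time: str, stars: int) -> str:
    # Stand-in for a reservation-service integration.
    return f"Table at a {stars}-Michelin-star spot in {city}, {day} {time}."

assistant = AmbientAssistant({"book_restaurant": fake_booking_api})
print(assistant.handle(Objective("I want dinner in New York next Thursday at 5:30")))
```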
Chamath, any thoughts on Facebook's progress with AR, and how that might impact computing and interfaces when paired with language models?
I think David's right that there's something that's going to be totally new and unexpected. So I agree with that part of what Friedberg says. I am still not sure that glasses are the perfect form factor to be ubiquitous. When you look at a phone, a phone makes complete sense for literally everybody, right? Man, woman, old, young, every race, every country of the world.
It's such a ubiquitously obvious form factor. But the thing is like that initial form factor was so different than what it replaced. Even if you looked at like flip phones versus that first generation iPhone. So I do think, Friedberg, you're right, that there's like this new way of interacting that is ready to explode onto the scene.
And I think that these guys have done a really good job with these glasses. I mean, like, I give them a lot of credit for sticking with it and iterating through it and getting it to this place. It looks meaningfully better than the Vision Pro, to be totally honest.
But I'm still not convinced that we've explored the best of our creativity in terms of the devices that we want to use with these AI models.
You need some visual interface. I think the question is, where is the visual interface? Is it in the wall?
No, but do you?
Well, when you're asking, I want to watch Chamath on Rogan. I don't just want to hear, I want to see. When I want to visualize stuff, I want to visualize it. I want to look at the food I'm buying online. I want to look at pictures of the restaurant I'm going to go to.
But how much of that time, when you say those things, are you not near some screen that you can just project and broadcast that onto? I mean, I think it's probably... If the use case is I'm walking in the park and I need to watch TV at the same time, I don't think that's a real use case.
I think you're on this one wrong, Chamath, because I saw this revolution in Japan maybe 20 years ago. They got obsessed with augmented reality. There were a ton of startups right as they started getting to the mobile phones. And the use cases were really very compelling. And we're starting to see them now in education.
And when you're at dinner with a bunch of friends, how often does picking up your phone and you know, looking at a message disturb the flow? Well, people will have glasses on, they'll be going for walks, they'll be driving, they'll be at a dinner party, they'll be with their kids.
And you'll have something on like focus mode, you know, whatever the equivalent is in Apple, and a message will come in from your spouse or from your child, but you won't have to take your phone out of your pocket.
And I think once these things weigh a lot less, you're going to have four different ways to interact with your computer: your phone in your pocket, your watch, your AirPods or whatever you have in your ears, and the glasses. And I bet you glasses are going to take like a third of the tasks you do. I mean, what is the point of taking out your phone and watching the Uber come to you?
But seeing that little strip that tells you the Uber is 20 minutes away, 15 minutes away, or what the gate number is. I don't have that anxiety. Well, I don't know if it's anxiety, but I just think it's ease of use.
15 minutes, 10 minutes. That's the definition.
I think it adds up. I think taking your phone out of your pocket 50 times a day.
Those are all useless notifications. The whole thing is to train yourself to realize that it'll come when it comes.
Okay, Sacks, do you have any thoughts on this impressive demo, or the demo that people who've seen it have said is pretty darn compelling?
I think it does look pretty impressive. I mean, you can wear these Meta Orion glasses around and look like a human. I mean, you might look like Eugene Levy, but you'll still look like a semi-normal person. Whereas you can't wear the Apple Vision Pro. I mean, you can't wear that around.
What, they don't look good? You don't like them? Nick, can you please find a picture of Eugene Levy?
I mean, it seems like a major advancement, certainly compared to Apple Vision Pro. I mean, you don't hear about the Apple Vision Pros anymore at all. I mean, those things came and went. It's pretty funny. It seems to me that Meta is executing extremely well. I mean, you had the very successful cost cutting, which Wall Street liked.
Zuck published that letter, which I give him credit for, regretting the censorship that Meta did, which was at the behest of the deep state. They made huge advancements in AI. I don't think they were initially on the cutting edge of that, but they've caught up. And now they're leading the open source movement with Llama 3.2. And now it seems to me that they're ahead on augmented reality.
Ever since Zuck grew out the hair... Don't ever cut the hair. It's like Samson.
It's like Samson.
Based Zuck is the best Zuck.
He does not give a F. I want to be clear: I think these glasses are going to be successful. My only comment is that I think when you look back 25 or 30 years from now and say that was the killer AI device, I don't think it's going to look like something we know today. That's my only point.
And maybe it's going to be this thing that Sam Altman and Jony Ive are baking up that's supposed to be this AI-infused iPhone killer. Maybe it's that thing. I doubt that will be a pair of glasses or a phone or a pin.
If you think about it, take the constraints off: I don't need a keyboard because I'm not gonna be typing stuff. I don't need a normal browser interface. You could see a device come out that's almost smaller than the palm of your hand that gives you enough of the visuals, and all it is is a screen with maybe two buttons on the side. And it's all audio driven.
You put a headset in and you're basically just talking or using gesture or looking at it to kind of describe where you want things to go. And it can create an entirely new computing interface because AI does all of these incredible things with predictive text, with gesture control, with eye control, and with audio control.
And then it can just give you what you want on a screen and all you're getting is a simple interface. So Chamath, you may be right. It might be a big watch or a handheld thing that's much smaller than an iPhone. And just all it is is a screen with nothing.
That really resonates with me, when you talk about voice only, because I think there's a part of social decorum that all of these goggles and glasses violate. And I think we're going to have to decide as a society whether that's going to be okay. And then I think when you go trekking in Nepal, are you going to encounter somebody wearing AR glasses? I think the odds are pretty low.
But you do see people today with a phone. So what do they replace it with? And I think voice as a modality is... I think it's more credible that that could be used by 8 billion people.
I think social fabric's more affected by people staring at their phones all the time. You sit on a bus, you sit at a restaurant, you go to dinner with someone and they're staring at their phone. Like spouses, friends, we all deal with it where you feel like you're not getting attention from the person that you're interfacing with in the real world because they're so connected to the phone.
If we can disconnect the phone, take away this kind of addictive feedback-loop system, but still give you this computing ability in a more ambient way that allows you to remain engaged in the physical world, I think everyone would feel a lot better about it. You could say it.
Sacks hurts your feelings when he's playing chess and not paying attention. Yeah, I'll be playing chess on my AR glasses while pretending to listen to you.
You idiot. Oh, fine. He's buying them. He got version one.
One point I want to just hit on is that the reason why these glasses have a chance of working is because of AI. I mean, Facebook initially made these... That's exactly my point. That's exactly my point. Facebook made these huge investments before AI was a thing. And in a way, I think they've kind of gotten lucky because what AI gives you is voice and audio.
So you can talk to the glasses or whatever the wearable is. It can talk to you. That's the five things. Like perfect natural language. And... Computer vision allows it to understand the world around you. So whatever this device is, it can be a true personal digital assistant in the real world.
And that's the opportunity. If you guys play with Apple Vision Pro, have any of you actually used it to any extent? No, I used it for a day.
or a night when we were playing poker, and I've never used it again since.
Right, which I get. But I do think that it has these tools in it, similar to how the original Macintosh had these incredible graphics editors like MacPaint and all these things that people didn't get addicted to at the time, but they became these tools that completely revolutionized everything in computing later, and fonts and so on.
But this, I think, Apple Vision Pro, has these tools, with gesture control and the keyboard and the eye control; those aspects of that device highlight where this could all go, which is that these systems can kind of be driven without keyboards, without typing, without, like, you know, moving your finger around, without clicking.
I think that's the key observation. I really agree with what you just said. It's this idea that you're liberated from the hunting and pecking and tapping.
You don't need to control the computer anymore. The computer now knows what you want. And then the computer can just go and do the work.
And they can respond. So now this is the behavior change that I don't think we're fully giving enough credit to. So today, part of what Jason talked about, what I called anxiety, is because of the information architecture of these apps. That is totally broken. And the reason why it's broken is when you tell an AI agent, get me the cheapest car right now to go to XYZ place.
It will go and look at Lyft and Uber and whatever. It'll provision the car, and then it'll just tell you when it's coming. And it will break this cycle that people have of having to check these apps for what is useless filler information. And when you strip a lot of that notification traffic away, I think you'll find that people start looking at each other in the face more often.
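A minimal sketch of the "get me the cheapest car" flow being described, assuming hypothetical quote functions for each provider (these are stand-ins, not real Uber or Lyft APIs): the agent compares quotes, books the cheapest, and surfaces a single useful notification instead of a stream of filler updates.

```python
# Hypothetical agent flow for "get me the cheapest car to XYZ": query each
# provider, book the cheapest quote, and notify the user only when it matters.
from dataclasses import dataclass

@dataclass
class Quote:
    provider: str
    price: float
    eta_minutes: int

def quote_uber(destination: str) -> Quote:
    return Quote("Uber", 23.50, 6)   # stand-in data, not a real API call

def quote_lyft(destination: str) -> Quote:
    return Quote("Lyft", 21.75, 9)   # stand-in data, not a real API call

def book_cheapest_ride(destination: str) -> str:
    quotes = [quote_uber(destination), quote_lyft(destination)]
    best = min(quotes, key=lambda q: q.price)
    # The agent "provisions the car" and collapses all the app-checking into
    # one notification instead of making the user poll the apps.
    return (f"{best.provider} booked to {destination}: "
            f"${best.price:.2f}, arriving in {best.eta_minutes} min.")

print(book_cheapest_ride("JFK Terminal 4"))
```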
And I think that that's a net positive. So will Meta sell hundreds of millions of these things? I suspect probably. But all I'm saying is if you look backwards 30 years from now, what is the device that sells in the billions? It's probably not a form factor that we understand today.
I just want to point out, like, the form factor you're seeing now is going to get greatly reduced. These were some of the early Apple prototypes. I don't know if you guys remember these, but Frog Design made these crazy tablets in the 80s that were the eventual inspiration for the iPad, you know, 25 years later, I guess. And so that's the journey we're on here right now.
This clunky, and these are not functional prototypes, obviously.
Look at the Apple Newton, dude. The Apple Newton is like perfect. Exactly, people forget about that. And then it turns out, hey, you throw away the stylus and you got an iPhone, right? And everything gets a million X better.
The other subtle thing that's happening, which I don't think we should sleep on, is that the AirPods are probably going to become much more socially acceptable to wear on a 24 by 7 basis because of this feature that allows it to become a useful hearing aid.
And I think as it starts being worn in more and more social environments, and as the form factor of that shrinks, that's when I really do think we're going to find some very novel use case, which is you know, very unobtrusive. It kind of blends into your own physical makeup as a person without it really sticking out. I think that's when you'll have a really killer feature.
But I think that the AirPods as hearing aids will also add a lot. So Meta's doing a lot. Apple's doing a lot. But I don't think we've seen the super killer hardware device yet.
And there was an interesting waypoint. Microsoft had the first tablets. Here's the Microsoft tablet for those of you watching. That came, you know... I don't know, this was the late 90s or early 2000s, Friedberg, if you remember it. These like incredibly bulky tablets that Bill Gates was bringing to all the events.
99, 2000, that era.
So you get a lot of false starts. They're spending, I think, close to $20 billion a year on this AR/VR stuff.
Anyway, we're definitely on this path to ambient computing. I don't think this whole like, hey, you got to control a computer thing is anything my kids are going to be doing in 20 years.
This is the convergence of like three or four really interesting technological waves. All right, just dovetailing with tech jobs and static team size, there is a report of a blue-collar boom. The "tool belt generation" is what Gen Z is being referred to as. The Wall Street Journal reports, hey, tech jobs have dried up. We're all seeing that.
And according to Indeed, developer jobs are down 30% since February of 2020, pre-COVID, of course. If you look at Layoffs.fyi, you'll see all the, you know, tech jobs that have been eliminated since 2022, over half a million of them; a bunch of things at play here. And the Wall Street Journal notes that entry-level tech workers are getting hit the hardest,
especially all these recent college graduates. And if you look at historical college enrollment, let's pull up that chart, Nick, you can see undergraduate, graduate, and total with the red line. We peaked at 21 million people in either graduate school or undergraduate in 2010, and that's come down to 8.6 million.
At the same time, obviously, in the last 12 years, the population has grown. So on a percentage basis this would be even more dramatic. So what's behind this?
A poll of 1,000 teens this summer found that about half believe a high school degree, trade program, or two-year degree best meets their career needs, and 56% said real-world, on-the-job experience is more valuable than obtaining a college degree, something you've talked about with your own personal experience, Chamath, at Waterloo, doing apprenticeships, essentially.
Your thoughts on Generation Tool Belt?
Such a positive trend. I mean, there's so many reasons why this is good. I'll just list a handful that come to the top of my mind. The first and probably the most important is that it breaks this stranglehold that the university education system has on America's kids.
We have tricked millions and millions of people into getting trillions of dollars in debt on this idea that you're learning something in university that's somehow going to give you economic stability and ideally freedom. And it has turned out for so many people to not be true. It's just so absurd and unfair that that has happened.
So if you can go and get a trade degree and live an economically productive life where you can get married and have kids and take care of your family and do all the things you want to do, that's going to put an enormous amount of pressure on higher ed. Why does it charge so much? What does it give in return? That's one thought.
The second thought, which is much more narrow, Peter Thiel has that famous saying where if you have to put the word science behind it, it's not really a thing. And what we are going to find out is that that was true for a whole bunch of things where people went to school, like political science and social science. But I always thought that computer science would be immune.
But I think he's going to be right about that, too, because you can spend $200,000 or $300,000 getting into debt to get a computer science degree, but you're probably better off learning JavaScript and learning these tools in some kind of a boot camp for far, far less and graduating in a position to make money right away. So those are just two ideas.
I think that it allows us to be a better functioning society. So I am really supportive of this trend.
Sacks, your thoughts on this generation tool belt we're reading about, and, you know, the sort of combination with static team size that we're seeing in technology, companies keeping the number of employees the same or trending down while they grow 30% year over year?
Oh my God, I'm like so sick of this topic of job loss or job disruption. I got in so much trouble last week. You asked a question about whether the upper middle class is going to suffer because they're all going to be put out of work by AI. And I just kind of brushed it off, not because I'm advocating for that, but just because I don't think it's going to happen. Hmm.
This whole thing about job loss is so overdone. There's going to be a lot of job disruption. But in the case of coders, just as an example, I think we can say that coders, depending on who you talk to, are 10%, 20%, 30% more productive as a result of these coding assistant tools. But we still need coders. You can't automate 100% of it. And the world needs so many of them.
The need for software is unlimited. We can't hire enough of them. At Glue, by the way, shout out if you're a coder who is afraid of not being able to get a job, apply for one at Glue. Believe me, we're hiring. I just think that this is so overdone. There's going to be a lot of disruption in the knowledge worker space. Like we talked about the workflow at call centers and service factories.
There's going to be a lot of change. But at the end of the day, I think there's going to be plenty of work. for humans to do. And some of the work will be more in the blue collar space. And I agree with Jamath that this is a good thing. I think there's been perhaps an over emphasis on the idea that the only way to get ahead in life is to get like a fancy degree from one of these universities.
And we've seen that many of the universities, they're just not that great; they're overpriced, you end up graduating with a mountain of debt, and you get a degree that is, you know, maybe even far worse than computer science, one that is completely worthless. So if people learn more vocational skills, if they skip college because they have a proclivity to do something that doesn't need that degree, I think that's a good thing and that's healthy for the economy.
Friedberg, is this just the pendulum having swung too much, and education got too expensive? Spending 200K to make $50,000 a year is distinctly different than our childhoods, or I'm sorry, our adolescence, when we were able to go to college for 10K or 20K a year, graduate with some low tens of thousands in debt if you did take on debt, and then your entry-level job was 50, 60, 70K coming out of college.
What are your thoughts here? Is this a value issue with college?
Well, yeah, I think the market's definitely correcting itself. I think for years, as Chamath said, there was kind of this belief that if you went to college, regardless of the college, there was this outcome where you would make enough money to justify the debt you're taking on. And I think folks have woken up to the fact that that's not reality.
Again, if there was a free market... remember, most people go to college with student loans, and all student loans are funded by the federal government. So the cost of education has ballooned, and the underwriting criteria necessary for a free market to work have been completely destroyed because of the federal spending in the student loan program.
There's no discrimination between one school or another. You can go to Trump University or you can go to Harvard, it doesn't matter, you still get a student loan, even if at the end of the process you don't have a degree that's valuable. And so I think folks are now waking up to this fact and the market is correcting itself, which is good.
I'll also say that I think that there's this premium, amid mass production and industrialization, on the human touch. And what I mean is, if you think about it, hey, you could go to the store and buy a bunch of cheap food off of the store shelves, you could buy a bunch of Hershey's chocolate bars.
Or you can go to a Swiss chocolatier here in downtown San Francisco, pay $20 for a box of handmade chocolates; you'll pay that premium for that better product. Same with clothes: there's this big trend in kind of handmade clothes and high-end luxury goods, bespoke, artisanal, handmade. And similarly, I think that there is a premium in human service, in the partnership with a human.
It's not just about blue collar jobs. It's about having a waiter talk to you and serve you. If you go to a restaurant, instead of having a machine spit out the food to you, there's an experience associated with that that you'll pay a premium for. There's hundreds and hundreds of microbreweries in the United States that in aggregate outsell Budweiser and Miller and even Modelo today.
And that's because they're handcrafted by local people and there's an artisan craftsmanship to them. So while technology and AI are going to dramatically reduce the cost of a lot of things and increase the production and productivity of those things, one of the complementary consequences is that there will be an emerging premium for human service.
And I think that there will be an absolute burgeoning and blossoming in the salaries and the availability and demand for human service in a lot of walks of life. Certainly there's all the work at home, the electricians and the plumbers and so on, but also fitness classes, food, personal service around tutoring and learning and developing oneself.
There's going to be an incredible blossoming, I think, in human service jobs, and they don't need to have a degree in poli sci to be performed. I think that there will be a lot of people that will be very happy in that world.
How do you see the differentiation the person makes, Friedberg, in doing that job versus the agent or the AI or whatever?
Well, these are in-person human jobs. So if I want to do a fitness class, do I want to stare at the Tonal?
This is what I'm asking you, yeah.
I think that there's an aspect of... Look, it's like your Loro Piana. You talk about the story of Loro Piana. Where is the vicuña coming from? How's it made? Who's involved in it? Yes, look, you're... Oh, God. Look at those. Here he goes.
Don't stop, Friedberg.
I could give you truffle flavoring out of a can, but you love the white truffles. You want to go to Italy. You want the storytelling. There's an aspect to it, right? Yes. And I think that there's an aspect of humanity that we pay a premium, that we do and will. Look, Etsy crushes. I don't know how much stuff you guys buy on Etsy. I love buying from Etsy. I love finding handmade stuff on Etsy.
I buy my underwear on Etsy.
No, you don't. Do you really?
Yes, I do.
Handcrafted? Yeah, handmade. So I think that there's an aspect of this that in a lot of walks of life. I mean, I have so many jokes right now.
They're just queuing up in my brain. I've never used that site, but I'm going to try it now after this.
Have you guys taken music lessons lately? My kids do piano lessons, and so last year I started ducking in to do a 45-minute piano lesson with the piano teacher. There's just like a great aspect to paying for these services. It's fascinating you bring that up. Oh, here we go.
You can play the harmonica? Really?
Why do you have that? I want to play some Zach Bryan songs, and he's got a couple songs I like with a harmonica in them. So I just got a harmonica. My daughter and I have been playing harmonica, yeah. Are you teaching yourself?
Let's hear it.
Let's hear it. I'll play it next week. I'm deep in the laboratory.
It's not a bit.
It could be a bit.
It could be a bit. I'll write you a song next week.
Be a little shy. He's a little shy. No, no, I'll write a Trump song for you. I'll do the trials and tribulations of Donald Trump, and I'll do a little Bob Dylan send-up song for you.
Did you see that interview with Bob Dylan? I don't know when it was, recently, about how... Oh, and Ed Bradley, that clip?
Oh, the Ed Bradley clip, it's amazing. The Ed Bradley clip?
About magic?
Yeah, well, you know, some of those songs, I don't know how I wrote them. They just came out in... But the best part is what he says afterwards. He's like, no, but I did it once. No, but I did it once. What an incredible.
That means something.
Yeah. That's really grounding. It's really grounding.
You understand too soon there is no chance of dying. Yeah, that's an incredible clip. All right, you guys want to wrap or you want to keep talking about more stuff? We were at 90 minutes here.
Let me just tell you something. I think there's going to be a big war. I think by the time this show airs, Israel's incursion into Lebanon is going to get bigger. It's going to escalate. And by next week, we could be in a full-blown multinational war in the Middle East.
And if I am, you know, a betting man, I would bet that the odds are, you know, more than 30, 40% that this happens before the election, that this conflict in the Middle East escalates.
Thank you for bringing this up. I am not asking anybody to go listen to my interview with Rogan. But I will say this. Part of why I was so excited to go and talk to him in a long-form format was this issue of war, which is, I think, the existential crisis of this election and of this moment. And I really do agree with you, Friedberg.
There is a non-trivially high probability, the highest it's ever been, that we are just bumbling and sleepwalking into a really bad situation we can't walk back from. I really hope you're wrong.
And here's the situation. I really hope you're wrong. If Israel makes further incursions into Lebanon going after Hezbollah, and Iran ends up getting involved in a more active way, does Russia start to provide supplies to Iran like we are supplying to Ukraine today? Does this sort of bring everyone to a line?
Just to give you a sense of the scale of what Israel could then respond with, Iran has 600,000 active duty military, another 350,000 in reserve. They have dozens of ships, they have 19 submarines, they have a 600 kilometer range missile system.
Israel has 170,000 active duty and half a million reserve personnel, 15 warships, five submarines, and potentially up to 400 nuclear weapons, including a very wide range of tactical, sub-one-kiloton nuclear weapons with small payloads. You could see that if Israel starts to feel further encroached upon, they could respond in a more aggressive way with what is by far and away
the most significantly stocked arsenal and military force in the Middle East. Again, we've talked about what are these other countries going to do? What is Jordan going to do in this situation? How are the Saudis going to respond? What is Russia going to do?
Well, the Russia-Ukraine thing, meanwhile, still goes on. And we saw in our group chat, one of our friends posted that Russia basically said, any more attacks on our land, we reserve all rights, including nuclear response. That is insane.
Well, you know, so just to give you a sense- It's insane.
How are we here?
Yeah. So the nuclear bombs that were set off during World War II, I just want to show you how crazy this is. Do you see that image on the left? All the way over on the left, that's a bunker buster. You guys remember those from Afghanistan and the damage that those bunker buster bombs caused? Hiroshima was a 15 kiloton nuclear bomb, and you can see the size of it there on the left.
That's a zoom-in of the image on the right. And the image on the right starts to show the biggest ever tested, which was Tsar Bomba by the Soviets. This was a 50 megaton bomb. It caused shockwaves that could be felt seismically around the earth three times from this one detonation. Today, there are a lot of
0.1 to one kiloton nuclear bombs that are kind of considered these tactical nuclear weapons, which fall somewhere between the bunker buster and the Hiroshima bomb. And that's really where a lot of folks get concerned: if Israel or Russia or others get cornered in a way where there's no other tactical response, that is what then gets pulled out.
Now, if someone detonates a 0.1 or one kiloton nuclear bomb, which is gonna look like a mega bunker buster, what is the other side, and what's the world, gonna respond with? That's how on the brink we are. And there's 12,000 nuclear weapons with an average payload of 100 kilotons around the world. The US has a large stockpile. Russia has the largest. Many of these are on hair-trigger alert.
China has the third largest. And then Israel and India and so on. It is a very concerning situation because if anyone does get pushed to the brink that has a nuclear weapon and they pull out a tactical nuke, does that mean that game is on? And that's why I'm so nervous about where this all leads to if we can't decelerate. It's very scary because you can very quickly see this thing accelerate.
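For scale, a rough back-of-the-envelope comparison using only the yield figures quoted above (15 kilotons for Hiroshima, 50 megatons for Tsar Bomba, 0.1 to 1 kiloton for the tactical weapons):

```latex
% Rough ratios implied by the yields quoted in the discussion.
\[
\frac{\text{Tsar Bomba}}{\text{Hiroshima}} = \frac{50{,}000\ \text{kt}}{15\ \text{kt}} \approx 3{,}300\times
\qquad
\frac{\text{Hiroshima}}{\text{tactical nuke}} = \frac{15\ \text{kt}}{0.1\text{--}1\ \text{kt}} \approx 15\text{--}150\times
\]
```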
I am the most... objectively scared I've ever been. And I think that people grossly underestimate how quickly this could just spin up out of control. And right now, not enough of us are spending the time to really understand why that's possible, and to also try to figure out what's the off-ramp.
And I think it's just incredibly important that people take the time to figure out that this is a non-zero probability. And this is probably, for many of us, the first time in our lifetime where you could really say that.
Well, I think Friedberg's right that we're at the beginning stages of what I think will soon be referred to as the third Lebanon war. The first one was in 1982: Israel went into Lebanon and occupied it until 2000. Then it went back in 2006, left after about a month, and now we're in the third war. It's hard to say exactly how much this will escalate. The IDF is exhausted after the war in Gaza.
There's significant opposition within Israel and within the armed forces to a big ground invasion of Lebanon. So far, most of the fighting has been Israel using its air superiority and overwhelming firepower against southern Lebanon. And I think that if Israel makes a ground invasion, they're giving Hezbollah the war that Hezbollah wants.
I mean, Hezbollah would love for this to turn into a guerrilla war in southern Lebanon. So I think there's still some question about whether Netanyahu will do that or not. At the same time, it's also possible that Hezbollah will attack northern Israel. Nasrallah has threatened to invade the Galilee in response to what Israel is doing.
If Hezbollah and Israel are in a full-scale war with ground forces, it could be very easy for Iran to get pulled into it on Hezbollah's side. And if that happens, I think it's just inevitable that the United States will be pulled into this war. So yeah, look, I think we are drifting, and we have been drifting, into a regional war in the Middle East that...
you know, ideally would not pull in the US. I think the US should try to avoid being pulled in, but I think very likely will be pulled in if it escalates. And then meanwhile, in terms of the war in Ukraine, I mean, I've been warning about this for two and a half years, how dangerous the situation was. And that's why we should have availed ourselves of every diplomatic opportunity to make peace.
And we now know, because there's been such universal reporting, that in Istanbul, in the first month of the Ukraine war, there was an opportunity to make a deal with Russia where Ukraine would get all this territory back. It's just that Ukraine would have to agree not to be part of NATO, would have to agree to be neutral and not part of the Western military bloc that was so threatening to Russia.
The Biden administration refused to make that deal. They sent in Boris Johnson to scuttle it. They threw cold water on it. They blocked it. They told Zelensky, we'll give you all the weapons you need to fight Russia. Zelensky believed in that. It has not worked out that way. Ukraine is getting destroyed. It's very hard to get honest reporting on this from the mainstream media, but...
The sources I've read suggest that the Ukrainians are losing about 30,000 troops per month. And that's just KIA; I don't even think that includes wounded. On a bad day, they're suffering 1,200 casualties. It's more than even during that failed counteroffensive last summer that Ukraine had. During that time, they were losing about 20,000 troops a month. So the level of carnage is escalating.
Russia has more of everything, more weapons, more firepower, air superiority, and they are destroying Ukraine. And it's very clear, I think that Ukraine, it could be in the next month, it could be in the next two months, it could be in the next six months, I think they're eventually going to collapse. They're getting close to being combat incapable.
And in a way, that poses the biggest danger, because the closer Ukraine gets to collapse, the more the West is going to be tempted to intervene directly in order to save them. And that is what Zelensky was here in the U.S. doing over the past week: arguing for direct involvement by America in the Ukraine war to save him. How did he propose this?
He said, we want to be directly admitted to NATO immediately. That was his request. And he called this the victory plan. So in other words, his plan for victory is to get America involved in the war and fighting it for him. But that is the only chance Ukraine has.
And it is possible that the Biden-Harris administration will agree to do that, or at least agree to some significant escalation. So far, I think Biden, to his credit, has resisted another Zelensky demand, which is the ability to use America's long-range missiles and British long-range missiles, the storm shadows, against Russian cities. That is what Zelensky is asking for.
Zelensky wants a major escalation of the war because that is the only thing that's going to save him, save his side, and maybe even his neck personally. And so we're one mistake away from the very dangerous situation that Chamath and Friedberg have described.
If a President Biden, who is basically senile, or a President Harris agree to one of these Zelensky requests, we could very easily find ourselves in a direct war with the Russians.
The waltz into World War III is what it should be called.
And the reason why this could happen is because we don't have a fair media that has fairly reported anything about this war. I mean, Trump is on the campaign trail making, I think, very valid points about this war, that the Ukrainian cause is doomed and that we should be seeking a peace deal and a settlement before this can spiral into World War III. That is fundamentally correct.
But the media portrays that as being pro-Russian and pro-Putin. And if you say that you want peace, you are basically on the take from Putin and Russia. That is what the media has told the American public for three years.
The definition of liberalism has always been being completely against war of any kind and being completely in favor of free speech of all kinds. That's what being a liberal means. We've lost the script. And I think that people need to understand that this is the one issue where if we get it wrong, literally nothing else matters.
And we are sleepwalking and tiptoeing into a potential massive world war.
Jeffrey Sachs said it perfectly. You don't get a second chance in the nuclear age.
All it takes is one big mistake. You do not get a second chance. And for me, I have become a single issue voter. This is the only issue to me that matters. We can sort everything else out. We can figure it all out. We can find common ground and reason. Should taxes go up? Should taxes go down? Let's figure it out. Should regulations go up? Should regulations go down? We can figure it out.
But we are fighting a potential nuclear threat on three fronts. How have we allowed this to happen? Russia, Iran, China. You should not underestimate what can happen when you add these kinds of risks on top of each other. And I don't think people really know. They're too far away from it. They're too many generations removed from it.
War is something you heard maybe your grandparents talk about now, and you just thought, okay, whatever. I lived it. It's not good.
Chamath, you're right. I mean, during the Cuban Missile Crisis, all of America was huddled around their TV sets, worried about what would happen. There is no similar concern in this day and age about the escalatory wars that are happening. There's a little bit of concern, I think, about what's happening in the Middle East.
There's virtually no concern about what's happening in Ukraine because people think it can't affect them. But it can. And one of the reasons it could affect them is because we do not have a fair debate about that issue in the US media. The media has simply caricatured any opposition to the war as being pro-Putin.
So I would say that when every pundit and every person in a position to do something about it says, you have nothing to worry about, you probably have something to worry about. And so when everybody is trying to tell you, everybody, that this is not a risk, it's probably a bigger risk than we think.
Yeah, they're protesting too much. How can you say it's not a risk?
Methinks thou dost protest too much. All right. Love you boys. Bye-bye.
We need to get merch here.