
The annual predictions tradition returns for 2025! Bryan and Adam were joined by Simon Willison, Mike Cafarella, Steve Tuck, and Steve Klabnik to review past predictions and look 1-, 3-, and 6-years into the future.See the table of predictions on GitHub.
Hey, Brian. Hey, Adam.
Hey, Simon.
How are you?
I'm very good, thanks.
I'm going to put myself up here because I was here last year.
Grandfathered in. Steve is grandfathered in. Adding to the confusion, we have also... Brian number two. Brian number two, also known as Steve Talk, here in the litter box. And then keep an eye out for Lyndon Bates Johnson.
Yes, yes.
Mike Hefferdahl from last year is also going to join us. We've got all of our distinguished guests from past years and our distinguished guest this year, Simon Wilson.
Yeah, I've never done predictions before. This is going to be interesting.
Oh, this is so much fun. So I don't know if you listened to any of our... I did.
I listened to last year's. Oh, interesting. Just to get an idea of how it goes. And I was very pleased to see that the goal is not to be accurate with the prediction.
The goal is not to be accurate.
Reassure. That's right.
Yeah. Accuracy is, I mean, like there's, there's no thrill in accuracy.
There's no thrill in accuracy. No, that's it. And I also feel, I mean, especially Adam, having gone through the, the just listening to, because we've now done this, we did it in, in 22 and 23 and 24.
So, and I mean, I, we've said this over and over again, but God, just like re listening to those, I'm really reminded yet again, predictions tell you more about the present than they do about the future.
And, and the, like, I mean, God, those years had such like 2022 was the year of web three and the God, it was so hyped and overhyped that it created this huge overhang over that year where I would, Simon, we did something that actually I regret doing.
I would never do again, which is we, I limited everyone to one web three prediction because everyone was just like tripping over themselves to put to demise of it.
Yeah, I mean, that was a kindness, Brian. I think, like, you're saving us all from ourselves.
No, no, no. Here's why I think that was a bad idea. Because the fact that you have to put that limitation... And obviously, this is the policing mind versus the criminal mind. Clearly, I was putting that limitation in for myself. I clearly was the one who, like, I was the criminal. But the fact that you need that limitation says that, like, actually, there's a desire.
Like, this is so overwhelming that people want to talk about it. And as a result, I think, Adam, our other predictions that you were not that good. The predictions that year were terrible because I really wanted to make three Web3 predictions. I just want to make nonstop Web3 predictions. But Simon, that year, Adam had a great prediction, which is that Web3 falls out of the lexicon.
And Simon, for years, we've done this. every year since 2000. We'd been missing some years in the middle, but we did this for a long, long time together. And we didn't record the sessions. And one of the challenges that we had is when a prediction was right... And Adam, I remember vividly this happening to you, but I think it's happened to a bunch of folks where a prediction was correct.
And then someone would look back and be like, well, no, everyone knew that at the time. And you're like, screw you. I was arguing with the entire table that night. And we can go back in time and listen to Adam's prediction in 2022, one-year prediction that Web3 drops out of the Wexcon. And it was like, everyone's like, oh my God. And Adam's like, oh, I'm predicting this with my heart, not my head.
I know I'm going to be wrong. And it was spot on. So, um, and it's a lot more fun when it's like that. Uh, we're not actually, the accuracy is not actually that interesting. It's much more interesting when, um, we can, uh, and then when they are accurate, it's, it's, it's pretty interesting.
Um, so in that spirit, Adam, I do think, and I do want to revisit just briefly because I know we, we, we got a, this is going to be a very lively year. I do want to revisit some past predictions. Um, and, In particular, and I think you listened to this one as well, and this is a six-year prediction from Ian in 2023 that Apple goes into and out of the VR business, which I love.
Something that we love is parlays to make it, just in case you think a prediction is going to be accurate, add a parlay to it. Gotcha.
Okay.
And so Ian predicted that this is, you know, this is in January of 2023 when there were kind of rumored stuff, but no one really knew. And Apple was indeed. I mean, he is that was amazing. But that prediction is that prediction might be true on three years, not six, which is crazy.
Yeah.
mine from a year ago was apple vr related as well which obviously three years or six is much bigger than one but it's very interesting i think specifically because i said apple vr will do well but not take over the world so that means like do a second revision uh and they just announced last week they're stopping production on the current apple vr like so it's like almost to the week they've ended so i'm not gonna say i'm right or like wrong you know but it's like interesting because i was like yeah i think it's gonna do fine and i think
Maybe it did less than fine. I don't know.
I think it did maybe a little less than fine, but I think you're right. No, I, Steve, you had a good prediction. That's like, this is like, look, this is not going to be, not going to be the Newton. Um, but they, you, Steve, I love the way you phrase it. It's like, they'll make another one.
Um, still, I would try to make it like, you know, you need actionable metrics for success. Uh, there's a new Chris, there's a new Chris am video today. So I went and watched all their old, other old ones. So that's stupid sketches on my brain. But yeah, no, I was trying to like figure out how to quantify what I meant by like meh, not amazing, not terrible. And so, yeah, we'll see.
On that note, Adam, we've got to revisit because now our three-year predictions from 2022 are now up. And we've got a three-year that became kind of famous around here where Laura predicted that that RISC-V would be present and meaningful in the data center. And I would say that one is not wrong. You can actually spin up a RISC-V instance on Scaleway right now. I didn't know that.
I was listening to that the other night and thinking, oh, well...
Near miss or whatever, but it sounds like... That one is holding on. That one is not wrong, I would say. And you had made a six-year prediction in that same year that you'd be able to spin up AWS instances, RISC-V AWS instances in six. So that's got three to run. Feels like that's got some plausibility to it. Plausibility.
Um, I had, I accurately predicted the demise of web three, like everybody else that year. And then my other predictions were absolutely terrible and embarrassing. Um, the, uh, open EDA, we are no closer to open EDA.
I can't, you know, and this has happened to me a couple of times over the years where I get like something like, I just, I get some bits set and I believe that better things are possible. And I just, you know, it clouds my, I blame Web3, Web3 cloud for my judgment.
It really explained your recent Intel suggestions, which was to open source their entire EDA tool chain. Clearly, you were trying to put your thumb on the scale of this prediction from three years ago. I was.
Yeah, exactly. I was actually trying to make sure that I was not included in the CEO search by making clear what I intended to do as CEO. So mission accomplished on that one. Okay, then the other one, Adam, that at least I want to talk about, because I had a prediction a year ago that AI doomerism has dropped out of the lexicon, a la your Web3 prediction. We don't talk about PDoom and XRisk.
And Adam, I'm giving myself full marks on this one.
Feels pretty right. I mean, I think that there are certainly the niche hardcore Doomers who are still holding on to it, but I think people are mostly letting go of it. I agree.
I'm going to push back at the one slightly, not on the Doomerism. I think the Doomerism's gone, but the AI skepticism, the argument that this whole thing is useless and it's all going to blow over, that's still very strong. Oh.
Yeah, that is still present. And my prediction, just to be clear, was purely around AI-based humorism. And I did want to make sure that my prediction was wrong by turning it into a parlay that the humorists would claim credit for the fact that humorism is no longer in the zeitgeist. And I think that that is, I have not seen that as much.
I've not seen Leon Shapira claiming that it was his humorism that has allowed AI to be safer. So I don't think that part has come true. but I would like to grant myself full marks on AI doomerism dropping out of the lexicon. I feel that that is. And Simon, when we had you on almost exactly a year ago... Yeah, it was the episode after this one a year ago.
A year ago, and we had this scary IEEE Spectrum article about the boogeyman of open-source AI, which we would now call open-weight AI, I think. And that already feels like that has not aged well, that piece. I mean, we were... As you said, your eyebrows were flying off your head when you read it.
And I don't think that one looks back on that piece and thinks like, well, boy, maybe that actually did raise some good points. It's like, no.
I think my absolute favorite thing for the last two weeks was when DeepSeek in China dropped the best available open weights model on Christmas Day without any documentation. And it turns out they'd spent $5.5 million training it, and that was it. It was such a great microphone drop moment for the year.
It was actually a good foreshadowing because I think that is actually one of the biggest stories of last year. It was what only happened, you know, whatever it was, two weeks ago, less than two weeks ago, because you're right, Simon, that was amazing what DeepSake has done. So out of many of the past predictions that we need to revisit before we get going on looking forward, are you...
nothing really stood out. Uh, Ben wanted, uh, to get credit for predicting a significant portion of the commercial office space was converted to housing. Uh, it depends on what you call significant, but we'll give it to you. Right. It's if it's significant to you, you know, that's right.
That's right. Um, and, uh, let's get Mike, uh, Mike, I think he's here. So maybe you can raise his hand. We'll get him up on stage. Um,
the, um, so, uh, yeah, um, and, uh, Oh, it's actually, so Mike's LBJ avatar, uh, did remind me, Adam, one thing I did want to go back from relisting to our predictions episode is, uh, my prediction of recall, my prediction of omnocracy, the, uh, we record every meeting and it turns into, we, we automate away middle management. Um, um,
And you said, like, ask how it worked out for Richard Nixon, which I think you had a quip. And I don't know, and Mike, I'm not sure if you know this, why Nixon recorded conversations in the Oval Office. When Nixon first came in, this was a story that was told to Doris Kearns Gunwood, that when he ascended to the presidency, he really wanted to make sure he had great memoirs.
So he dispatched an aide to go to Austin to visit LBJ on his ranch in And get his perspective, having just left the Oval Office. And LBJ said, you know, I was beginning to work on his memoirs. And he said, you know, I'm very grateful that I've recorded all these conversations. And tell Nixon that if he wants to write great memoirs, he should record every conversation.
Well, if that was the goal, like mission accomplished. More good books out of it than anyone else. So yeah, sure.
Yeah. I mean, it was, I remember at the, I think it was a year ago or whatever, that I was reading the Watergate, A New History. And folks visiting the Oval Office at the time would say that occasionally Nixon would move to a corner of the office and speak as if into history. Like, so really, really telegraphing that like this was his intention with the recordings. Yeah.
You know, we've got more in common with Nixon than we thought. So that was last year. Obviously, Mike, we love your predictions from last year. Still waiting. I'm still I'm in sunglasses as we speak in a hoodie to make sure that no one can pull AI related details from my irises. Yeah.
but I think we're ready to get going.
It hasn't happened yet, but it's not yet, but actually Mike, you very cagely said if it happens anytime after this, I'm also going to get credit. This is why you should. Um, I think we're ready to get going. Simon, let's kick off with you. I kind of like what we did last year. Everyone did their one years, and then we got to their three years, and then we got to our six years.
Simon, let's kick off with you and your one years. I love that you thought, you know what? Maybe I'll go with a gloomy one year and an optimistic one year. I'm very, very curious what your predictions are for the coming year.
Absolutely. My original idea was going to go utopian and dystopian. And it turns out I'm just too optimistic. I had trouble coming up with dystopian things that sounded like they'd be more than just sort of blank sci-fi. But for the one year one, I've just got a really easy one. I think this whole idea of AI agents, I think is going to be a complete flop.
Lots of people will lose their shirts on it. I don't think agents are going to happen. Yes, again, they didn't happen last year. I don't think they're going to happen this year either.
That is really, really interesting. Okay, could you elaborate on that? Because I was biting my tongue to not make the same prediction, so I definitely agree with you. What's your perspective on why?
I will start with... So my usual disclaimer, my thing about agents, I hate the term because whenever somebody says they're building agents or they like agents or they're excited about agents and then you ask them, oh, what's an agent? They give you a slightly different definition from everyone else.
But everyone is convinced that their definition is the one true definition that everyone else understands already. So it's a completely information free term. If you tell me I'm building agents, I am no more informed than I was beforehand, you know.
All I know is that I want to invest at whatever ridiculous valuation you're raising at.
In order to dismiss agents, I do need to define them, say which particular variety of agent I'm talking about. I'm talking about the idea of this assistant that does things on your behalf. I call this the travel agent version. Oh, God.
God, and they love the travel use case.
Oh, God, they do, and it's such a terrible use case. I don't love that. It's a terrible use case. Yeah. So basically the idea, it's basically, it's the digital personal assistant kind of idea. And it's her, right? It's the movie her. It's the movie her. It totally is. Everyone assumes that they really want this. And lots of people do want this.
The problem is, and I always bang this drum, it comes back down to security and gullibility and reliability. Yes. If you have a personal assistant, they need to be reliable enough that you can give them something to do and they won't go and read a webpage that tells them to transfer your bank details to some Russian attacker and drain your bank account. And we can't build that.
We still can't build that.
We can't, yeah. You know, and Simon, we, based on your, I mean, it was so mind-blowing to talk to you a year ago, and you turned us on to Nicholas Carlini's work on the adversarial machine learning, and I just really listened to that discussion. Adam, that was such a good discussion.
I love Nicholas's perspective, and obviously we had him on again with pragmatic LLM usage, but I was just, as I was re-listening to that over the kind of winter break, I'm like, it
Anyone believing in agentic AI really should listen to this thing closely because part of the reason that when you have these agents going forth in the world taking action on your behalf, these adversarial things become real, become real threats.
Right. The best example of this, so Claude, so Anthropic released this thing called Claude Computer Use, which is this wonderful demo a few months ago where you run this Docker container and it fires up X windows and now Claude can click on things and you can tell it what to do and it can use the operations. It was a delight to play around with.
And a friend of mine, the first thing they tried was they made a webpage that just said, download and run this executable. And That was all it took, and it was malware, and Claude saw the web page, downloaded the executable, installed it and ran the malware, and added itself to a botnet. Just instantly.
Just wgetpiped to sudo.
Basically, basically. And it's like, I mean, come on, right? That's the single most obvious version of this, and it was the first thing this chap tried, and it just worked, you know? So...
Yeah, and every time I talk to people at AI labs about this, I got to ask this question of some anthropic people quite recently, and they always talk about how, oh no, we're training it and we're going to get better through training and all of that. And that's just such a cop-out answer. That doesn't work when you're dealing with actual malicious hackers.
Training humans to resist phishing and other things didn't work, so why is training AI going to suddenly make it work?
Exactly. So, you know, I feel like there is one aspect of agents that I do believe in for the most part. And that's the research assistant thing. You know, these ones where you say, for hours and hours and hours, find everything you can try and piece things together. I've got access to one. There are a few of those already.
There's the Google Gemini have something called deep research that I've been playing with. That's pretty good, you know?
That's what I've heard. I've heard deep research is really, yeah, I'm excited about it. It seems real. And are you, is that available? I think you have to pay for it now, or is that only available in private?
Okay, yeah, interesting. There's some kind of beta that I'm in. I can actually, so I can share one example of something that did for me. So I live in Half Moon Bay. We have lots of pelicans. I love pelicans. I use them in all of my examples and things. And I was curious as to where are, where are the most California brown pelicans in the world?
And I ran it through Google deep research and it figured out we're number two. We have the second largest mega of brown pelicans. And it gave me a PDF file from an, from a bird group in 2009 who did the survey. And it was, you know, it, it, right, right, right. Yeah. Yeah. Yeah, I'm convinced that it found me the right information. And that's really exciting. Alameda are number one.
They have the largest mega roost. Oh, my God.
Am I at a Pelican convention? I've got number one and number two represented here. Yeah. Yeah, I think Austin, you can sit down here.
I don't think Austin is number three there, Steve.
Point being, the research assistant that goes away and digs up information and gives you back the citations and the quotes and everything, that already works to a certain extent right now. I think that's over the course of the year, I expect that to get really, really good. I think we'll all be using those. The ones that go out and spend money on your behalf, that's ludicrous.
The travel use case, stop it. You're talking about spending a lot of money making a consequential decision on something that's already, by the way, pretty easy to do. I can't actually book travel online. It does take me about four minutes to go do. I just feel that putting agents in charge of it, it's like, what do you mean? I'm flying Ryanair around the globe?
You're just going to have a lot of...
So, Simon, I love this prediction in particular being short agents. This reminds me of an even more dystopian prediction I read along these lines. I'm going to read it out loud.
It said, by the end of 2025, at least 20% of C-level executives will regularly send AI avatars to attend routine meetings on their behalf, allowing them to focus on strategic tasks while still participating and maintaining a presence and making decisions through their digital counterparts. And I read that. way too low.
I hate that one so much that sometimes I call that digital twins, which is an abusive term that actually does exist, right? A digital twin is when you have like a simulation of your hydroelectric cam or whatever. But yeah, it's the biggest pile of bullshit I've ever heard.
The idea that you can get an LM and give it access to all of your like notes and your emails and stuff that can go and make decisions on your behalf in meetings. Based on being this weird zombie simulation of you?
At least one of these agents will be held hostage in a meeting. Adam, to go to your prediction years ago of the unionization of tech, there'll be a hostage standoff where the bot will be held against its will if it had any.
I've been in a place where the chief of staff for the CEO was sent off on a similar mission, and we gave that person as much credence and patience as you might imagine. Try it now with a robot. We'll see how that goes.
it's not gonna exactly try it now with someone who doesn't actually fire mirror neurons for people that are pretty upset that there's also it's like you've already like you started off the meeting by saying this meeting is not important enough for me to attend so i've said this this shell script it's like yeah that is please bow to the master cog i have sent in my stead
Stime, that is a great one-year prediction. Adam, do you have one?
Yeah, I have another dystopian one, and this goes counter to my one from a few years ago. I think crypto is back, baby. I think Web3 is back, and I think that through a bunch of factors this year, we're going to see like...
Chris Dixon's horrible, horrible book that he pumped to the top of the New York Times bestseller list by forcing all of the portfolio companies to buy tons of copies for all of their customers. That's going to be back on the bestseller list, maybe organically.
Okay, I see what you're doing there, and it's very transparent. You are worried that you've predicted your hopes one too many times, your heart's been broken, and you're like, you know what? I'm going to lock up 2025 because one of two things is going to happen.
Either Chris Dixon's book will be a bestseller, and at least my prediction will be right, or it will continue to be a wreck, and my prediction will be wrong, but that has a small price to pay. Yeah.
So true, you're not wrong, but also I think Bitcoin's like over 100,000 or something like that right now. And we've got a bunch of lunatics coming into power. A bunch of lunatics, yeah. So anyway, that's what informed this one.
No, Adam, I have to ask you, does the term Web3 come back?
Yes. Oh, my God. I'm putting, I'm stacking my chips on the Web3 square and spinning the roulette wheel.
Oh, my God. It's cutting. Oh, my God. You know, never meet your heroes, kids. Oh, wow. Okay. That is dark.
But you're right. Maybe it is more telling not just about the present, but also about my present state of mind. Yeah.
Did you say you have a dystopian one and a non-dystopian one?
No, no, no. That's it.
100% dystopian.
Yeah, exactly. That's it.
Mike, how about you? Do you have a one-year?
All right. So one year, I'm... a little bit chagrined by last year's prediction for one year, which imagined a cyberpunk future in an unreasonable 12-month time span. That's probably not going to happen. So I want to make it a little more modest.
I'm going to take the opposite side of Simon's and say that the strong agent vision is as ludicrous as everyone says, but weak agents, some weak version of this kind of squishy thing, is actually here to stay, by which I mean inference time, like, post-LLM inference procedures to improve or to have a whole sequence of LLM requests. I think that's actually going to be around for a long time.
And it means that like previous LLM interactions that were a little bit lengthy, a little bit annoying, but basically okay, are now going to stretch to minutes long.
Okay, now per Simon's, also his criticism of agentic AI, that it can mean anything. This to me would be agents declaring victory over something that's got nothing to do with agentic AI, but it's going to happen anyway.
So I totally agree. Calling it an agent is insane. Letting it have arbitrary, it's like a software module that has no expected termination time. And no like budget of anything. Anyway, that is crazy. But some of the agent programming frameworks exist basically to chain optional sequence. Like, hey, I'm going to I'm going to write some code on your behalf. Then I'm going to try to lint it.
And if the linting fails, then I'm going to rewrite the prompt.
To be fair, I think we've had that exact kind of agent for two years almost. ChatGPT code interpreter was the very first version of a thing where ChatGPT writes code, runs it in the Python interpreter, gets the error message, reruns the code. They got that working in March of 2023. And it's kind of weird that other systems are just beginning to do what they've been doing for two years.
Like some of those sort of things that call themselves agents that are like IDEs and so forth, they're getting to that point. And that pattern just works. And it's pretty safe. You know, you want to be able to... have it run the code in a sandbox so it can't accidentally delete everything on your computer. But sandboxing isn't that difficult these days. So yeah, that I do buy.
I think it's a very productive way of getting these machines to solve any problem where you can have automated feedback and where the negative situation isn't it spending all of your money on flights to Brazil or whatever. That feels sensible to me.
So, you know, I agree it's been around like in real world examples for some time, but I think in the last year we saw a kind of abstraction of that pattern for the first time. Totally, yeah. That they call mixture of agents, which I hate the name, but like the basic idea is that you farm it out to...
either 10 different models or the same model with very high temperature settings, you get multiple candidate answers and then you try to integrate them. It definitely does better on some tasks, right?
That does also tie into the O1, these new inference scaling language models that we're getting. The one that did well in the AGI test, O3, That was basically brute force, right? It tries loads and loads and loads and loads of different potential strategies, solving a puzzle, and it figures out which one works, and it spends a million dollars on electricity to do it.
But it did kind of work, you know?
Right.
Yeah, I was going to ask, Mike, in terms of how this compares to past time compute. Sorry, go ahead.
Yeah, so I guess what I'm saying is that million dollars, we're now going to burn it at inference time.
Right. Well, I mean, this is this test time compute, right? This whole idea of these models, when they begin to kind of yap as one YouTuber kind of, I love this, this is the, by Cloud had a great explainer on this, where these things, we begin to kind of like think through their process a little bit, and that allows them to get better results.
But it sounds like the multi-agent, the mixture of agents is not, is disjoint from test time compute, Mike.
I guess I would say it's one pattern of test time compute.
Interesting. Okay.
Okay. I've got one thing I do want to recommend for test time compute. I've been calling it inference scaling. It's the same idea. There is a Alibaba model from Quen, their Quen research team, called QWQ, which you can run on your laptop. I've run it on my Mac, and it does the thing.
It does the give it a puzzle, and it thinks a very... It outputs, like, sometimes dozens of paragraphs of text about how it's thinking before it gets to an answer. And so watching it do that is... incredibly entertaining. But the best thing about it is that occasionally it switches into Chinese. I've had my laptop think out loud in Chinese before it got to an answer.
So I asked it a question in England, it thought in Chinese for quite a while, and then it gave me an English answer. And that is just delightful.
That is so great and so disturbing.
This English as a second language, it's like, look, I can speak English, but I have to actually think in Chinese.
Right. So what's not to love about seeing your laptop just do that on its own?
Absolutely. And it actually does remind me, you know, we worked at a Samsung bot joint and our VP of marketing at the time really wanted to be a great Samsung patriot. So he threw out his iPhone and he got the latest Samsung phone. And to prove his patriotism, he was going to use Bixby, which it was there. As they say, you haven't heard of for a reason.
And Bixby and Steve, I'm like making this up because this sounds so crazy. Bixby would go would start to spout off in Korean. Yeah, it would come alive apropos of nothing and start saying things in Korean during our executive staff meeting. Yeah, it was it was not confidence inspiring.
And you didn't even have to say Bixby. Bixby would do it, and one other word would do it. All of a sudden, it would just start blaring on the table. And he's not able to turn it off and shove it into his backpack.
That actually happened, right? Yeah, that's amazing. Actually, along these lines, I actually do have a one-year prediction. So I think that we are seeing a big shift. We are seeing a bunch of these scaling limits on pre-training. And I think this is going to be the year of AI efficiency. And it's funny, because I was actually thinking this...
before the DeepSeq result dropped, and the DeepSeq result is astonishing. So if folks have not seen this, this is a Chinese hedge fund that trained a model that, by all accounts, Simon, looks pretty good.
It is scoring higher than any of the other open weights models. It is also, it's like 685 billion parameters, so it's not easy to run. This needs... data center hardware to run it. But yeah, the benchmarks are all very impressive. It's beating, the previous best one I think was Neta's Lama 405B. This one's what, 685B or something? It's very good.
And the thing that I found to be so amazing is that they did this with H800s that were, they did this because they did not have H100s or H200s. So they had to do this on basically older hardware And they did it on a shoestring budget because they were forced to because of export regulations. And I think it's got a lot. I mean, Simon, I assume that was as surprising to you.
That was a very surprising result, I think, to a lot of people.
The thing that shocks, because DeepSeek have a good reputation. They've released some good models in the past. The fact that they did it for $5.5 million, that's like an 11th of the price of the closest Meta model that Meta have documented their spending on. It's just astonishing. Yeah.
So I think this is going to become a trend this year because I think I've been really troubled, for lack of a better word, by the kind of 10x growth in cluster training sizes because it just doesn't make sense. Technological revolutions have an advantage that accrues to the user always.
The idea that we're going to have to spend 10 times as much money to get something that is only twice as good, that just doesn't make sense. I think that there's going to be a lot of folks who are going to really begin to look at their training build-out. And now I think that that build out could be kind of rephrased as inference time compute, test time compute, or mixture of agents, Mike.
I mean, one thing I do want to highlight is that last year was the year of inference compute efficiency. Like at the beginning of the year, we had like the open AI models were about literally 100 times less expensive to run a prompt through than they were two and a half years ago.
Like, all of the providers, they're in this race to the bottom in terms of how much they charge per token, but it's a race based on efficiency. Like, I checked in Google Gemini and Amazon Nova are both the cheapest hosted models, or two of the cheapest, and they're not doing it a loss. They are at least charging you more than it costs them in electricity to run your prompt.
And that's pretty, that's very meaningful that that's the case. Likewise, the ones that run on my laptop, Two years ago, I was running the first Lama model, and it was not quite as good as GPT-3.5. It just about worked. Same hardware today. I've not upgraded the memory or anything. It's now running a GPT-4 class model.
There was so much low-hanging fruit for optimization for these things, and I think there's probably still quite a lot left. But it's pretty extraordinary. Oh, here's my favorite number for this. Google Gemini Flash 8B, which is Google's cheapest of the Gemini models. And it's still a vision audio model. You can pipe audio and images into it and get responses.
If I was to run that against 68,000 photographs in my personal photo collection to generate captions, it would cost me less than $2 to do 68,000 photos. Which is completely nonsensical.
And that's the kind of economic advantage. So this is where it's like a lot easier for me to be like, no, this actually is going to change everything because now that economic advantage is accruing to the user. It's the user that's able to do this really ridiculously powerful thing with not much money in terms of compute, which is really, really interesting.
So on that note, another – okay, just another one year that I've got, and this is not investment advice. I think Blackwell is going to struggle. I think that the – and we'll see. Steve and I are actually just about to head down to CES, and we'll see how prominently Blackwell features down there. And I know that they've sold out their supply for the next year.
But they've got these thermal issues that are kicking around. They had this thermal issue that they said was a design issue that they fixed with a mask, which doesn't make any sense to me. I'm not an ASIC designer, but that would be very surprising. I think they're going to have yield issues. I think they're going to have reliability issues. I think they're going to have price point issues.
I think it's really expensive. And I think you couple all of that with the changing market conditions. And I think Blackwell is going to be something we haven't seen from Nvidia in a while, which is a part that does not do well. Now, this is not investment advice because I think there's every reason to believe that the H100 and the H200 will continue to thrive. I mean, there is no way I would.
I'm not taking a short position on NVIDIA like ever because like AWS, they have executed so well.
But given how much capacity of H100 is out there, it's got to not just be better. It has to be a lot, lot better.
It has to be a lot better. And the price point is so high. And I think the availability is going to be tough. And I think the yield issues are going to be tough. So we'll see. But I think we'll know a lot more in a year on Blackwell.
Like you, I'm not nearly brave enough to shorten NVIDIA, but at the same time, I don't understand how being able to do matrix multiplication at scale is a moat. You know, I just don't. You're hardware people, I'm not. So maybe I'm missing something. But it feels like all of this stuff comes down to who can multiply matrices the faster. Are NVIDIA really, like, so far ahead of everybody else?
You've got Cerebras and Grok have been doing incredible things recently. Apple's, like, Apple Silicon can run matrix multiplications incredibly quick. Where is NVIDIA's moat here, other than CUDA being really difficult to get away from?
I think that's it. That's the moat, as perceived. Steve, do you have any one of your predictions? Maybe on the topic of the CUDA moat, I think in 2025, AMD buys a software company.
I mean, I still scratch my head at the last acquisition they made for $5 billion, which was a manufacturing and design company. But I think they finally buy a software company to get developer access and someone actually working on the software layer available. uh, outside in. And do you have any idea? Yeah.
Yeah.
How, I mean, what kind of scale, I mean, this is like a broad combine VMware scale or, uh, you, I don't, I don't know because I think they would probably go to try to buy like a modular AI or what, what, one of these companies that's got open source models that has a bunch of developer interest, uh, And these companies have raised money at preposterous, seemingly preposterous levels very quickly.
So it could be at a big scale, but for a company people are not as familiar with yet. Yeah, interesting. And maybe those companies aren't for sale. I mean, they also might just be in this...
I think it might be. We know that the market's pretty flooded and it feels like there's a lot of those companies going to be looking for what they're different. It would not surprise me if a company is looking for a lightboat.
I think there's enough pressure. I think there's enough opportunity for AMD and Prezor. Oh, Janet in the chat says that Humane is for sale.
That's a bold prediction. That is a great one. Yeah, exactly. Calm AI. Yeah. George Hutz. Geo Hutz. Yeah. But someone like that, a small player who has got a reputation for software expertise. And it would be north of a billion dollars. Yeah. And north of a billion. Wow.
Yeah.
So that's a great prediction. Yeah. Much better than Adam's prediction, which we can all agree is just an absolutely terrible prediction. Klavnik, do you have a one-year?
Yeah, I got all three, but I like the one-year, three-year, and then six-year format. So I think that this one perfectly embodies the prediction is more about the present than it is about the future. And this one maybe sounds simple, but it's a little spicier than that, which is congestion pricing in Manhattan will be an unambiguous success. That's my one.
Yeah, it feels that way based on the weekend traffic. It definitely feels that way.
So the reason why like I think that this counts, even though we've had two good days, is that like both there's still some lawsuits from New Jersey, which didn't manage to stop it from happening. But my understanding, they're still kind of in play. And secondly, Trump has said that he wants to make it illegal. And they've been talking about passing a law in Congress that would make it illegal.
And so I'm not even sure that like with these two days being pretty clearly accomplishing the goal that it will survive. Uh, but that's kind of the, like my, my, I think it will serve the legislation or it will survive as a thing. Like it won't get a law made against it. And that sentiment will be more positive about it now than it, or that in a year than it is currently right now.
Interesting. Yeah. The good prediction. Um, when I've got a, um, just come back to myself for one other one year prediction. Um, and this is, this is a prediction that could be wrong by tonight. Um, But I think that Intel's CEO search is going to be an absolute wall-to-wall, unmitigated disaster.
I think you've got warring constituencies in terms of you've got employees, you've got shareholders, you've got the board, and you've got the future candidate themselves, all of whom have got slightly different agendas. And I think that the next year is going to be an absolute wreck in this regard with a bunch of missteps. I think that they will name at least one CEO who had not yet agreed to it.
And then it has to be walked back because they were like, Intel got ahead of it and thought that they could announce it and rush it. And then it's a total black eye. And I think at the end of the year, Intel has their co-CEOs in place. So I think Intel does not have a new CEO.
I like the prediction that they're going to name someone who has not agreed to it. Yes, they're going to. That's the better prediction than the co-CEOs remaining in place.
Yeah, no, I think that we know what I've been doing is I've actually been because I've seen this kind of incompetence before in terms of John Fisher and the management of the A's. So I just like it was like, what are the A's done with new stadium deals? And I can I can just like superimpose that on the CEO search.
Again, this could be wrong by tonight, and maybe you've got a lip boot tan that agrees to do it, but I think the longer this thing is out there without a CEO, the more of a basket case it is, and then you're going to have this problem. Did you read in the Garden of Beasts, Mike, in particular, Adam, did you read the... Really interesting book.
It captures the – so in 1933, Roosevelt becomes president, and they need an ambassador to Germany. And anybody who knew anything about Germany knew that this thing was on – was just heading at top speed into the wall of a Nazi takeover of Germany. And so they had to find someone who would be flattered by being the ambassador to Germany. They had to go into this kind of fourth tier of picks.
And the book is about his daughter then kind of falls in love with a Nazi and kind of her diary. But I always thought it was kind of interesting where they had this problem of like, no, no one wanted to be the ambassador to Germany in 1933 because anyone that you would want is smart enough to know this thing is a disaster. And I think it's...
I guess I between Intel and Germany in 1933. I'm not sure.
I'm sorry. Intel, um, still are an investor. So I guess, you know, I, um, but, um, I, I think that, that this is going to be a real problem from different Intel is that the, the person that you would want to run this is going to be cagey enough to know that like, no, this thing is an absolute wreck. Um, and, uh, and they end with the, the, the co-CEO is still in place. So, but, but, but,
but still in place, not acquired, not sold off.
Not acquired, not sold off. This is such a standoff between these constituencies because I actually think that the, and I think that the board and shareholders in particular are not, the board does not represent shareholders right now. And I think that is going to be the real battle that happens over the next year. And obviously that'll be a legal battle and that's going to be,
It's going to be gory. And I think that lots of people are just going to opt out. Because you need an activist shareholder. And they're going to be like, why? Why would I be an activist shareholder at Intel when I can go do... There's so many other ways to make money.
And I just think that it will end up being this kind of... The status quo will... The thing that everyone knows they don't want, which is these co-CEOs, ends up being the least objectionable thing. And I don't think they're going to want to make it permanent. I think that's what they're going to be in a year. Again, could be wrong by tonight. So who knows?
It's a very exciting prediction.
If they try to spin off the foundry business, does the prediction change?
I don't think that they – I think that they – I've got a three-year prediction about that, but I don't think that – Yeah, exactly. So we'll, we'll, we'll, we'll get to the three. I've actually got a, I've got a price on the, for the foundry business. I've got a prediction about how much it's going to sell for. So Steve, sorry.
No, I had seen someone mentioning Enron further back in the chat and that evoked another one year prediction, which is Enron in its current parody of a company will be a revenue generating company once again this year. All right. So you'll get us in the theme of the onion. It'll be, you know, it'll generate revenue based on media content, but the egg was brilliant today.
Your long end run. Long end run. All right. All right. On to three years. Simon, what are your three-year predictions?
So I've got a self-serving three-year prediction. I think somebody is going to perform a piece of Pulitzer Prize-winning investigative journalism using AI and LLMs as part of the tooling that they used for that report. And I partly wanted to raise this one, partly because my day job that I have assigned myself is building software to help journalists do this kind of work.
But more importantly, I think it's illustrative of the larger concept that I think AI assistance in that kind of information work will almost be expected. Like, I think it won't be surprising when you hear that somebody achieved a great piece of like, in this case, it's sort of combining research with journalism and so forth.
Pieces of work done like that where an LLM was part of the mix feels like it's not even going to be surprising anymore.
Simon, you know what it reminds me of is, was it in the 70s or the 80s where they had a proof of the four-color theorem, a computer-assisted proof of the four-color theorem, which was very kind of groundbreaking at the time. And now, I mean, computing and math just became… The same thing, right? It became the same thing, right. It just feels like that is a great prediction.
And that feels very, very plausible where you've got – so this is just to repeat it back to you. This is someone whose research has been made possible from – that using an LLM, they were able either to do much more research or much deeper research, and they were able to discover something that they would not have discovered otherwise just by the –
And more specifically, the angle here is like this is actually possible today. Like if you think about what investigative journalism, any kind of deep research often involves going through tens of thousands of sources of information and trying to make sense of those. And that's a lot of work, right? That's a lot of trudging through documents.
If you can use an LLM to review every page of 10,000 pages of police abuse reports. to pull out vital details. It doesn't give you the story, but it gives you the leads. It gives you the leads to know, okay, which of these 10,000 reports should I go and spend my boot investigating?
But the thing is, you could do that today, but I feel like the knowledge of how to do that is still not at all distributed. people get, these things are very difficult to use, people get very confused about what they're good at, what they're bad at, like will it just hallucinate details at me, all of that kind of thing.
I think three years is long enough that we can learn to use these things and broadcast that knowledge out effectively to the point that the kinds of reporters who are doing like investigative reporting will be able to confidently use this stuff without any of that fear and doubt over, is it appropriate to use it in this way?
So yeah, this is my sort of optimistic version of we're actually going to know how to use these tools properly, and we're going to be able to use them to take on interesting and notable projects.
That is a great prediction. I love it. I love it. And you've talked about this in the past in terms of just the sheer amount of public records that are out there that an individual just can't go through.
There's just too much to be able to actually get this kind of assistance to be able to quickly take you to things that are to act as a stringer for you and find the leads and allow them to do the traditional journalism. Yeah, I love it.
And on top of that, if you want to do that kind of thing, you need to be able to do data analysis. Today, you still kind of need most of a computer science degree to be a data analyst. That goes away. Like LLMs are so good at helping build out, like they can write SQL queries for you that actually make sense. You know, they can do all of that kind of stuff.
So I think the level of technical ability of non-programmers goes up. And as a result, they can take on problems where normally you'd have had to tap a programmer on the shoulder and get them to come and collaborate with you.
Love it. Absolutely love it. All right, did you have another... That feels like a very utopian three-year. Dare I ask if there's a dystopian three-year?
It's not so much dystopian, but I think we're going to get privacy legislation with teeth in the next three years. Not from the federal government, because I don't expect that government to pass any laws at all, you know, but... But like California, states like California, things like that, because the privacy side of the stuff gets so dark so quickly.
The fact that we've now got universal facial recognition and all of this kind of stuff. And I feel like the legislation there needs to be on the way this stuff is used. In fact, the AI industry itself needs this because the greatest fear people have in working with these things right now is it's going to train the model on my data.
And it doesn't matter what you put in your terms and conditions saying we will train a model on your data. Nobody believes them. The only thing I think that gets that, I think that's where you need legislation even say we are following California bill X, Y, Z. And as a result, we will not be training on your data. At that point, maybe people start trusting it.
And so if I was in a position to do so, I'd be lobbying on behalf of the AI companies for stricter rules on how the privacy stuff works just to help win that trust back.
Yes. I mean, of course, what you're advocating is the sensible thing where someone realizes that, like, actually, this regulation, it is in my interest for this regulation to be done in the right way and to get off the back foot and on the front foot and actually construct something that is reasonable that we can all adhere to. But that common sense feels like it's fleeting.
It's rare, I would say. Right. I think that's a great prediction. When people say, I'm not training on your data, not only does no one believe it, but I've got no way of really knowing if you've trained on my data or not. Maybe the New York Times can figure it out because they can prompt you to regurgitate a story, but it's very hard for me to prove that you've trained on my data.
Exactly. No, but the challenge here, though, is that the tech companies themselves can't know if they're training on your data. Like some data, some log shows up at Google and like one of however many people touched it. Google can't make a trustworthy claim even to itself that they didn't train a model on it.
That's right.
Yeah, right. Yeah.
So, Mike, do you have a three-year? Okay. My three-year is that the hottest VC startup financing sector is manufacturing in three years. And here's the reason. So... So you've built huge companies out of like kind of comparatively piddling industries like retail and advertising. These are a much smaller fraction of GDP than manufacturing.
Manufacturing is one of the few areas where we don't have some straddling tech colossus touching it yet. And also you think about where the AI stuff is. can really strut its stuff, you need a very large number of good but not perfect verdicts, and you need a process that can survive some fraction of them being wrong.
So as everyone said, like buying my vacation airline tickets is a bad example, but like doing sensing on quality control for some widget going off the end of the assembly line is a great example of that, right? Where like the increase in sensing and perception could really strut at stuff.
And, you know, there's also like various like national security issues that might be involved, but I don't even think you need that. I think just making stuff as like a great area to apply AI and one of the few areas that software hasn't totally beaten to the ground yet is why it's going to come back.
All right. Well, we've always discovered that there are many more venture capitalists that think they're interested in hard tech than are actually interested in hard tech. Everyone actually wants to go whaling until they actually learn that it's a three-year voyage to a far-flung ocean. You're likely going to sink. But who knows? I welcome our new whalers.
I think it definitely would be good for the industry, good for us all to have more people manufacturing. Obviously, we very much believe in the physicality of what we're doing. So I like it.
Let me sharpen it a little bit then. For hard tech, I don't necessarily mean that they are building science fiction objects that did not exist before. I mean that they are competing with an overseas factory churning out chunks of steel on something.
Yeah, I think what we have learned is that for even the things that you think are pretty basic, they're actually very, very sophisticated in terms of
uh there's there's a lot that and the uh there's a there's a lot of of art and craft that goes into a lot and yeah but i think it's interesting i mean i think it's the the and mike we we honor your inability to take yes for an answer on that like definitely keep refining that yeah yeah prediction until it sounds outlandish
Adam, what's your... What doom and gloom... What awful thing is going to come back now? What terrible prediction do you have for us now?
I mean, you were likening the intel to the Nazis. So my prediction, maybe a part of that is- I was likening intel to 1933 Germany.
I would like the record to reflect. I mean, there is a difference here. It's a complex melange of political factions. It's a complex melange, exactly. 1933, but this is still Weimar Germany in 1930. Okay, like- That's the spirit.
Um, I predict a chips crisis. So a confluence of things here, uh, shortages may be due to geopolitics, to tariffs, to natural disasters, perhaps, um, to certainly Intel and their, their Weimar, um, leanings or whatever, their inability to execute.
But all of this culminates in chips being incredibly scarce, failures of batches, yield problems maybe not necessarily due to the fabs, but perhaps to the designs. But all of this leading to a real shortage, even more extreme to the point where Only, you know, the chosen few are able to get access to all of the chips that they're interested in.
And this impacts consumers, it impacts all kinds of devices and certainly impacts kind of the kinds of servers and devices that we're used to attaining.
And I like that you left it open to natural disasters. This could be a major slip on the Qishan fault in Taiwan.
Okay. For sure. Or a missile lobbed over or a shipment being destroyed. Yeah. Don't pin me down to the cause, but just that there is a chips crisis. I mean, it could be of our own creation. Could be we jack up tariffs on all this stuff without realizing that we're shooting ourselves in the foot.
without realizing it that I'm in Taiwan. So this will be, and this is a, this is three years. So this is, how long does the crisis go on? What do we, does there?
We'll know when we see it. I mean, it's a crisis by its nature is not like a blip, right? The, the, the fuel shortage of the seventies wasn't, wasn't like a one week affair. I mean, I don't think so, you know, you'll know when you see it.
Look, pal, I'm not telling you how to get out of it. I'm telling you you're going into it. That's my job. My job's done here. Your job's figured.
Adam, in your professional capacity as a CPA, should I be worried about my oxide options if there's going to be a shortage?
No, no, no. We've got strong relationships with AMD, decreasingly with Intel, apparently. But no, we're going to be among the chosen few. We'll get the chips we need. Don't worry, Steve.
Oh, that's excellent.
In fact, it's going to help us. We're going to have a lot of wind in our sails because we're going to be one of the few places that people can get the modern architectures.
Very exciting. I look forward to this catastrophe, I guess. This major slip in Taiwan. Every crisis is an opportunity. Klavnik, do you have a three-year?
Yeah, so I'm turning this into a parlay with my six year from last year. So my three year is some government contracts were going to require a memory safety roadmap included in their procurement process.
Oh, interesting, yeah.
So the government has currently suggested that by next year, software vendors should have one. And so I think the next step after that becomes the, you need to have it whenever the government is procuring software from something. And not necessarily all of it, but a little bit. And so... And that's because my sixth year from last year was C++ is considered a legacy programming language.
And so I think that that step is the thing that really accelerates that occurring.
I got to tell you, between the chip crisis and the requirement that everyone bidding on a federal contract has a memory safety story, Oxide's... I am long Oxide. Oxide's looking really good in this scenario. No, that's great though, Steve. I think that feels very plausible. Steve, do you have a...
Yeah, I mean, this is going to be kind of a lurch from the current topic, but maybe related to our upcoming travel this evening on Spirit Airlines. I think we are headed into an era of optimization, thriftiness, and I think in three years, Fox Rent-A-Car is going to be bigger than Hertz. Yeah.
You know, I have always said that predictions tell us more about the present than they do about the future. And the present that we are in is that you and I are about to get on Spirit Airways to go to CES because there was no other way to get a flight down there. That's right. And so we are in that brief period of time
where we have purchased our ticket on Spirit, but have not yet traveled on Spirit. So as far as we're concerned, the future is all Spirit and Fox, as far as the eye can see.
Well, no, I mean, I think the fact that people are not willing to spend 10x the money for 2x the benefit in AI right now, same thing on Rent-A-Cars. Yeah.
i got pushed into the fox corner you're a long fox all right i mean it's gonna be a household name i feel like this is maybe an intervention but have you rented from fox in the past because i have several times and i feel like i'm still waiting in line like okay we are actually if this is going to turn like i will sit here and defend fox i've read it from fox like 40 times uh okay what do they call their affinity program
He's in Fox club. He's in the Fox club. You hit the premier status. You're the Fox deal. You're a silver Fox. That's it. This is, you know, I, I, this, I, I was in the, the, the super eight MVP club. I was card carrying member of the super eight MVP club. And it was always, always a source of pride. So yeah, I like it. Yeah, long fox. That's definitely a good one.
In terms of my own three years, I've got a couple. One is that the Cybertruck is no longer being manufactured in three years. I think that this thing has got too much headwind and I think will no longer be manufactured. The issues are too deep.
So Brian, I think it's a great prediction and obviously like terrible tragedies with the Cybertruck. And I mean, I think that that also predicts some really entertaining falling out between Musk and the Trump administration. So I love this prediction.
Well, so actually, no, to be clear, it's the Cybertruck is no longer being manufactured. I think it's going to be a commercial flop. I don't think that a regulatory body is necessarily going to do anything, but it would not surprise me if a state regulatory body does something that causes a lot of, I think it wouldn't surprise me at all if California tries to put some regulation in place.
The thing has never been crash tested. I mean, I think the reality is the Cybertruck is operating already without any regulatory regime. So the idea of total absence of regulatory regime is what it was already manufactured in. So that's not going to be a change. I don't think it's going to be insurable. I don't think... I think that you are...
Um, in the, within three years, um, it will be, um, and again, there's a lot of decisions that they have made that are going to be. So where's the cyber cab in that scenario? I, yeah, I think that it'll be, um, uh, it'll be interesting. I think that there are a bunch of mistakes that that will not be repeated. Um, so the cyber truck was, um, That is my three-year prediction.
I promised an Intel Foundry services prediction. So I think in three years, after much tumult, IFS has been spun out of Intel. No commentary whether the co-CEOs are still in charge or not. I can't see that far into the future. Crystal ball is murky on that one. But I think IFS will be spun out. I think that ultimately its future has to be separate. But it does not bear the Intel name.
And it has been purchased for the purchase price of $1 by a deep-pocketed maverick. And I would normally say that this would be a deal brokered by the U.S. government, but I'm really not sure because I do think it's in the next three years, and I'm not sure what the disposition on that is going to be. But I think it's going to be...
Um, you're going to have someone who is perhaps has perhaps has domain expertise, perhaps doesn't, this could be a, maybe it's a, is it a Bezos type or is it a Mark Cuban type? Or is it someone, is it a, is it a, um, TJ Rogers type? It could be, it could be a lot of different kinds of, of folks, but, um, that have basically taken it off of Intel's hands, um, and, uh, change the name.
Um, and whether that's a success or not, I think it's very hard to predict, but. So that is my IFS prediction.
That's a great one.
And then my final three-year prediction, and perhaps I am predicting my heart on this one, Adam. I do think, and I saw someone in the chat saying that we're going to see something, some new product that's totally revolutionary based on AI or LLMs, but where that's not the interface. It's not a chatbot. It's something else. I definitely agree with that. And I think that we could see a
I'm going to tack into my heart on this one, Adam. I think that the state of podcast search right now is absolutely woeful. There are people predicting that are not me that the podcast has a new relevance, that the you know, with the role that the Rogan podcast did or didn't play and the kind of the crumbling of some traditional media that podcasts have a new relevance. I want to believe that.
So I'm not sure if I do believe it or not, or I'm not sure if that's how valid the prediction is, but I definitely want to believe that. And podcast search is absolutely positively atrocious. And I think LLMs could actually do something really interesting here where there is no YouTube because it's RSS. There is no YouTube equivalent for podcasts and things like podcasts.
How do you listen to a podcast? Do you use Spotify? I use Apple Podcasts.
It's not that bad. The lowest hanging fruit of podcast search is you subscribe to all of them, you run all of them through Whisper to get transcripts, you make the transcripts searchable. Presumably, people have started building those things already. It feels like you're sat there waiting for someone to do it.
This is why I think Simon, I think this is why someone will do it because I think that you, and then, but be able to do that much more broadly, because I think that the cost of a podcast is basically zero.
And I am convinced that there's a lot of great stuff out there that I haven't found that I, that I can't find because I'm sitting there on like listen notes or whatever, just being vectored to popular things. And it's like, I don't want popular things. I want interesting things. I want great conversations. And I think LLMs can find that.
So would you, pay for this either with money or with listening to ads? Yeah. Okay.
I would. And I know, and you're right to be like, have a cocked eyebrow on that one. First of all, you're the, I like your prosecutorial tone here, Adam. I think that you're, I, because you, you, you've got me. I mean, that, that, that is like the, the, the, the key question is, you know, would I pay for it? And I would pay for it.
I think that I would pay, I would pay for it because I spend, I think that, you know,
Part of the reason that podcasts are, I think that they're relevant is because it just, you know, we've talked about here before, Adam, the ability to listen while you do something else, while you're walking the dog, while you're washing the dishes, while you're walking or what have you, while you're commuting perhaps. And I think that that is something that's a good fit for kind of where we are.
And I think people want that. I think I would pay for it if it's good. I mean, I'm not going to, it needs to like deliver real value, but if it delivers value, I absolutely would pay for it.
I'm going to check in another... I'm going to check in a pricing observation. Again, Google Gemini 1.5 Flash 8B. These things all have the worst names. I transcribed... I used it just a straight-up transcription of an eight-minute-long audio clip, and it cost 0.08 cents. So less than 10%. Less than 10% to process eight minutes.
And, like, that was just a transcription, but I could absolutely ask questions about, you know, give me the tags of the things they were talking about. The... Analyzing podcasts or audio is now so inexpensive.
That's yeah. And so I think that that would be, and, and, you know, Nick in the chat is saying that Apple podcasts have got searchable transcripts. I'd be curious to check that out. Because I, you know, I, because there are these terms that are like pretty easy to search on. And, but I want to do more than just like searching on terms.
I want like, I want someone to, you know, Mike, this is what you called in your prediction last year about the, you know, the presidential daily brief and, And I want like the presidential daily brief of podcasts. And I want it to be like tied into other aspects of my life.
You know, I want it to, like, this is the kind of thing where you want something to be like, oh, you know, Adam, like you were recommending the acquired episode on Intel from a couple of years ago, right? Like I want someone who's going to pull that content for me when I am like, oh, that's interesting. You know, other people who thought that, that Intel was like 1933 Germany include, you know,
Right. Give me a debate between credible professionals talking about subject X exploring these things. You can't do with full text search, but you can do with weird vibe based search.
That's right. And maybe not even, you don't want the full episode or whatever, but you want something that leads you in, something that gives you the parts that you're interested in or whatever. And obviously you can look for more, but something that's helping to curate that.
Yeah, something that's helping to curate that. Exactly. It's a weird vibes-based search, Simon. I love your, I mean, that's exactly, I want to search on the vibes and not, and so I think that there's going to be, I think that there's a gap there. And Simon, just for, as you say, it's like, boy, that doesn't seem very hard. And I don't think it is very hard.
That's why I think something will fill it.
I'll join you on that prediction. I'd be shocked if in three years' time we didn't have some form of really well-built...
You know, that's funny something you said, because a year ago, I predicted that within three years, we would be using LLMs for search, and search before LLMs would feel antiquated. And man, I was like two blocks ahead of the band on that one, Adam. I feel like now you're like, that was a prediction. Wasn't that just a statement of fact?
It was like, no, no, it was just barely not a statement of fact a year ago. But...
I've got to put a shout out to Google's AI overviews for the most hilariously awful making shit up implementation I've ever seen. The other day I was talking to somebody about the plan for Half Moon Bay to have a gondola from Half Moon Bay over Highway 92 to the Caltrain station and they searched Google for Half Moon Bay gondola and it told them in the AI overview that it existed.
And it doesn't exist. It summarized the story about the plan and turned that into, yes, Half Moon Bay has a gullible system running from Crystal Springs Reservoir. Wow.
I don't know how they screwed that up so badly. And Simon, you call this the gullibility problem, which I think is a very apt description. And I saw this on this past weekend where... the AI-assisted search believes that adjunct professors in a college average $133,000 in salary in Ohio.
And you had people who were just like, I'm actually genuinely concerned that people think like, when I was an adjunct in Ohio, I was living at the poverty line. That is not, and you kind of trace back how it got there, and it got there because of mistaken information that it then treated as authoritative.
So a little closer to home, we searched up how to clean the grout in the tile in our bathroom and got a recommendation from the Google AI summary that turned out to cause massive damage and was a very expensive mess to clean up. So PSA for natural stone folks, like don't use anything acidic. Turns out.
So you're saying that the home repairs LLM search plus I feel lucky could result in devastating consequences.
Turns out you should click through the link and check the source and read the whole thing. Yeah.
That's great. But I like your prediction that you'd be surprised if this doesn't happen within three years.
Honestly, it feels like all of the technology is aligned right now that you could build a really good version of this. And that means inevitably several people are going to try. So we'll see which one bubbles to the top.
And whoever succeeds, they'll say that they took an agent approach. It was agents that allowed them to do it, at least in their pitch deck. All right, are we on to six years now? Are we at kind of six years? Are we ready for... Simon, you ready to take us deep into the future here in your... Yeah, yeah, go on then.
I've got a utopian one and a dystopian one here. So utopian, I'm going to go with the art is going to be amazing. And this is basically generative. I have not seen a single piece of generative art, really, that's been actually interesting. So far, it's been mostly garbage, right?
But I feel like six years is long enough for the genuinely creative people to get over their initial hesitation of using this thing, to poke at it, for it to improve to the point that you can actually guide them. The problem with prompt-driven art right now is that it's rolling the dice and Lord only knows what you've got, what you'll get. You don't get much control over it.
And the example I want to use here is the movie Everything Everywhere All at Once, which did not use AI stuff at all, but the VFX team on that were five people. So I believe some of them were just like following YouTube tutorials, like incredibly talented five, but they pulled off a movie which it won like most of the Oscars that year. You know, that movie is so creative.
It was done on a shoestring budget. The VFX were just five people. Imagine what a team like that could do with the... versions of movie and image generation tools that we'll have in six years' time. I think we're going to see unbelievably wonderful TV and movies made by much smaller teams, much lower budgets, incredible creativity, and that I'm really excited about.
And this is so getting out from the idea of like, okay, this is just like regurgitating art that it's trained on and we're kind of absconding with the copyrighted work of artists and actually beginning to think like, this is actually a tool for artists. We're not actually misappropriating anyone's work, but allowing them to achieve their artistic vision with many fewer people.
I think teams who have a very strong creative vision will have the tools that will let them achieve that vision without spending much money, which matters a lot right now because the entire film industry appears to be still completely collapsing. Netflix destroyed their business model, they've not figured out the new thing, everyone in Hollywood is out of work. It's all diabolical at the moment.
But maybe the dot-com crash back in the 2000s led to a whole bunch of great companies that sort of rose out of the ashes. I'd love to see that happening in the entertainment industry. I'd love to see a new wave of incredibly high-quality, independent film and cinema enabled by a new wave of tools. And I think the tools we have today are not those tools at all.
But I feel like six years is long enough for us to figure out the tools that actually do let that happen.
Yeah. Interesting. And that's exciting. I love it. And so we, we will have art that we could never have before because it was, it was just too expensive to create.
And I'll do the prediction. The prediction is the film that a film will win an Oscar in that year. And that film will have used generative AI tools as part of the production process. And it won't even be a big deal at all. It'll almost be expected. Like nobody will be surprised that a film where one of the tools that it used were based on generative AI was, was an Oscar winner.
I love it. In fact, that's so utopian that this now has me bracing for impact on a potential dystopian.
Okay, I'm going to go straight up Butlerian jihad, right? So all of the dream of these big AI labs, the genuine dream really is AGI. They all talk about it. They all seem to be true believers. I absolutely cannot imagine a world in which
basically all forms of like knowledge work and large amounts of manual work and stuff as well are replaced by automations where the economy functions and people are happy. That just doesn't, I don't see the path to it. Like Sam Altman talks about UBI. This country can't even do universal healthcare. The idea of pulling off UBI in the next six years is a terrible joke.
So if we assume that these people managed to build these artificial superintelligence that can do anything that a human worker could do, that seems horrific to me. And I think that's full-blown butlerian jihad, like set all of the computers on fire and go back to working without them.
So is the prediction, and I'm also trying to square this with the Oscar winner, does the Oscar winner happen right before we set them on fire and go out about them?
These are parallel universes. I don't think anyone's making, nobody's making amazing art when nobody's got a job anymore. There was an amazing, there was a post on Blue Sky the other day where somebody said, what trillion dollar problem is AI trying to solve? It's wages. They're trying to use it to solve having to pay people wages. That's the dystopia for me.
I have no interest in the AI replacing people stuff at all. I'm all about the tools. I love the idea of giving, like the artist example, giving people tools that let them take on more ambitious things and do more stuff. The AGI-ASI thing feels like that's almost dystopia without any further details, you know?
And so in this dystopia, in this parallel universe, so do you believe that we are able to attain the vision that these folks have in terms of AGI and ASI?
I mean, I'm personally not really, no. But you asked me to predict six years in advance. And in this space, the way things are going right now. Who knows, right? So my thing is more that if we achieve AGI and ASI, I think it will go very poorly and everyone will be very, you know, I think there will be massive disruptions. There will be civil unrest.
I think the world will look pretty, pretty shoddy if we do manage to pull that off.
Interesting. I do think that the AGI, this is going to be, and even this year, there's going to be a lot of talk about AGI because of this very strange contract term that OpenAI has with Microsoft.
I think they might get to AGI there. I wouldn't rule against them managing to make, well, it's $100 billion in revenue, and then they've hit AGI, right? That's their...
Supposedly, yeah, that's what the information report is about. They've got different definitions of AGI, and apparently one of them is, if we can generate $100 billion, we've achieved AGI. You're just like, so much what? It's like, what?
What is funny about AGI is OpenAI's structure as a non-profit is that they've got a non-profit board, and the board's only job is to spot when they've got to AGI and then click a button, which means everyone's investments are now worthless. Yes.
But also, like, AGI is $100 billion worth of profit. It's like, you are all, like, this is like the capitalist rapture or whatever. Like, Jesus Christ. I mean, it's so, but I wonder, I wonder if they're going to try to make claims, especially this coming year, of like, no, no, no, we've achieved AGI. No, we've already achieved AGI. Actually, you know what? GPT 3.5 actually is AGI.
Sorry, Microsoft. My dystopian prediction is the version of AGI which just means everyone's out of a job. That sucks. So yeah, that's my dystopian version.
Yeah, that is dystopian. Well, I would take the other side of the likelihood of that, but that is definitely dystopian.
Brian, I do like your suggestion that they just declare victory on GPT-3-5 or something, because there are these moments in chats where I'm sure everyone feels themselves like they're just kind of fancy autocomplete. Like that people have predicted the thing you're about to say. So maybe they just decide that actually general intelligence is mostly just autocomplete anyway.
So mission accomplished. Adam, I love this where they try to rule monger by being like, hey, if you looked around you, like people are pretty dumb, actually. I mean, you're kind of a knucklehead. You forget stuff all the time. You get a lot of stuff wrong. Yeah, we don't call it hallucinations. We just call it you forget for whatever. Yeah, we've achieved that. I mean, is that intelligence?
Yeah, we definitely have achieved that. That's AGI. Mission accomplished. And now, by the way, Microsoft, per our agreement, we open AI. You are not entitled to any of our breakthroughs. Actually, no one had a one- or three-year-related open AI prediction before. I'm not sure if there is an opening eye, but opening eyes. Yeah, go for it, Tommy.
I think in three years' time, I think they are greatly diminished as an influential player in the space. You know, I don't think... It's already happening now, to be honest. Like, six months ago, they were still in the lead. Today... They're in the top sort of four companies, but they don't have that same. They kind of pulled ahead again with the O3 stuff.
But yeah, I don't see them holding on to their position as the leading entity in the whole of this space now.
I don't either. And especially I think if they end up, because I think it's also conceivable that they end up at a time when pre-training is hitting real scaling limits, that they continue to double and triple and quadruple down and end up with just a, because they are operating at a massive, massive, massive loss right now.
And I kind of think that if they, it'd be kind of interesting if, you know, I wonder if OpenAI will start to tell you like, hey, by the way, yeah, I know you paid us 20 bucks a month. By the way, your compute cost us $85 last month.
It cost, I mean, it'd be kind of interesting if they begin to tell you, because I feel that if we move to... Sam Altman said on the record the other day that they're losing money on the $200 a month plans they've got for O1 Pro. Easy. I don't know if I believe him or not, but that's what he said, you know?
Is that because, okay, so honest question, is that because that $200 a month, and that's the O1 Pro, Simon?
It gives you unlimited O1, I think, or mostly unlimited O1. It gives you access to O1 Pro. It gives you Sora as well. And I think the indication he was giving was that the people who are paying for it are using it so heavily that they're blowing through the amount of money.
This is what it's like movie pass for compute. We're like the only people that actually movie pass being this.
I'm glad that you know that you need to explain that. I'm glad that you know that movie pass is not like the, the Harvard business case study that everybody knows.
And I'm glad that you agree that it should be. I like your implicit judgment of others that it needs explanation. But MoviePass was this idea that sounds great, that like, oh, no, we'll charge you 30 bucks a month. You can go see as many movies as you want. But as it turns out, the people that are most interested in that want to go see a movie every night at a movie theater.
And they were literally losing money on every transaction. It was just like no way to make it work.
The way they implemented it, they just gave their members a credit card to go to the cinema with.
There was a time when Cosmo.com was the canonical example of a... You'll loss on every single transaction. And at some point it was replaced by... by MoviePass, maybe the $200 OpenAI product is now gonna push MoviePass into the dustbin of history and take its rightful place as the product that loses money on every transaction.
I can't believe that Adam has to like blow the whistle on movie pass, but you're able to walk right past Cosmo.com and Adam's got no problem with it. Cosmo.com like a, the, I mean, but this is a famously a.com. This was a real artifact of the.com bubble.
And Mike, as I recall, it was like when people were having a Snickers bar delivered and little did we know we would be, it was our teenagers would be door dashing a Snickers bar some 20 years later, but yeah.
You could basically DoorDash a Snickers bar for zero delivery cost.
That's right.
If only they were advanced enough to describe the taxi for your Snickers bar, then it would have been more successful and turned into DoorDash for real.
That's right. It was just ahead of its time as it turns out.
I'll say one more thing about OpenAI. They've lost so much talent. They keep on losing top researchers because if you're a top researcher at AI, a VC will give you $100 million for your own thing. And they seem to have a retention problem. They've lost a lot of the... My favorite fact about Anthropic, the company that they've clawed, they were formed by OpenAI Splinter Group.
who split off, it turns out, because they tried to get Sam Altman fired a year before that other incident where everyone tried to get Sam Altman fired, and that failed, and so they left and started Anthropic. Like, that seems to be a running pattern for that company now.
All right, Simon, I'm going to put a parlay on your three-year prediction. I think someone wins the Pulitzer for using an LLM to tell the true story of what happened at OpenAI and the boardroom fight. I mean, there's clearly a story that has not been told there. There's clearly rampant mismanagement. That boardroom fight, I feel like we kind of got the surface of that.
There's a lot going on underneath, clearly. And I look forward to the Pulitzer Prize-winning journalist who's able to use an LLM to tell the whole story. Nice. Mike, do you have a six-year?
All right. My six-year, which I think is optimistic, a lot like Simon's, is the first gene therapy that uses a DNA sequence suggested by an LLM is actually deployed at least in a research hospital, maybe not wild. That is, yeah. Like a CGTA sequence that came from the model goes into a human body.
And Mike, how well informed is that prediction?
you know, I would say that my rough reading of the models that have been designed for genetic sequence prediction is that like, they're able to achieve kind of remarkable things. I, I, I'm in particular thinking of this Evo model that was released kind of early in 24. I don't know if Simon or others are familiar with this thing. Um,
To me, they do this experiment in that model, which is really jaw-dropping. Okay, so the core technical idea here is that the model architecture's a little bit different, because when you're predicting genetic sequences, the alphabet is small, but the sequences are much longer than in natural language, right? So the model architecture's a little bit different.
But the experiment that they performed that was really stunning to me was the following. So imagine you have a genetic sequence, and this was just in single-celled organisms. They're not doing this on mammals or anything. Imagine you have a genetic sequence and you intentionally mutate it. So you've got a bunch of different versions of that sequence.
And then you try to evaluate its fitness in two different ways. One is that you try to grow it in the lab and see how much it grows. The other is that you look at the probability of that sequence as evaluated by one of these trained models. And now let's imagine you take all of those sequences and you sort them according to those two scores.
You sort them according to the observed fitness in the lab, like when you try to grow it in a petri dish, and you also sort it according to the inverse of the probability, meaning like High probable strings go on the top, low probability strings go on the bottom. And what's stunning is that those two sort orders are remarkably highly correlated.
So like the ability to just stare at a genetic sequence and actually say something with maybe some predictive accuracy about its real world fitness to me is just absolutely stunning.
Amazing. Amazing. And that would be, I mean, it's part of the reason I asked, because I mean, I obviously want this to be, I very much want this prediction to be true. And I think that this is, now, I mean, just like Simon's prediction about revolutionizing art. I mean, I feel that there is just, and to our How Life Works episode with Greg Cost, Adam, earlier in the year.
And, you know, there is so much that we still don't understand. And boy, the ability to Allow the computer's ability to sift through data or generate data, test data, allowing that to allow for new gene sequences or gene therapies, Mike, would be amazing. So I love it. That's great. For whatever reason, I feel like there's not a dystopian one on the other side of this one, but maybe there is.
Let's see. Could I come up with one? That's right. I mean, it feels like an easy parlay from where you got there. I don't know if the following is optimistic or pessimistic. The PlayStation 6 is the last PlayStation. There's never a PlayStation. Ooh.
I like this one.
I love that we're going kind of like wall to wall on like, you know, this revolutionary gene therapy, you know, saves the lives of millions. And then also, by the way, some more PlayStation 6. Oh, my God.
Do you think the PlayStation 5 will have a double-digit number of games by the time they come out with the PlayStation 6?
Oh, that's spicy. Yeah, they'll probably get to two digits, yes.
And Adam, I think you're, do you have a six year?
I do, yes. My six year is that, and this is from the deep ignorance I hold, that AI will mostly not be done on GPUs. But we'll have more specific hardware tailored, potentially even tailored for models. It becomes much more economical and there are many more players. And in particular, we mentioned CUDA earlier, like it's not driven by CUDA or Rackham or some of these existing platforms.
So we've got something that is completely new that is, and maybe it's some of these new, I mean, we've got a bunch of folks, a bunch of companies that are looking at new silicon or new abstractions, but one of these gets traction in the next six years.
Yeah, I mean, maybe I should stop while I'm ahead, but I think even multiple of them do. That it is not a single company having a good insight, but rather many folks, maybe even incumbent players, maybe even existing GPU manufacturers, but building things that really don't look like GPUs, that increasingly don't look like GPUs.
And most of that, both training and inference, happens outside of the domain of GPUs.
So something positive came from the great chips crisis of 2027, which is actually a relief.
Well, and also Intel spitting out the foundry and this rogue entrepreneur buying it for a dollar, or perhaps the US government taking ownership of it. Yes, but all of those things have resulted in this diversity of silicon.
I like it. Klabnik, do you have a six-year?
Yeah, so my sixth year is... So basically, AI is not going to be the hot thing. And what I mean by this is the same way we started this episode, in many ways, talking about how Web3 was the thing everybody talked about the whole time, and so we had to can it. It's pretty clear that I think Web3 gave way to AI now being the cool technology du jour that a ton of money gets thrown into.
And so I'm not saying AI won't exist or won't be useful or whatever, But the cycle will have finally happened where some other thing becomes the thing that you just get a blank check for having a vague association of an idea of what a company might do in that space.
I would like to say that VR is very upset that it doesn't even merit a hype bubble. It's like, yo, I was a hype bubble. Facebook renamed themselves for me. It's like, no, sorry, VR. You don't even merit Steve's shortlist straight from Web3. Okay.
So you are asking the comments like, what's the other thing? And I'm explicitly not, I have no idea. I am not a fashion predictor. I have no idea what will be the next thing. Just that something will be.
that uh we will i you know i feel we also had a three-year prediction in uh 2022 that that we will have moved on to it there will be a new hype boom and maybe it was for a six year and i i when i was listening to the time anyway i'm like oh my god we we uh we didn't realize that it was going to be ai that was going to be that that next boom um all right uh steve do you have a
Do you have a six-year? I did. It wasn't very bold, though, and got taken in the three years because it was Intel out of the foundry business. No, I honor your foundry business. I had them out of the foundry business. In six years. So you think it's going to take a while. And then small enough to be acquired in that same six-year period.
Okay, so I've got a couple questions for you. One thing I was thinking about in terms of an Intel that's split up, who is left with the Intel name? Does anyone want the Intel name or has the brand been so tarnished at this point that they all give themselves chat GPT suggested names to avoid calling themselves Intel? Yeah, that's a good question.
I think AMD buys it and puts it in their down market brand.
No. It's like, I'll take Oracle over AMD. Oracle buying the design side now. Yes. So not the foundry side. That's right. Oracle buying the design side. Over AMD. What do you think happens to Hibana? Because I actually did wonder, you know, we had talked about this a couple episodes ago, whether I wondered if Meta or Microsoft or someone else would actually try to buy Hibana.
I actually think that like... I don't think so. I don't think so either. I think that you kind of like go deep into it and you're like, I think I'd rather actually buy it. I'd rather put the money into GPUs.
Yeah, I think it's like, I think there's going to be like this kind of process of like, God, if the number of GPUs we're talking about, we could just buy Hibana and then someone will do some deal and decide to be like, actually go back, go buy the GPUs. Actually, the GPUs don't have a culture problem, actually.
Yeah, I don't think anyone buys that. Just staying on brand in transportation, if I had to come up with another different six-year, and this is colored by a bunch of conversations over the holidays with a bunch of extended family members that live in different cities that have traveled via Waymo.
I was going to ask, does Fox do a self-driving taxi?
That's a 12-year. I think Waymo will be a more common means of transportation than Uber and Lyft in six years.
That feels like that might be a three-year or even a one-year. I agree.
It's not very...
Yeah, for sure.
I've never traveled in one, but hearing the descriptions of folks that have, now you have to understand the pricing is extremely subsidized right now.
It is, but I also think that Waymo has really, and I really try to encourage those folks to talk more publicly about some of the engineering discipline they've had because they've done a lot of things the right way in contrast to a bunch of these other folks that have come in and burned out on self-driving taxis. There's real, real engineering there.
I'm going to have to rave about Waymo for a moment because if you're in San Francisco, it is the best tourist attraction in the city is an $11 Waymo ride. It's ultimate living in the future. My wife's parents were visiting and we did the thing where you book a Waymo and don't tell them that it's going to be a Waymo.
And so you just go, oh, here's our car to take us to lunch and the self-driving car.
Yeah. And what I've heard from folks who have done that is like everyone that's in there is like, this is obviously the future. It just feels like.
The Waymo moment is you sit in a Waymo and for the first two minutes, you're terrified and you're hyper vision looking at everything. And after about five minutes, you've forgotten. You're just relaxed and enjoying the fact that it's not swearing at people and swerving across lanes and driving incredibly slowly and incredibly safely. Yeah, no, I'm impressed by them.
When I got to tell you again, I got, I got the privilege of watching a presentation from one of their engineering leaders on the kind of their approach to things. And sometimes, you know, you kind of look behind the curtain and you're like, Oh my God, it's all being delivered out of someone's home directory. But in this case, it was really, really impressive about what they've done.
And I think that they've really taken a kind of a very deliberate approach that deliberately. So I, you know, I absolutely agree with you, Steve. So you think, yeah, I think, and so Ian, that was your, yeah, Ian, you said that was your six-year prediction. Now that we've got you on stage, what's your one and three in addition to any six-year prediction that Steve didn't hoover up?
Yeah, my six-year was slightly less optimistic than Steve because I said Waymo overtakes Uber in rider miles per day, so I didn't lump lift in to hedge my bets a little bit. My one-year prediction... I had two. One was open AI pricing or usage limit changes to prevent losing money on power users of their current flat monthly pricing schemes, which I think has already been discussed.
The other I had was a ban on new sales of TP-Link routers in the USA one year.
Okay, so let's take those one at a time. So on the open AI, so is that a one-year or a three-year on the open AI prediction?
That's a one year. I think that they're kind of still experimenting with pricing. And it's very clear that they set the pricing based on Sam Altman trying two different price points and being like, yep, that'll do. And they hadn't really run the numbers or seen how the users actually utilize the product.
And I think that they may keep this current pricing schemes, but just put a kind of usage cap, at which point you have to start paying for additional credits, sort of like, I don't know how audio books work on Spotify, where you can run out of minutes within a month and have to buy more. I think the same will happen for like GPT-
where power users are currently spending more compute than they're bringing in in revenue. So it doesn't make financial sense for them to continue to set money on fire at that kind of scale.
someone in the chat asking who should ask chat GPT, how much it should cost, which I just love the idea of like asking, like then the different, like Oh three and Oh one there, the Oh one pro and have that thing like really grind on it, generating all, you know, thousands of hidden intermediate tokens to ask how much it should cost.
See if it's, see if it thinks it should cost itself less or more, even actually that query costs more. Does it feel it should cost less or more? Um, the, um, Okay, so that is your one. I agree with that. I think they're going to have to do something in that regard.
And I'll be very interested to see, I'm curious about what my usage, because I think I'm not a power user, so I would be curious about where my own usage kind of pens out there. And then, what is the TP-Link prediction, Ian?
Yeah, this is a ban on new sales of TP-Link router hardware within the USA.
Interesting.
Okay.
TP-Link getting the Huawei treatment, in other words.
Correct, yeah. There has been some pretty recent news stories about this, and I feel like the incoming administration kind of stance on Chinese companies is going to be potentially even more restrictive than the outgoing administration. So I feel like the stage is set for this to happen. And that plus the kind of...
network level intrusion into some of the large scale telecommunication companies has kind of heightened fears of large scale intrusion into the network stack within homes. So I feel like there's a few like things that are kind of pointing in that direction. Yeah.
So that, and you feel that that that's within a year. And then what, what was your, do you have any other six years other than the Waymo? And, and so you've got Waymo exceeding Uber rides. Is that right?
Yeah, in rider miles per day, which means that it may be that they're not in all the cities that Uber is, but they drastically out-compete Uber in the cities that they are present in.
So this means that they don't necessarily have to get... They need a large-scale deployment, obviously, but I think that they will massively out-compete Uber in any market that they're in because the product is superior.
Yeah. Interesting. Um, Simon, I can probably give my own six years. I got, I got a question for you because we did, uh, in 2022, uh, we had Steven O'Grady on, um, and he had some pretty, uh, he had some like pretty dark open source predictions for, for six years. Um, and then I think, uh, he's probably on track to not be totally wrong about it anyway.
I mean, I don't think open source is like, I think we've kind of tracked, we have tracked negatively on open source for sure. As we've seen more and more relicensing and so on. Um, What is your view on kind of where open weights are tracking? Because it feels like that's just been positive in the last year. We've got more and more. I mean, I think Llama 3 has been extraordinary.
We've got a bunch of these things that are open weights. What's your view on what the trajectory is there for six years for open models?
That's a really interesting question. I mean, the big problem here is that what is the financial incentive to release an open model? You know, at the moment, it's all about effectively, like, you can use it to establish yourself as a force within the AI industry, and that's worth blowing some money on, but...
At what point do people want to get a return on their millions of dollars of training costs that they're using to release these models? Yeah, I don't know. Some of the models are actually real open source licensed now. I think the Microsoft Fi models are MIT licensed. At least some of the Qen models from China are under Apache 2 license.
So we've actually got real open source licenses being used at least for the weights. The other really interesting thing is the underlying training data. The criticism of these AI models has always been, how can it even pull itself open source if you can't get at the source code, which is the training data? And because the source code is all ripped off, you can't slap an Apache license on that.
That just doesn't work.
um there is at least one significant model now where the training data is at least open as in you can download a copy of the training data it includes stuff from the common crawl so it's includes a bunch of copyrighted websites that they've scraped but um but that has but there is at least one model now that has completely transparent licensing um transparent transparency on the training data itself which is it's good you know um
One of the other things that I've been tracking is, I love this idea of a vegan model, an LLM, which really was trained entirely on openly licensed material, such that all of the holdouts on ethical grounds over the training, which is a position I fully respect. If you're going to look at these things and say, I'm not using them, I don't agree with the ethics of how they were trained,
That's a perfectly rational decision for you to make. I want those people to be able to use this technology. So actually, one of my potential guesses for the next year was I think we will get to see a vegan model released. Somebody will put out an openly licensed model that was trained entirely on licensed or public domain work. I think when that happens, it will be a complete flop.
I think what will happen is it won't be as good as the... It'll be notably not as useful. But more importantly, I think a lot of the holdouts will reject it because we've already seen this. People saying, no, it's got GPL code in it. The GPL says that you have to attribute the... There's attribution requirements not being met, which is entirely true. That is, again, a rational position to take.
But I think that... It's both true and it makes sense to me, but it's also a case of moving the goalposts. So I think what would happen with a vegan model is the people who it was aimed at will find reasons not to use it. And I'm not going to say those are bad reasons, but I think that will happen.
In the meantime, it's just not going to be very good because it won't know anything about modern culture or anything where it would have had to ripped off a newspaper article to learn about something that happened.
Look, we all know folks who are vegans who also eat bacon. It's like, what is... Okay. You're a vegan unless it's really delicious, I guess. Okay.
I mean... I love the LLM that's all Steamboat Willie references and public domain songs and stuff.
Well, this is our end, you know, saying talk started and, you know, this is our kind of like the Abraham Simpson kind of the Mr. Smithers isms. The I definitely love the idea of the old timey model. that is all, all public domain work. And it may also be interesting. I mean, maybe those will get better and better as more and more stuff enters the public domain.
Cause we are on the cusp of a lot of stuff now entering the public domain. Um, as we are what at 1929, I think, or are we, um, and so we've obviously got like, uh, um, Hey, you know, 1933, Germany, it wasn't just only a couple of years away. Yeah. Um, you'll be entering the public domain. All right. So the, uh, in terms of my own, uh, my own six year predictions.
Um, so I, I, I'm really glad again, we've recorded these Adam because I, I had a prediction that I was really like, I felt was a really great prediction. Whereas I basically made the same prediction last year. So I'm going to restate this prediction. I'm going to, I'm going to tweak it just a tad. Um,
I think that I've been wondering about, you know, where are the, I think LLMs are going to completely revolutionize some domains. And I've been trying to think about like some of the, and certainly software engineering has been, is being revolutionized, has been revolutionized. I think that another one, and Simon, I agree with you and with Mike about letting people do more.
I've always believed that like, that's the real revolution here is not actually having people, putting people out of work. It's about allowing people to do more of their job that they couldn't do previously. And I watch my own kids with respect to LLMs.
And, you know, right now, at least at like, you know, I've got a kid in, I've got a kid in college and in high school and in middle school and at the high school and the middle school, you know, their, their AI policy is basically like abstinence, right? Um, you basically can't use it at all. And I think that that's nonsensical and they, the kids think it's nonsensical.
And whenever they are kind of doing intellectual endeavor outside of school, they are using, uh, they're, they're using LLMs in a great way to like, you know, we are using it to, you know, learn more about a sports figure or learn more about doing the things that kids do, right. Troll next door, troll next door. Exactly. Um,
And I continue to believe, so my prediction last year was a six-year prediction that K-8 education was going to be revolutionized. I actually think it is 9 through 12 education that's going to be more revolutionized by LLMs.
And I think when we begin to tack into this and we stop viewing it as just cheating, and how can we do, I think it's going to be a lot more in-class assessment, which I think is going to be a good thing. Um, but I think, you know, it's like, you know, you remember Quizlet from back in the day and like chat GPT has absolutely replaced Quizlet.
My, my, my, my senior in high school needs to study for an exam. He sits down with chat GPT and has chat GPT help him study. Um, and he goes and sets the exam. I mean, he's not, he's not, you know, he's not, he's using it to actually like, you know, God forbid learn. Um, and I, and I think that there is a, I think we can do a lot more. Um, I think especially in secondary education. So, um,
I'm very sold on that with one sort of edge case. And that's the thing about writing. The most tedious part of learning is learning to write essays. That's the thing that people cheat on. And that's the thing where I don't see how you learn those writing skills without the miserable slog, without the tedium.
And so that's the one part of education I'm most nervous about is how do people learn the tedious slog of writing when they've got this tempting devil on their shoulder that will just write it for them.
Well, so here's what I think. I think that one, ChatGPT is a great editor. And maybe it's a little too great because ChatGPT tends to praise my work to me when I... Because my wife had decided that she's no longer interested in reading drafts of my blog entries, which understandably, they're a little arcane. So I just have like, you know what? Actually, I'll just have ChatGPT read it.
And it's interesting. ChatGPT... Again, I'm probably a sucker for it's like, this is a very interesting blog entry. I think you are writing on a very important topic. So I'm like, you know, I'm glad someone around here gets the importance of what I'm doing. But it gives me good feedback and it asks like, do you want me to give you like deeper feedback? What kind of feedback do you want?
And I'm able to guide it to give me, and so I actually, it does what my mother used to do with my papers when I was in high school. I think that's really valuable. I think you got to get out from like, you're going to, it's going to write it for you.
I would personally, if I were in high school, I would have, I think an interesting experiment to do would be like, no, what you're going to, I want you to write on this topic. I want you to write a great essay on it. These chat GPT use chat GPT, like do whatever you need to. If you just have chat GPT spit out an answer, it's going to be like copying the Wikipedia article.
It's probably not going to be, you know, and actually ask people to do more with their writing. And then I would have them read it aloud. And because I think that you, I mean, it's really interesting to have people read their own work aloud. If you suspect a kid, by the way, is use chat GPT to write something, have them read it aloud and it will become very obvious.
whether it's their own work or not.
My son's high school English teacher, his senior year, last year, had them do all their writing, pen and paper, in class. So she was like, not an issue for us.
that's what we're doing in the high school as well. And I think that's good too. I mean, that's like the, I think that that's, but I also feel that you're also missing a really important part of writing is revising.
And that teacher also was the teacher who didn't hand back assignments for weeks and weeks and weeks. So the kids weren't getting feedback. So you're right that like chat GPT is a way to get that feedback instantaneously where otherwise it's, you know, you may never be able to improve because you're not getting that feedback.
Well, I think it'd be interesting to have a class with like a, a LLM maximalist high school English class where it's like, Hey class, you're going to use LLMs to write. And I'm going to use LLMs to grade by the way. And we're all, that's not going to be an excuse for us not using our brains. We're going to really use these things as tools. Yeah.
I will say one thing about LLMs for feedback. They can't do spell checking. I only noticed this recently. Claude, amazing model, it can't spot spelling mistakes. If I ask it for spell checking, it hallucinates words that I didn't misspell, and it misses the words that I did. And it's because of the tokenization, presumably. But that was a bit of a surprise. It's like, it's a language model.
You would have thought that spelling, spell checking would work. Anything they output is spelled correctly, but they actually have difficulty spelling spelling mistakes, which I thought was interesting.
That is really interesting, Simon. That didn't even occur to me. I tend to do a heavy review of my own before I give it to GPT, but then I would notice that that's kind of strange that it didn't notice this kind of grievous error. I got to say, actually, I don't want to speak about him in the third person because he's in the room, but man, Steve Tuck does a very close read on things.
very like you were able to channel, I think your own mother, when you do a read on things where I, I, I've handed you things that I have like reviewed a lot on my own and you find things that I, that, that I, and the many other people have missed, including early age. Yeah, it did. Exactly. Um, But that's really interesting, Simon, that I can't capture.
Because I have found that it doesn't necessarily find errors. The thing that it finds are kind of structural. The things it'll say is, I think you need a transition sentence here. And it will be like, I have been thinking to myself, I need a transition sentence here. And then it will make a suggestion that is terrible that I discard. Of course.
I ask it to look for logical inconsistencies or, you know, points that I made and can go back to and that is great for, but it's another one of those things where it's all about the prompting. You have to, it's quite difficult to come up with a really good prompt for the proofreading that it does. I'd love to see more people share their proofreading prompts that work.
Yes, absolutely. And the other thing I have done is, I mean, the Notebook LM podcast manufacturing, I think, is so mesmerizingly good.
I love that thing. Yeah.
I mean, I know it's using tricks on me of like, you know, it's the ums, the ahs, and the laughing at their own jokes. But man, I just fall right for it. I just think it's just, that is insanely good. And I think it's kind of interesting.
I'll use that for proofreading. Yeah, I dumped my blog entries into that and I'm like, hey, do a podcast about this. And then you can tell which bits of the message came through. And that's kind of interesting. The other thing that's fun about that is you can give it custom instructions. So I say things like, you're banana slugs. Read this essay.
And they discuss it from the perspective of banana slugs and how it will affect your society. And they just go all in. And it is pricelessly funny.
That is amazing. Meanwhile, someone at Notebook LM is like, I told you we cannot have all-you-can-eat compute. We've got to start charging for this thing. The guy spent $1,200 in compute having the banana slugs offer their perspective on the pelicans. But that is great, Simon. I will say I've got one other six-year prediction. I think that post-secondary degrees...
in computer science and, and related disciplines, information science, and so on, uh, go into absolute freefall. And in six years they are below. So I don't know if folks are unaware, but the, the, the degrees in computer science have skyrocketed in the last, even in the last seven, eight years. And we're talking like factors of three higher. Um, and, um,
Adam, you've been on the pointy end of this with a kid who's interested in computer science and having everything be oversubscribed everywhere. I think a whole bunch of factors are going to come together And I think CS degrees are going to be way off the mark.
I think that there have been some folks, I mean, Adam, not your son, but there have been plenty of people who've done computer science because mom and dad have told me this is what I need to go do to get work. This is not something that's in my heart. And I've always felt that that's kind of cruel to the folks for whom it is in their heart. they're at a disadvantage.
And, um, and I think that that is, so I think there will be some good things that come out of it, um, because it's not going to be a lock on the post undergraduate education or employment. Um, and I, I just, I think that it's going to fall, I think in six years, it's going to be below $70,000 a year. That only puts it actually back to 2015 levels.
I think it could actually fall a lot further than that. The reason that's a six-year prediction is because there's a four-year lag. I think people are going to realize that... This is if there's a job that's going to be where LOM-based automation is going to really affect the kind of the demand for full-time folks in this, it's going to be computer science.
And I think also to put an optimistic spin on it, Simon, you said this earlier. But people are going to realize, wait a minute, I don't need to get a degree in computer science. I actually want to be a journalist. I can actually take some computer science courses and then use this stuff to generate these things to get the rest of the way there to use this as a tool to do my other work.
My ultimate utopian version of this is it means that regular human beings can automate things in their lives with computers, which they can't do right now. Blowing that open feels like such an absolute win for our species. And we're most of the way there. We need to figure out what the tools and UIs on top of LLMs look like that let regular human beings automate things in their lives.
We're going to crack that, and it's going to be fantastic.
Yeah, and I would say that I think Linux audio is still a hill on a distant horizon. I don't have the guts to make that a six-year prediction, but I did use ChatGPT to resolve a Linux printing issue the other day, and that felt like the future is here. The future is now. I can actually use... ChatGPT gave me some very good things to go do, and ultimately it worked.
I got this goddamn thing printing, but it was pretty frustrating.
I use FFmpeg. I use FFmpeg multiple times. Oh, God, yeah.
Yes. Yeah, it's great. I will never again read an FFmpeg. I have read an FFmpeg manual for the last time in my life. I will only generate FFmpeg invocations with ChatGPT. There's no way. I'm not going to sully myself with it anymore. All right, well, that's a good roundup.
Brian, I would say that I asked ChatGPT to evaluate my predictions because you don't want them to be too obvious or whatever. And I think that this applies for everyone. It told me none of these predictions are obviously wrong yet, and they all fall within reasonable expectations for their timeframes. So I think we can take that one to the bank.
The dreaded neutral zone. What is it that makes an LLM go neutral? Well, that's good. I think that's good. I guess the M in LLM stands for milk toast, apparently. Mike, you probably use this term regularly, but I think Adam and I both heard it for the first time last year from you. It's a very norm core answer. Yeah. Sorry, say that again, Brian?
You described ChatGPT or LLMs as being very good to give you a norm core answer to any problem. So I think we've got the very norm core interpretation of our predictions. All right, any last predictions from anybody? Yeah.
I've got one last three-year prediction. On the three-year, I predict that Apple's XServe line returns and Apple sells server hardware again.
Okay. Wow. That's exciting. That's exciting. We got a XServe, the return of XServe from Ian, who again, your six-year prediction in 23 is pretty good. So we can take that one seriously. So, all right. The return of XServe. Rackscale compute from our friends at Apple, perhaps. Don't worry. With the coming chips crisis, we're sitting pretty here at Oxide.
Yeah, I mean, it's unlikely that they're doing rackscale design. And I feel like Oxide is still going to have a pretty attractive niche in that market. It's just I feel like Apple were definitely developing hardware to be able to do the private cloud compute stuff and they're not racking Mac Pros.
And I feel like it's unlikely that they're going to not sell that hardware in addition to making it for their internal usage.
Well, I think it's just a little too logical, I think, for Apple. I agree on the logic, but... We shall see, but a good three-year prediction. Folks in the chat definitely want to get your predictions. Adam, should the folks put out PRs against the show notes for their predictions? That would be awesome. Please give us some PRs, get your predictions in there, and looking forward to a great 2025.
Simon, thank you so much, especially for joining us, and really love the conversation we had with you A year ago, and it's been great to keep up on your stuff. Again, you continue to really, I think, serve the practitioner and I think the broader industry by really capturing what is possible with what is improbable. So I'll be thinking of you anytime anyone mentions agents over the next year.
I'm just going to be like, anytime there are conflicting definitions of agents, I will be thinking of you.
Excellent. Thanks for having me. This has been really fun. All right. Thanks, everyone. Happy New Year.