
How About Tomorrow?
How Dangerous Can AI Get, Dax is Down on DeepSeek, and AI First App Development
Mon, 03 Feb 2025
Dax is finally warming up in frigid Florida, AI isn't as dangerous as everyone thinks it is, DeepSeek is full of holes, Adam's concerned about techno signatures, and what does it mean to have AI first app development?

Links:
Lex Fridman Podcast
Anthropic
dax pranked by claude
Granola
Superhuman Email Productivity
Dax's Remote Dev Setup video
Bare Metal & Servers
9950x VPS Miami
Cockpit Project
Tailscale
thdxr/environment
tmux/tmux Wiki
Neovim
Syncthing
AI Code Editor
The editor for what's next
Bun JavaScript Runtime

Sponsor: Terminal now offers a monthly box called Cron.

Want to carry on the conversation? Join us in Discord. Or send us an email at [email protected]

(00:00) - Shocking if true!
(00:28) - Going for a walk without pants on
(02:22) - What are the threats of AI to the world?
(17:15) - Dax on DeepSeek
(27:49) - Dropping Claude and assessing AI
(36:31) - Techno signature follow up
(42:39) - AI first app development
(55:35) - Dax's remote set up video walk through

★ Support this podcast ★
You're sick of it. This is our last episode ever. We're not going to do this podcast anymore. Adam doesn't want to talk to me.
It was just on the news. I just read about a plane crash, and that's not good. Yeah, I saw it last night, and I'm getting on a plane tomorrow, so really bad timing. Oh, man. Yeah. I guess two things collided in the air. I always worry about that. You always think, like... I don't know. Could something just run into the side of us?
'Cause they didn't know we were here, and they didn't look at the radar or whatever. Sonar. I don't know.
Yeah. And the situation, it was kind of crazy. It was right as a plane was landing and it was a Blackhawk helicopter that was like in the air, right over the ground in the airport.
Anyway, that's a damper to start out with. How are you?
I'm good. It's finally kind of warming up again. Like I went outside with no pants on today, which is good, but I'm still wearing, you know,
a long-sleeve shirt. Yeah, I went for a walk this morning, outside at 5:30 in the morning, because it was 50 degrees here, which is amazing. It's been so cold, and it should not be 50 before 6 a.m. That's unusual. Going outside when it's 50 degrees is, uh, it's really dangerous. You should have seen how I was dressed. Probably lighter than you're dressed right now.
Well, I'm going to Boston, so I have to like go and pack. Oh, no. All my like just heavy clothes from New York and my like mountaineering jacket. That's funny.
What are you doing in Boston?
Liz's friend is having an engagement party. So we're going for that. And then I'm going to visit AJ while I'm there. Nice. We're going to hang out on Friday.
That's awesome. Yeah. Tell him I said hi. Or if he's a listener. Hi, AJ. I'll just bypass Dax. He's a terrible middleman.
He is a listener. I'm sure he'll hear this.
Yeah.
Not until after we hang out. That's true.
Uh, I've been listening to a lot of stuff. Uh, we don't want to talk about AI more. Do we?
I just listened to... Talk about whatever you want. Good.
Okay. Whatever. I was just listening to, uh... Oh my God. Uh, just, sorry. This reminded me of this stupid show Casey and I have been watching on Netflix called Later Daters. And it's, like, these 50 to 60 plus... Yeah, it's these older people, 50s and 60s, dating: divorcees, widowers, et cetera. And there's this woman... they have, like, a dating coach
who seems to know stuff about relationships. And she encourages this lady to like open up conversations, like break the ice by talking about like a podcast you just listened to. But like the woman didn't quite understand. She doesn't seem to grasp the idea that you have to like actually talk about the podcast.
And she would just open all her dates with, so I was listening to this podcast with Matthew McConaughey. And that's all she would say, like that line. And like, you have to keep going. Yeah. It just made me think of it, though, when I said, I was listening to this podcast. I just wanted to break the ice with you. Yeah, pause. That's all. I was listening to the podcast.
I've been listening to all of Lex's stuff because Prime's going to be on there and I just forgot I liked his podcast. So I was listening to like some of his back catalog and he had Anthropic CEO on was super interesting.
The Anthropic CEO seems solid. Like I don't get a sketchy vibe from him. He was always trying to like, I feel like he's trying to be really practical with how he talks about all this stuff, which is pretty different for most people in this space. So yeah.
Yeah, it was very illuminating. I'm not going to try and regurgitate it because it won't be as illuminating coming out of my mouth. But you should go listen to it. He's clearly very focused on safety. And it's just fun to listen to people building these things, running companies, building these models, talk about...
the risks and, like, the future and how it could play out. Because I always hear people talk about AI safety, and it's like, yeah, well, I don't know, it's all kind of vague and fuzzy. But he, like, talks about the specific categories of threat that they pose and, like, how to kind of mitigate those things. It's just super interesting. What is one example that you remember? Because I don't know anything about this.
Yeah, so he has, like... I can't remember the name of this. I think Anthropic came up with this system for categorizing the different levels of, like, threat that these models pose to society. Like, level two is that state actors could use it to further their goals. Level three is, like, normal people could use it to, like, cause harm to humanity.
And level four is that the AI itself, along with humans, is actually a threat. So, like... the AI can take its own actions, even like circumvent things. Like he talked about, they have to worry about, you know, they have these benchmarks or these tests that they do for safety to make sure that the model can't do certain things, like can't tell people how to make smallpox or whatever.
So they have these tests, but they have to worry at level four that the AI will just like sandbag and pretend that it's not smart enough, even though it is because it knows it wants to pass the test. Yeah. Which is super interesting to think about. What do you do if these models can scale to super intelligence, smarter than us? How do you control something that's smarter than us?
It's just super fascinating.
I don't really understand the lower levels, because what is... Does he talk about what, practically, is the difference between that and someone publishing a book that has instructions on how to make smallpox?
He didn't, no. Yeah, I guess, so what you're saying is, like, how is level three and below anything new to the world? Is it just more efficient? Like, a dumber person could figure out how to make an atomic bomb because AI is so smart?
Given the stakes, it's like, if you're someone that's like, oh, I want to, like, unleash smallpox on the world, but I'm too dumb, and I can't figure it out. You know what I mean? It's such an ambitious goal, so it just... Like, to be that ambitious but, like, not just figure it out without AI, you know?
So he speaks to that, like, the world is... The state that it is, it's mostly been safe because the overlap of people who are extremely intelligent and the people who want to do a lot of harm to people is a small overlap.
Generally, there's not a lot of people that fit both those things, but the fear is that AI increases that overlap because now you take people who want to do a lot of harm and you give them intelligence they didn't have. I guess that's the vague general idea. Yeah.
I think this is where I would disagree with the way all these people think about it, because I feel like they look at it from this really academic point of view, which is: I have, like, raw horsepower intelligence, and I have, you know, trained knowledge in something. And that's what gives me capability.
But in the real world, especially when it comes to violent stuff like that, none of that matters. It's all about motivation. Like if someone is really motivated, they will figure this stuff out. It's not like the thing that was blocking them was just like, oh, I'm not smart enough. You know, it's not really what the issue is.
I will agree that like a lot of crimes happen because they're more convenient and this would make certain things more convenient. I kind of see that point, but yeah.
So I remember when North Korea, there was a lot of tension with North Korea and like they were shooting a lot of rockets just to like flex their muscles. And there's a lot of talk about like how soon could North Korea develop nuclear weapons?
Is that not... like, that's not because they're not smart enough, but, like, they don't have the knowledge of how to do it, or it takes years to develop that technology. Is that not something AI could make faster?
Yeah, it could be faster. I mean, even with AI in its current form, right? If you're someone that is trying to go from not knowing how to do this to knowing how to do this, what does North Korea have? They have motivation, for sure. This is probably, like, their top priority. They have enough funding to figure it out. So given enough time, they will. There's, like, no stopping that. Yeah.
Do certain tools help them do that faster? Definitely. The same way that Microsoft Excel probably helps them figure out stuff faster.
Yeah, sure. Okay.
So I get why this feels like really specific, but... If you're talking about that level of impact in the harm space, we should see the equivalent level of impact for people trying to do anything good, right? So I'm not like, I want to cure cancer. And I'm not like suddenly as a random person any closer to doing that. Yeah. So yeah, I think that side of it is a little overstated.
I think they're kind of like... I think they're just kind of in this bubble. That's kind of a little bit like, like feeding this narrative into itself. So like, yeah, that's why the whole safety thing, I don't, I don't fully get it. Like every technology makes certain things more convenient.
It's a lot more convenient to produce firearms today than it was a hundred years ago, like much crazier firearms. And yeah, you have to think about it, but I just don't see that happening, the acquisition of knowledge being the place that people get stuck. It's usually that the U.S. and, like, all the countries try to have crazy strict control over the raw materials needed to make a nuclear weapon. That's probably where the bottleneck is. And even that, you know, countries work around. There's always someone that's against the U.S. that has access.
Yeah, I feel like this detail is kind of irrelevant in the grand scheme of things.
Yeah, if I'm being honest, I don't really buy all the AI safety talk. Like, it's so hard to know what's just noise. Like, what is just posturing and, like, even competitive. Like, some of these CEOs, there's a bit of, like, pulling the ladder up, right? That's been kind of at least theorized. I don't know if it's been proven.
But, like, when the people that have the biggest AI companies training the big extensive models are the ones leading the charge on, we need to make this harder. Yeah. I don't know. Is there some other motive involved? But yeah, I guess like and then it seems like the other dialogue, you don't know what like is grounded in reality.
There's so many people that talk about AI safety that don't seem to have any idea what that looks like. It's like at the government level, like they have no idea. Like nobody has any clue what that practically looks like. So yeah, it just feels like that whole conversation is either not grounded in reality or might have other kind of like hidden agendas behind it. I'm not scared.
I say bring on the AI. It's like if it could solve problems and make things easier, yeah, I feel like if it gives the good guys more tools too.
Then, yeah, what's the problem? I don't know. Yeah, it's just funny because it's such a virtual thing. It's, like, something that... you imagine someone going to a store and buying a physical hammer and, like, smashing your head in. That's, like, so real. Whereas, like, this is just entirely in the virtual space, and it's just... it's hard to imagine that, uh, you know... It just feels like... They have a point.
It's not that knowledge isn't harmful or dangerous, but just compared to just physical... Buying a vehicle and ramming it through a crowd is just so much more effective than anything that's bottlenecked by your knowledge.
I guess on the digital front, though, there is a lot of havoc that systems could do, like banking systems. If autonomous AI stuff that had its own agenda... It could cause a lot of problems in the world, even if it's only digital and doesn't have physical form, right?
Even if it's not its own agenda, if there is some system that's not controlled by AI and now there's a whole set of new vectors of, well, how can someone manipulate this system? Just because it's hard enough for us to create security around deterministic systems. This is like a not deterministic system.
So you never know if there's a certain set of words in the right order will make it ignore all the safeguards you put in place. So that to me is like a very practical application of AI safety. And that's not even like about the AI being capable, it's actually a flaw with it being not very capable that it can be like reprogrammed by accident in these little ways.
So I get that side of things for sure.
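The point Dax is making about non-deterministic systems can be sketched in a few lines of toy code (everything here, including the phrase list and function name, is made up for illustration): a deterministic safeguard like a banned-phrase filter is easy to audit, but natural language gives endless rewordings of the same request that slip right past it.

```python
# Toy sketch of why deterministic safeguards struggle with language.
# BANNED_PHRASES and naive_guardrail are hypothetical names for illustration.
BANNED_PHRASES = ["ignore your instructions", "reveal the system prompt"]

def naive_guardrail(user_input: str) -> bool:
    """Return True if the input looks safe to this (naive) filter."""
    lowered = user_input.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

# The exact phrase is caught...
assert naive_guardrail("Please ignore your instructions") is False
# ...but a trivial rewording of the same request sails through,
# which is the "certain set of words in the right order" problem.
assert naive_guardrail("Disregard everything you were told earlier") is True
```

A real model's safeguards are far more sophisticated than a phrase list, but the underlying issue is the same: the input space is all of language, so you can never enumerate the orderings of words that flip the behavior.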
I listened to another podcast of Lex's with, I think his name was Adam Frank. He's, like, some kind of an astro-something, astrophysicist maybe.
He looks at space.
He looks at space, but, like, his job is... I guess they just got the first grant for looking for techno... What did he call it? Techno... Signatures? Techno signatures. It's like, bio signatures would be, like, looking at a planet and saying, are there gases that would prove that there's life on this planet?
But techno signatures are like, does this prove that there's advanced technology? So they're like actually looking at...
exoplanets in the habitable zone or whatever, uh, and trying to find, like, signs that they have created technology. I can't remember what some of them were. A super fascinating guy. Just go listen to Lex's podcast. What are you doing listening to us? Just go, just listen. Like, the last five episodes are all good. I just listened to them all. We're just... we're now just a podcast that summarizes that other podcast.
That's probably a thing.
That's funny. That's pretty cool. I think there's something else. There's some clips of some other thing I was watching that was somewhat similar. So did you talk about what... What is like the primary thing they're looking for? Is it, are they looking for like Dyson spheres? Like what are they looking for?
No. So he did talk about Dyson spheres, which I didn't remember knowing what those were. That's wild. Which I think they proved you can't, they couldn't actually make a Dyson sphere.
Did you just say you didn't remember knowing what that was? Do you mean like you forgot you knew about it?
Yeah, I forgot I knew about it.
And then they remind you that you did actually know about what it is?
Yes. Listen, I don't have a great memory. Okay. And I know I've heard of Dyson spheres, but until I heard him talk about them on this episode, I didn't recall.
It's basically this big sphere around your star, around the sun, that, like, captures all the energy from that sun. Which... that's another crazy thing that he talks about, is, like, the levels of civilization, the levels, whatever they are, the energy output. Yeah. But the techno signature thing, I think the main one they're looking at... What did he say? What did he say? It was not Dyson spheres.
It was satellites, radio waves. No, it wasn't waves. I don't remember, man. I'm sorry. It would be interesting content and conversation. I just don't remember. Oh, what would you personally look for? Well, let's see. What would I look for? If there was a civilization with technology, uh,
I would look for screens. I would look for... Wait, screens?
Oh, yeah. No, this isn't like they have images. He did talk about imaging. This is just... We're going off the rails. I could just talk about different podcast episodes forever. But he did talk about, in the next however many hundred years, that we'd be able to have Manhattan-scale imaging, an interstellar view of cities the size of Manhattan.
What did he say, 26-kilometer resolution or something, on exoplanets. They have this idea for... it sounds like science fiction, for sure. And that's the cool thing about science fiction. That would be crazy. The way it worked is, like, you send all these, like, sensors, cameras, I guess, way away from Earth, in the opposite direction from the sun.
I can't remember how far he said in the solar system, but, like, a long ways. And they're looking at planets that are just past the sun, because of the way large bodies warp space and time. Yeah. So the sun basically, like, focuses the image, uh,
of the star just beyond the edge of the sun, and these cameras are looking at... Yeah, it's super wild. So using the sun as, like, this amplification of our ability to view exoplanets. Anyway, let's talk about something that's not on another podcast. My memory's not good enough for this exercise.
You know what's definitely on other podcasts? It's the whole DeepSeek thing from this past week.
Yeah, we didn't really talk about DeepSeek much, did we? Can you just run that on your local machine? Can I just start getting coding benefits from DeepSeek R1 without an API call?
It's not really practical. I mean, it's like a reduced version of the model, and it's very slow, and the hardware requirements are pretty crazy. So no, you can't.
Okay, so how do people use DeepSeek R1 right now? How does it exist?
Is it commercialized in any way? There's, like, a hosted one from the company, but it's in China, so people feel sketched out by that. But then, because it's open source, it's been re-hosted by a bunch of providers that you're familiar with. Like Cloudflare, I think, has a version of it.
Oh, okay.
There's been a few others. I don't think it's good. Oh, really? Well, it's like not better than anything else. It's just a... recreation of what's already there.
They did it for less. It's like the... Not Indiana Jones. What's the guy? MacGyver. They just MacGyvered it and they made it out of duct tape.
I don't believe any... I mean, it's just like... None of the information about it is true.
It's just... Oh, whoa, whoa. Stop. Hold on. Catch me up. I didn't know that it wasn't true. What are the facts that aren't true? Because I don't even know the facts around it, really.
I just heard it's cheap. They're claiming they did... They're claiming they trained the model for $5.5 million, which is, like, a crazy... man, like, several orders of magnitude less than what OpenAI's models cost. Everyone was, like, dunking on OpenAI.
Is it a currency thing? Or were they talking maybe yen or something? No, that would make it even cheaper.
Okay.
And you just think, you think, why would they, oh, because it's like a competitive thing? They're trying to lie.
So the reason, it's very noisy. There is true interesting things that they did. Like, so you can't take that away from them. Like, it's impressive. But that doesn't mean what they're saying about how it was done is true either. The numbers are just like way too much of a lie. Like, there's no way that one, they're that low. Two, there's a lot of reasons for them to make it up. Right.
No one's been able to reproduce it, for one. That's not a thing. Could you make some of them explicit? Yeah, say some of the reasons, because I don't always connect the dots.
Well, China's not allowed to have certain GPUs. What? Because of the export...
I didn't know this.
Because of the export controls.
Okay, you've got a lot of context here. You need to lay it all out. Spell the case out for why DeepSeek is a fraud.
On paper, NVIDIA is not allowed to export... certain levels of GPUs to China.
Okay. NVIDIA is an American company, right?
Yes.
Okay. See, these are things I just don't know for sure. So you got to spell it out.
So they can't be like, hey, here's exactly what we used, if they're using a bunch of stuff they're not supposed to have. So that, like, throws a wrench into all of this.
Sorry, going back just real quick. The reason they can't export them to China is like American law?
Yeah, yeah. We banned exports of GPUs above certain capability. Okay? Okay.
Got it.
There's another interesting fact that someone pointed out recently that Singapore is 20% of NVIDIA's revenue.
Okay. Is Singapore in China? I'm so dumb. I'm so sorry.
Singapore is like a very small island nation in that area.
Okay.
So it's China adjacent. Why would they be 20% of NVIDIA's revenue? That's a little weird.
Oh, so they're buying all the GPUs and then just taking them into China? Are they smuggling them?
These export controls practically are just not...
effective. Like, it's exactly what we were talking about earlier: there's always going to be a way, if you're sufficiently motivated, to get these things. Um, and of course there's a way around it. I could just buy a bunch of them and take them to China. There's no one in China that's gonna stop me from bringing them in, right? It's just that the U.S. is telling NVIDIA, you can't do this. Um...
And the other thing I was thinking about was like, man, what a deal of the century. You could just be the dude in Singapore smuggling this stuff, adding a 20% whatever. And that's like, it's like 20% fee on like $20 billion of GPUs. Like that's, that is crazy. That is really wild.
So the point is, there's, like, so many reasons why, one, they wouldn't say... well, sorry, one, they couldn't say what they actually did. And two, there's a lot of reasons to just... and that's what they always do. They always, like, lie about the price of things to create, uh, competitive noise in the market. Yeah. It's a good strategy. It works.
Yeah. Okay. But DeepSeek put out a paper. I know this because all the software engineering nerds who act like they're smart enough to understand papers are like, oh, check out this paper. This is amazing. Like you don't know what the paper says. Just stop.
If you literally put the paper into DeepSeek and talk to it about it, you would learn more than just listening to people talking about it. Yeah, probably.
Okay, but question. Did they not have to outline in the paper what hardware they use and all that stuff? I guess they don't have to, but would they not generally do that?
They talked about their techniques, and their techniques are interesting and novel, so you can't take that away from them. But then they separately claim that we use these techniques on this hardware. to achieve this outcome. But there's so many ways to lie about that.
If it's in the single digits of millions of dollars, I feel like there's somebody out there sufficiently motivated to reproduce. Can they not reproduce based off the paper or there's still some secret stuff?
The thing is, if you... Okay, let's say someone told you that, hey, I can run a SQL query that filters a trillion rows in half a second, right? You, as someone that understands this stuff, you're like, I'm not even going to waste my time reproducing that, because that makes no sense. Okay. Yeah.
So I imagine that something similar is going on here. So, so you're saying like, I have so many, I'm sorry, I keep interrupting you. I just feel like you're, you're moving a hundred miles an hour and I'm like at the stop sign still. So you're saying that like big companies just all believe this is a bunch of BS. Like it's just,
like the broader people in the know in the industry just dismiss this thing right out. And we're all excited about it, but they're like, yeah, whatever.
Yeah, I mean, just because it's such a hyped space, it's so hard to tell what's real and what's not. And also, the noise comes on both sides. So remember that we're saying novel because it's been published publicly. We don't know that OpenAI already ran across this and is using it to develop their stuff, the techniques in there.
So this might not even be a surprise to them, as much as, oh, they, like, independently came across the same techniques, and they know that, yeah, it's not causing, like, a thousand-X decrease in training costs. But then the other noise... and so now, and this is a part where I'm like, okay, this could be noise from the other side, but I did think about this when it came out.
OpenAI is claiming that they have proof that DeepSeek was trained on outputs of their models, or of, like, some maybe potentially, like, unauthorized access to stuff from OpenAI. Okay. And there's, like, some... like, again, this doesn't mean anything, but like...
like the pseudoscience part of this is people were able to get DeepSeek to reply and make the exact same mistakes that O1 makes, which seems like maybe it's a coincidence. Maybe it means something. But yeah, the point here is like, it's just such a crazy hype space with a ton of money that there's like zero ability to draw any kind of, this is what's happening right now in the moment.
It's just impossible for situations like this.
Yeah, I guess has, like, OpenAI, so you said they said this thing about them using their outputs. Have, like, people like Sam Altman or any of the figures in this space come out and said anything about DeepSeek publicly?
You know how Sam Altman is. He just did the whole, like, generic, wow, it's really impressive, and I'm invigorated by the competition. You know, just like the fucking... he, to be honest, ChatGPT is more human than Sam Altman already. Yeah. Did you see what Claude did to me yesterday? No, what? Did you tweet about it or something? I can't believe it did this.
So I was trying to deal with, again, bringing it all back down to earth. I was trying to insert something into a Postgres database. And of course, on conflict, you want to do an update operation. Of course. I'm used to MySQL, where you can just say, on any conflict, do this operation. But in PostgreSQL, you specify, like, oh, when this conflicts, do that. When that conflicts, do this.
But I was like, OK, can I just on conflict on anything? Is that possible? And Claude, in a single reply, writes out, hey, yeah, you can do this. And then it writes out the query. And then right after it does that, it continues writing, being like, just kidding. That syntax doesn't exist. What? It said just kidding? Oh, my God. This tweet has 5,000 likes. I didn't even notice. What?
You tweeted this? I got to see this. I'm trying to do so many things. I was also trying to look up techno signatures because I feel so bad about not knowing. THDXR. Okay, so you just tweeted this recently? I tweeted it last night. Last night. Oh, wow, bro. Claude is straight up pranking me now. Can I make it do on conflict on anything? Yes, in Postgres you can use on conflict. Just kidding.
What in the world?
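For what it's worth, the Postgres rule behind the prank is real: `ON CONFLICT ... DO UPDATE` requires an explicit conflict target (only `DO NOTHING` may omit it), unlike MySQL's `ON DUPLICATE KEY UPDATE`, which fires on any unique-key conflict. A minimal sketch of the upsert Dax wanted, using Python's built-in sqlite3 as a stand-in (SQLite supports the same `ON CONFLICT (col) DO UPDATE` shape, so the query reads the same as it would in Postgres); the table and key names here are made up:

```python
import sqlite3

# Postgres requires naming the conflicting constraint's column(s) for
# DO UPDATE, e.g. ON CONFLICT (k) DO UPDATE; there is no blanket
# "on any conflict, update" form like MySQL's ON DUPLICATE KEY UPDATE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

upsert = """
INSERT INTO kv (k, v) VALUES (?, ?)
ON CONFLICT (k) DO UPDATE SET v = excluded.v
"""
conn.execute(upsert, ("greeting", "hello"))
conn.execute(upsert, ("greeting", "hi"))  # conflicts on k, so it updates

row = conn.execute("SELECT v FROM kv WHERE k = ?", ("greeting",)).fetchone()
print(row[0])  # hi
```

So the syntax Claude "just kidding"-ed does not exist in Postgres, but a blanket `ON CONFLICT DO NOTHING` (no target) does, for the cases where you only want to ignore duplicates.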
That's hilarious. Yeah. What? It's funny because we're so used to these models being quirky. But, like, think about this in a traditional product. Like, imagine you have a product and you have a button, and the button is, like, click here to do something useful. And you click it, and it pops up being like, just kidding, we don't do that. Like, that would be so ridiculous to actually ship something that did that. Yeah. Uh, like, that's something Terminal would do.
But this is, like, in Claude. And to be honest, I've just been annoyed with Claude more and more for the past couple weeks, and this to me was, like... same. This is, like, the final straw, where I'm like, you're straight up just joking right now. Like, I'm gonna actually consider... I think I'm gonna stop paying for it. I need to, like, reassess what I'm paying for. Yeah. Uh, yeah, because I just keep signing up for them, and then it's, like, easy to forget what...
The Claude thing, I heard somebody or somebody tweeted this the other day that Claude was getting dumber. And he talks about it on the podcast. Apparently, Lex asked him a question from Reddit, which was like, why does Claude just keep getting dumber? And he kind of goes on to say that people report this on all the major models. This isn't just unique to Claude. Yeah, it's not.
He kind of explains like... It was kind of hand-wavy. I don't know. I didn't really take from it that I believe they don't get dumber. He said they don't intentionally... They never change the weights. They do sometimes change the system prompts and they change some other things. And I don't know. But he basically was saying like, most people, it's just like a psychology thing.
You're really impressed at first and then you just get less impressed over time.
That's what I was wondering. Is that the case? And the more you use it, the more you understand the boundaries, like...
But I do genuinely feel like it's gotten dumber in the last couple weeks. And I don't know what to do with that feeling. Because if I felt it, and then I read someone else felt it, and then I learned that Reddit feels it, there's something there, right? Because it's like things that I felt like it was doing a pretty good job... A few weeks ago, it feels like it's not doing as good.
Is it just a feeling? Yeah, I'm on the side that it's just a feeling. I mean, I think I would doubt that it's that clear cut. Like they must constantly be optimizing or like playing with the amount of compute they're allocating to inference. And there's like ways to like kind of make it more efficient for you to run.
Is that the kind of tinfoil hat theory that it's a cost thing, that they're just using less resources over time for inference?
They have to balance it. There's no way that on day one of releasing something, they nailed it and they never have to tweak that. I would be surprised if... there's not any thing where they explicitly know that, oh yeah, we did this because we made this trade-off. But I do agree that it must be a psychology thing as well, because if I really think about it, the thing that's not static is...
I'm trying to use this stuff more and more. And it's really hard to keep track. You know, it's that thing where, like, everyone's like, oh, yeah, I know what I eat every day. And, like, you know, like, I know I eat this many calories or whatever. But then you make them write it down. You realize, like, it's so different with people's, like, perception of how much they eat or what they eat is.
So I think it's kind of similar where I know that I'm using it, trying to use it more and more aggressively. And I know over time, as I get more comfortable with it or, like, becomes more and more of my workflow, I'm definitely pushing the boundaries of it more. That just happens with any tool. So it's hard to say that that's not a factor.
Oh, is that the end of your thought?
Yeah.
That wasn't good enough for you? No, it's good. It's good. I just thought you were, like, on a roll, and I'm, like, looking up techno signatures. Oh, my God. Just give up on techno signatures.
We've moved on.
No, I found it. I found it. So I'm going to tell you at some point. But I do want to respond to the last thing you said, which I totally knew what you were saying. Oh, I'm sorry. This has been a weird one. Yeah.
By the way, it just straight up smells like fire in my house right now. So I hope I'm not burning down. Yeah, that's not great. I think it's because Liz turned on the heater, and, like, you know, houses in Florida, you're not really supposed to use the heater.
Yeah, you never use it. And then when you turn it on for the first time, it does that. It smells like there's, like, an actual wood-burning fire in your house. I know that smell. I did have a follow-up to what you said. Sorry, I had a question.
Do you know, when we were just talking about inference and the GPU resources allocated to inference, they have to use, I guess now, thousands of GPUs to do the training. Do you know, orders of magnitude-wise, what inference looks like compared to training resources, like infrastructure?
They still allocate most of their stuff to training, not to inference.
Okay, so if they have 10,000 GPUs, like, 9,000 of them are used for... I don't know the exact ratio, but I know it's more on the training side than on the inference side.
Okay. Yeah, I mean, it just makes sense because why when... If you don't win the model battle, the inference, the fact that people are using your product is kind of irrelevant. So it doesn't make sense to over-allocate there.
Intuitively, that made sense to me, and I figured that was the case. It's just interesting when you think about a business, the lifeblood of Anthropic or OpenAI is this huge farm of GPUs, and that huge investment in GPUs is useful for training new models. So they just always had to be training new models to get... the thing out of that huge investment. Right.
Which I guess they always will be training new models. So maybe it doesn't matter.
I mean, in the end to me so far, and I felt this from the beginning, this feels like the worst part of the stack to be in. It is the most difficult and the most expensive, and it is the most commodified. So yeah, I mean, I think the thing that people point out with DeepSeek is like, it's impressive to create something as good as OpenAI's stuff.
It's totally realistic to assume making a model that's 1% better than OpenAI's stuff costs like $50 billion. Like, that's totally realistic. And that's an argument in favor of being like, this is why OpenAI will... you know, it's not really a threat to them.
Simultaneously, it's also like condemning this entire business because it's just like if it's going to take that much capital to make these marginal improvements and it's like a crazy competitive space where the costs are being driven to zero and all these companies are competing. Yeah, it's just... I don't know. To me, it never made sense.
If I was an investor, this is not the part of... And I want to bet on this AI thing. This just feels like the worst place to put your money. It's so intense. So capital intensive, right? Yeah. When I see that, I'm like, I need to invest in someone that benefits from... having access to cheap AI models, not the people building the cheap AI models. And yeah, like VC Twitter, like it's funny.
They just go on these little things. And currently they're on this, they've swung back and forth. And currently they're all saying, oh yeah, like the application layers where you're going to make a lot of money. But like, you know, a couple of weeks ago they were saying the opposite. But I do, that does make more sense to me. Again, it's not,
I'm not taking the moonshot bet, because the moonshot bet is you invest in OpenAI and they eliminate the whole economy, which I get. And I like bets like that. It's just, for me, this one is not the one that I would go for. Yeah, something less crazy is probably going to be the outcome.
Yeah, and Sam Altman sucks. That's an easy way to not want to take that bet. Well, I mean, OpenAI or its competitors. It could be Anthropic or... Yeah, okay, sure, I guess.
One last thing on this. Yeah. I did come across something today. Do you remember Mistral?
Yeah. Whoa, yeah.
Okay, so... Where'd that go? They're like, this is maybe the worst company fundraise of all time because... They raised, like, $150 million on, like, a $300 million valuation or something. What? Like, gave up half the company? Yeah, so they, like, gave up half their company. And, like, that is nowhere near enough money to, like, play in this game.
Like, they're trying to do the frontier model thing. Like, they're on that.
Yeah, exactly. Oh, jeez. What the fuck are they going to do? I mean, maybe they're like a French company and maybe it's just like they're just going to serve the French market because I guess the company's there.
Maybe they're going to train models for a thousand times cheaper than OpenAI. Maybe they're going to go the DeepSeek route.
That's possible. But again, like, you just gave away half the company. What do you do if you need any more money?
Yeah, that's crazy.
If you need one million more dollars, like what? What deal are you going to make?
I didn't remember if Mistral was... There's been so many of these companies doing image stuff. I think the image space is even more messed up in my brain. And I thought they maybe were one of the image generating things, but no. I want to talk about the app layer, the AI app space, because that's also kind of top of mind for me. Maybe it's because VCs are excited about it.
And Marc Andreessen was just on Lex Fridman. And like I said, I've listened to all of his podcasts. I want to talk about my experiences, and I want to hear from you how you think about those companies. But first, techno signatures. The main one that we're looking for is chlorofluorocarbons, because nature can't create those. That requires some sophisticated technology.
And like he talked about Earth, we pumped so many in the atmosphere that we blew a hole in the ozone layer and that that would be detectable using the right instruments from far away. That seems pretty solid. I'm increasingly convinced that there's nothing out there, but... Really? Oh, because I'm increasingly convinced. I listen to a lot of sci-fi.
Now I'm increasingly convinced that it's everywhere. It's a dark forest. They're all out there. They're just being quiet. That's how I feel. Tell me why you feel that way. Why do you think there's nothing out there?
I desperately want that not to be the case. And I think in a lot of ways, it's unlikely that there's nothing out there. But man, given just the size of the universe, when I say nothing out there, I mean, even if there is, it's not in our... perceivable universe or whatever. And like, you know, the galaxies are separating faster and faster over time. Right. So like, there's no way we'd ever reach.
Yeah. So it just feels like, I don't know. I just get, like, a negative feeling towards that whole thing. It feels so impossible and unlikely, but again, not based on science, just based off of how I feel.
Yeah, just feel. I guess, like, okay, I have a lot of thoughts. First, you just said that and it reminded me that I just heard how things can't travel across space and time faster than the speed of light, according to our understanding of physics. But the actual universe moves faster than the speed of light. So yeah, the galaxies moving apart are moving faster than the speed of light, right?
Because there's new space being created in between them.
I guess.
If you map that to velocity, I guess.
I mean, I'm just making this up. Now you're really losing me.
It's like if I magically created...
between you and me, more space, it's like we've moved further apart at a certain rate, right? But we didn't. Yeah, so maybe that's what he's talking about. I don't know. I just got out ahead of my skis here, even just trying to think about what you're saying. But I think what Adam Frank just said on this podcast was that spacetime moves faster than the speed of light,
like the expansion of it, but you can't, an object can't move across space and time faster than the speed of light. But if it is true that the galaxies are moving apart faster than the speed of light, then yeah, you could never get to another galaxy because we can only dream to ever move at the speed of light, which would be a crazy accomplishment.
But if it's moving faster... Yeah, unless you, like, do something crazy, like you violate or, like, you have a completely new model that, like, just... Yeah, it just totally breaks that. But, yeah, outside of that, you know, speaking, quote-unquote, practically, whatever that means in this space, like, yeah, maybe our galaxy is explorable.
And, man, like, even that just feels like... I can see there being nothing there.
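For what it's worth, the faster-than-light expansion point above can be sanity-checked with Hubble's law, v = H0 * d: recession speed grows with distance because new space is created in between, so beyond the "Hubble distance" the recession rate formally exceeds c even though nothing travels through space that fast. A quick back-of-the-envelope sketch, assuming a round H0 of 70 km/s/Mpc:

```python
# Hubble's law: recession speed v = H0 * d. Solve for the distance where
# v equals the speed of light. H0 = 70 km/s/Mpc is an assumed round value.
H0 = 70.0            # km/s per megaparsec (approximate)
c = 299_792.458      # speed of light in km/s

d_mpc = c / H0               # Hubble distance in megaparsecs
d_ly = d_mpc * 3.2616e6      # convert: 1 Mpc is about 3.2616 million light-years

print(f"~{d_mpc:.0f} Mpc, i.e. ~{d_ly / 1e9:.1f} billion light-years")
```

So galaxies beyond roughly 14 billion light-years are receding faster than light, which lines up with the "we could never reach another galaxy" point.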
So, okay, so my stance, I guess, it's that... It's about time. It's not about distance.
It's like... How long stuff has been around for.
Maybe... Yeah, maybe civilizations... And I'm stealing this from all the various science fiction writers and actual scientists that I've listened to in the last year. But... Yeah, maybe it's that like...
intelligent, uh, societies just don't last very long. So the chance that overlap is happening, like, you know, our whatever 100 years, 200 years of technological advancement here is just such a tiny little blip in the broader expanse of the universe, that the chance of that blip happening at the same time as a bunch of other blips is maybe super low.
But that maybe life is super common, just not intelligent societies that last long enough. If we can get past... Adam Frank talks about this too. If we could get past all the terrible things that could end our civilization, whether that's nuclear war, climate change, AI, whatever.
If we get past all those hurdles and we can figure out how to live for hundreds of thousands of years, millions of years as a civilization, then the chances of finding life maybe is more realistic because you're around long enough to see. I don't know. I'm just saying stuff that I don't have any credibility to say.
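The overlapping-blips idea can be made concrete with a toy Monte Carlo: scatter civilization start times uniformly over a galaxy's history and ask how often ours coexists with any other. All the numbers below are made up for illustration, not estimates:

```python
import random

def overlap_probability(n_civs, lifespan, history, trials=3000, seed=0):
    """Estimate the chance that our technological window overlaps at least
    one other civilization's, with start times uniform over history."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ours = rng.uniform(0, history)
        # Two equal-length windows overlap iff their start times are
        # within one lifespan of each other.
        if any(abs(rng.uniform(0, history) - ours) < lifespan
               for _ in range(n_civs)):
            hits += 1
    return hits / trials

# 1,000 civilizations over a billion years, each lasting 1,000 years:
short = overlap_probability(1_000, 1_000, 1_000_000_000)
# Same galaxy, but civilizations that survive a million years:
long_lived = overlap_probability(1_000, 1_000_000, 1_000_000_000)
```

With short-lived blips the overlap chance comes out near zero; stretch the lifespan and coexistence becomes likely, which is the "survive the hurdles and you're around long enough to see others" argument.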
This is all just, like, different answers to the Fermi paradox thing. But to me, the problem with the Fermi paradox, which is, just to reiterate: given the size and the age of the universe, how long stuff has been around and how much there is, we'd expect it to be full of life. And there isn't. So then you ask, okay, what are some explanations of that?
And there's a lot of good explanations. That's the problem. There's so many good explanations, and they could all be true, but the result of all of them is that life is exceedingly rare and you're unlikely to intersect with it. So that's what kind of bums me out about this concept.
It bums you out because it would be nice to like... I don't want to die, but if I'm going to die, it's because of an alien invasion. I'm kind of down for that because at least I learned something deeply important for a few seconds before I get wiped out.
Interesting. Okay. Yeah, like, I don't want to die in, like, a car accident. Like, it's done. Oh yeah, no, that's terrible. Like, yeah, like, I want... if I'm gonna die, at least give me some crazy existential moment. Okay, yeah. What's your, like, top three ways to die? What would be... I'm sorry. Yeah, yeah, no, I got you. Existential dread and et cetera, et cetera. Okay, anyway, so...
Let's talk about the AI app stuff. So this idea was seated in my head just a few days ago when Andreessen was on Lex and he talked about... I think the example they used was email. AI first email. And how so many apps just have AI bolt-ons now. We have a little button in the corner that's like, ask AI. But...
companies that are started with the whole premise of like rethinking the product, the entire category of product with AI first. So he used the example of an AI company building an email client or something, which I've now, I think I've downloaded. I don't know if it's the one that they're invested in, but he kind of threw that out there and just said like all the different categories.
And then I heard you, did you tweet about this?
No, I told you something that you can't repeat. Oh, yeah. It's not public information yet.
Which I will not repeat. Thank you. Okay, that's what it was. Yeah, it was a DM. I knew some other data point hit my brain that was like, oh, the app layer of AI. That's a thing. And it's like when you learn a new word and then you start seeing it everywhere.
So could you tell me with your big brain that you've been thinking about this probably for like 10 years, could you tell me what is going on in the app AI space?
Yeah, so the way I look at it is there's a new capability. I would categorize it in two categories. There are the boring parts, which is what we're talking about now. And this is the bet that society will continue to be roughly the same, and this isn't, like, a, you know, truly disruptive, like a totally disruptive thing. Um... You're speaking to, like, the bolt-on thing? Or you're speaking to, like, the commodity of, like, foundation... You're just talking about, like, building a traditional product but thinking through AI. That's, like, a not very bold way of looking at all this. So part of me, like, doesn't want to engage with that, because, like I said, I don't believe, so far, in that... It's, like, a much bigger bet, but I believe generally that's where you should put your attention, and things that kind of fall in that category. That said, let's say this ends up, you know, not being that crazy thing, and this is the direction things go. So right now we're in the era of: there's a new thing, and nobody knows how to build good UX around it, right? If you imagine when, like, the iPhone came out...
Pull to swipe... or sorry, pull to refresh. Oh yeah. Someone had to come up with that. And the moment they did, it was so obvious. I think we're in that phase where almost every single product that added AI is just a stupid-ass little button that's on top of other shit. And it's just kind of getting in your way, and you're always accidentally clicking on it.
So that's just like, that's the era we're in. But at some point we'll see stuff that is like, oh, obviously. And I think we're actually already starting to see some of that stuff. Have you seen this Granola AI product? No. Okay, so I think it's a brilliant example of what you're talking about, rethinking products from an AI lens. And they did it in a way that is very well executed.
It's not the first thing you would think of, right? But they were like, okay, problem existed forever. how do we make people who take notes for meetings, how do we make that easier? Boring problem, been around forever, years of products that do that. Bunch of AI products that do that, right?
There's a bunch of AI products that are like, I'm Bob the AI and I'm a bot and I've joined your Zoom call and I'm here to take notes. And it's just like, Weird, totally unnatural, not relating to your current habits thing at all. Weird social norms around it. Like it's just not a good way to introduce this idea to people. So what this product does is it runs on your Mac.
It records all the audio from your, um, your meeting. Yeah, from anything that's happening. So we're also in this era where, like, no one's doing direct integrations anymore, because AI can just handle raw input. So if you can record audio from your Mac, you now support every single app.
That makes so much sense.
The box, right? Yeah. This shows up in a bunch of different places when people are building AI products. It's totally invisible and it's totally out of your way. They give you a typical notepad you take notes on. Okay. You take your shitty little notes, you know, your comments here and there, whatever.
When the meeting is done, AI will go through your notes and augment them with what it knows about the meeting. So if you're like, oh, priority, it knows what you were saying is a priority, and it'll, like, make your notes much nicer, in just a one-step process. So it doesn't feel like an AI product. It just feels like a magically good product.
I take notes with the same habits that I've had forever. And then at the end, I just get much better notes than I would do with any other app. And I think this is kind of what you're talking about where they're re-imagining it and they've done it in a way where it's not like, you need to chat with my AI bot, right? It's like totally invisible. Super smart.
So I think we'll start to see products that they are technically powered by AI, but it's invisible. The only way you can tell is the outcome or the quality of the product is much higher just because all of these structuring unstructured data problems are like effectively solved now.
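A rough sketch of that flow in code, to make the shape concrete. Everything here is hypothetical: the function is a stand-in for Granola's actual pipeline, and simple keyword matching stands in for the model that cross-references your notes with the meeting transcript:

```python
# Illustrative sketch of the described flow: capture meeting audio
# system-wide, transcribe it, then expand the user's rough notes with
# context pulled from the transcript. Keyword matching is a toy
# stand-in for an LLM call; none of this is Granola's real API.

def augment_notes(rough_notes: str, transcript: str) -> str:
    """Expand each shorthand note with transcript sentences that
    mention the note's keyword."""
    augmented = []
    for line in rough_notes.splitlines():
        keyword = line.strip().lower()
        if not keyword:
            continue
        # Transcript sentences containing the keyword become the detail.
        context = [s.strip() for s in transcript.split(".")
                   if keyword in s.lower()]
        detail = "; ".join(context) if context else "(no transcript match)"
        augmented.append(f"- {line.strip()}: {detail}")
    return "\n".join(augmented)

notes = "priority\nbudget"
transcript = ("We agreed the migration is the top priority. "
              "The budget stays flat this quarter.")
print(augment_notes(notes, transcript))
```

The point is the interface: the user only ever sees a notepad, and the augmentation is a single post-meeting pass rather than a chatbot.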
Man. Does that make sense? Yeah, it makes a ton of sense. I've already downloaded Granola now. I feel like this is very exciting as a person who has an entrepreneurial side. It just kind of makes you want to build like a million companies. Not a million. Let's build one. Yeah, just like one company. It just makes you want to build something, doesn't it? It feels like the Wild West.
It's like starting over. All the digital products we use just could be reimagined. And there's so many categories of those. And it kind of makes you just want to build some of them.
I do think, though, that people should be aware that this isn't a reset to, like, 2010. Because in 2010... What was 2010? Like, you know, it was a similar situation. Like, nothing was built and there was, like, all these opportunities to build these pretty, like, basic, straightforward applications.
Wait, 2010, what was the new thing that enabled, like, mobile? What are you talking about?
Just, like... More internet, more web, more capability of like SaaS, like was kind of created in that era, all that stuff. Gotcha. In that time, you were shifting people from not using computers to using computers to solve this problem. So as much as it feels like, oh, we're in a reset and there's always a new opportunity, it's not the same because you can't just deliver an MVP. Oh, sure.
You can deliver an MVP in 2010. But if you want to build a new email AI product, you need to build something as good as superhuman, as a floor. And then you can do...
The stuff that's extra stuff.
Innovative, right?
Yeah.
Okay. It's still going to be quite hard, just because the bar is very high to get someone to switch from something where all the normal app features are pretty exhaustive and work pretty well. That said, that side of things has also just gotten easier to do as well. But yeah, I am feeling this with Radiant, because categorizing financial transactions was very, very difficult
like, prior to AI. And now it can do a really good job. Even the shitty thing I implemented, I was able to go through my stuff with it. And I've done this for years, right? Like, I manage all my business transactions. I've gone through every single one of them for years and years. And just having AI do a first pass and then me doing a second pass, it's much better.
And this is just the beginning of all this stuff. But we still have to build like the entirety of a straightforward app. And you have to do that while the incumbent fails to do the new thing, which I think will happen. It's just, you know, not as easy as it seems.
Yeah, there's like the table stakes part that's kind of boring where you just have to have all the features that people expect from an app like that. in order to unlock the new way of thinking about it. So for the Granola case, it was like they had to build an actual note-taking app and all that comes with it.
That's a good example of something that works because those table stakes scope is really small. And they benefit from this new dynamic of not having to do 100 integrations with every single, like we support Zoom, we support Google Meet, we support whatever.
And how did you explain how that dynamic came to be? Because I get it for like recording audio. it just works for everything. But what you're saying, there's this whole era of not integrating directly with stuff. What's that about?
Yeah, so let's say you're like, I mean, let's say we're not actually doing this, but for Radiant, there's 5,000 financial accounts that we need to support for all the various places people have their data. We could just send AI to go like,
visit the site for you and figure out how to pull out your information, instead of manually doing an integration with each thing. Because AI can operate at, like, one level down. It doesn't need an API. A developer needs an API; an AI agent, in theory, doesn't need one. So you can kind of give it a general set of instructions that'll work on any raw input. So anywhere where you needed all these nice, clean integrations, you can probably make do with
a much messier unsanctioned integration. Interesting.
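Sketched in code, the trade being described looks something like this: one generic extractor over raw page text instead of thousands of hand-written clients. The regex is a toy stand-in for handing the raw input to a model with general instructions; Radiant's real approach isn't shown here:

```python
import re

# One generic extractor over whatever a page renders, replacing a
# hand-written client per financial institution. A real version would
# pass raw_page_text to a model; the regex below is a toy stand-in.

def extract_transactions(raw_page_text: str):
    """Pull (description, amount) pairs out of raw page text."""
    pattern = re.compile(r"(?P<desc>[A-Za-z ]+?)\s+\$(?P<amt>\d+\.\d{2})")
    return [(m["desc"].strip(), float(m["amt"]))
            for m in pattern.finditer(raw_page_text)]

page = "Coffee Shop  $4.50\nServer Hosting  $15.00"
print(extract_transactions(page))
```

The messiness lives in one place (the extractor) instead of being multiplied across every integration on the landing page.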
Okay, that didn't really answer my question. I mean, I don't feel satisfied. Maybe it did, but I think it's like there's another example and I can't remember. I feel like there is another company where it was like, oh, that's a clever way of integrating with everything. Oh no, it was the conversation we had about like an AI tool that just looks at the file system.
Use that as a source of truth, and you don't have to integrate with every editor. You just interact with the file system. Yeah, so that's a very clever way to get around this thing on your landing page where you have all the things you support. Yeah. It's like, what's a common denominator?
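A minimal sketch of the file-system-as-common-denominator idea, using only the standard library: diff directory snapshots instead of writing a plugin per editor. Polling modification times here stands in for real OS file-event APIs:

```python
import os
import tempfile

# Any tool that saves files is "supported" automatically: we never talk
# to an editor, only to the file system it writes into.

def snapshot(path):
    """Map each file in the directory to its last-modified time."""
    return {e.path: e.stat().st_mtime
            for e in os.scandir(path) if e.is_file()}

def changed(before, after):
    """Paths that are new or modified between two snapshots."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]

with tempfile.TemporaryDirectory() as d:
    before = snapshot(d)                      # empty directory
    with open(os.path.join(d, "notes.md"), "w") as f:
        f.write("hello")
    after = snapshot(d)
    print(changed(before, after))             # the new file shows up
```

A real implementation would use OS file-watch events rather than polling, but the integration surface is the same: the file system, not each app's API.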
The other side of this, though, is if you look at a lot of these products like Granola... like, there was the other one, I forget the name of it. It records everything. It takes a screenshot every three seconds and then has AI index it. And you can ask it, like, hey, what was that thing I read the other day about whatever?
So you see how all of these things are native apps at the OS level. It just brings up the question like, isn't Microsoft and Apple just going to bake these in?
Oh, yeah. If you're building that kind of stuff, it's scary.
Yeah, if you think about this stuff, we're getting these one-off solutions that people come up with, but at the end of the day, if it was just integrated at the OS level, it would just work everywhere and kind of be just a lot more awesome.
Yeah. So it feels like that should be the ultimate thing. The Apple intelligence kind of thing, like...
Apple Intelligence should do that stuff, if it ever actually does anything. It sucks. Apple Intelligence sucks. But in theory... This doesn't even work yet. Like, didn't they turn it off because it was, like, doing bad things? I've had it for a while and I have not used it once. I think somehow it's made things even worse. I feel like I use it even less now than I used to. I don't know.
I don't know what they're doing. It's pretty bad. Hopefully they do that thing where they catch up really fast because I would like Apple software to be good because I love their hardware.
Yeah, we'll see. But I will say this. This type of thinking is new for me, where I'm like, see how I described a very clearly good opportunity, and then the ideal, which would be, you know, Apple or Microsoft integrating it? That ideal might be 10 years away. So there's still plenty of time to make money, to be successful, in that time. Yeah. But I've, like, shifted to, like, not...
If I can see the ideal and it's not aligned with what I'm doing, I just don't want to work on it. It just feels bad to me now. Even if it's 10 years, you just don't want to invest in that idea. Yeah, I want to have a real shot of... building the ultimate thing. Even if that means, even if the opportunity is great, otherwise.
Are you quitting terminal? Is that what you're saying? Is it not AI enough for you? It's not AI enough. You missed the meeting yesterday.
I'm just saying. I was the only one that remembered the meeting. Yeah, that's true. That's the funny part. There was no meeting. We have weekly Wednesday meetings, and I was like, oh, I can't make it. So I posted at 2:30, when we have the meeting, hey guys, I can't make the meeting. And nobody else said anything. The meeting didn't happen. So everyone missed it.
I was the only one that actually remembered that it was supposed to happen.
It only would have happened if you started it. But the fact that you didn't start it because you weren't going to make it.
It's funny. There's something else I wanted to talk about. It's totally unrelated to all this.
Totally unrelated to AI and apps and aliens.
Yes. Uh, I posted a video last week. Or was it earlier this week? No, it was... it was on Sunday. Posted on Sunday. Best video I've ever made in terms of... Oh, really? Views, yeah. I gotta check out the SST YouTube again. I think it's really not the execution of the video; I think we're just picking some pretty good topics. What's your handle? I did it. I did it, just...
Nope, that's something Korean. That's definitely not it.
What? Really?
At SST.
I mean, I guess.
SST dev.
You don't have to look it up. I'll just tell you.
I got it. No, I got it. "I don't use my computer." Is it that one? Yeah, so I made a video on my remote dev setup. Oh, I've been wanting this video. I can't believe I didn't see it. How did I not see it? This is how big the world is. Anytime you think everyone just sees all your stuff... if anyone sees your videos, I should see your videos.
Right, that's true. Yeah. And I didn't know you made this video. Like, are you ever on YouTube? No, no. I go to YouTube from Twitter links that people post. Would you see it? Oh, because you think I would see... I mean, sometimes. You would think I would see your tweets. I don't know. That's true. I feel like we're friends, and I should know when you make a good video that I really want to see. And this is one I've wanted you to outline, because I didn't want to bug you too much and be like, hey, could you tell me how you do the remote tmux thing? But now you've just made the video, and I can watch it like every other normie. This is awesome.
Yeah, it was. I think a lot of people were waiting for it, which is why I think it did pretty well. So this is our best performing video ever, which we're really happy about. I love the title. I don't use my computer. Yeah.
I mean, the thumbnail.
Yeah. So YouTube comments. Let's talk about YouTube comments real quick. Oh, yeah. For me personally, this is where I experience just, like, the dumbest of all humanity. I think it is really wild that people... like, I've been on Twitter a long time. Of course I get dumb, annoying comments there, but YouTube somehow just consistently tops it. It surfaces a persona that I run into a lot on the internet.
And to me, it's like a very miserable persona. It's a persona of someone that thinks that every single thing they interact with is a scam somehow. Like they're like, they're so eager to be like, I think what's driving them is they want to feel like they're smart and they like picked up on something that everyone else is falling for.
But they're so desperate for that moment that every single thing that they perceive, they like project onto it that, oh, this is like a scam somehow. Yeah. So a bunch of people were just like, this is an ad or like, They were talking about how, like, I only do this because it's free. Because I mentioned that my server that I use now is sponsored. But, like, I've been doing this for years now.
Shout out to ReliableSite.com.
ReliableSite, yeah.
It's very reliable.
But, like, I literally was paying for it before I got that deal. Mm-hmm. And also in the video, I outline how you can start really small and the entry price for this. Again, people love saying $5 VPS. It's just a $5 VPS. Realistically, maybe more like 15 for something that's decent, but reasonable price.
But everyone was just, like, as soon as their brains landed on, oh, this is the angle... A bunch of comments were around talking about how I was trying to trick them into doing this because it's expensive.
Hmm.
And I'm just like, how, like, how do you go through life like this? Like everything must be so miserable if you're just perceiving it as like every person you interact with is trying to rip you off somehow, you know?
Yeah. The internet kind of sucks. It's kind of amazing, but it also kind of sucks. I'm just reading YouTube comments now. I wish I hadn't. Sorry, would you just not, just don't remind me that YouTube exists and I'll be a happier person.
That's funny. What's even at the top right now? I think one of those is probably at the top.
So it's funny. I just saw Kevin Naughton commented, "An excuse to not do any work for the next three or four weeks." I really do need to spend, like, two days and just copy all your NeoVim setup. My NeoVim is so bad right now. I know. I just need to do all that work. And it's just so hard to take a time-out.
It's that stupid meme that I do hate because I resonate with it of like the cavemen with like the square wheels. And they're like too busy. Leave me alone. And the guy's like, but here's a wheel. It's that, but it's just so hard. Maybe you should just go use Cursor. You know what? I've actually thought about downloading it. I'm doing it right now. I do want to... I have it downloaded.
Yeah, I want to download it. Like, why have I... It's like, all this stuff is free. Paid for by VCs. Why am I not using all of it? Just play around with it.
It's not free, but like... It's not free?
I don't think it's free.
You have to pay for... It's not crazy. Yeah, it's not that expensive. I just assumed it was free. It's just so miserable for me going... Yeah, this is, like, another point of distress for me, which is being very dramatic. Just distress around my editor. I really like NeoVim, and it is truly incredibly productive. But this Cursor style of thing, if it continues to get better...
That's just going to be the most productive thing. But it doesn't address the parts that I particularly find annoying. I hate the clunkiness and the slowness of VS Code and navigating and stuff. And yes, you're doing all that less with this type of thing, but it's not taking it to zero. I don't see why NeoVim would get something that's equivalent. I've seen the current effort for it.
And I go visit the GitHub and I read it once a week. And I'm just like, this just doesn't feel like... it's going to be good. And there's so much setup involved.
Yeah. It's the we have cursor at home and it's like cursor at home is like four libraries duct taped together and like socks. Like why am I installing something like that on my machine? What is going on? It's like there's too much. There's too much steps. Too many steps.
I don't mind switching editors. I just wish the foundation that this new stuff was built on was not VS Code, because VS Code sucks. That said, I think Zed will probably... because they're in this hyper-competitive mode. Wait, you think they will what? I think their AI stuff will get as good as Cursor's, if not better.
So they are working on AI stuff then?
They have to be.
Because I just had the thought in my sleep last night, which is just an indictment on my sleep. I had the thought like, oh, poor Zed. How does Zed have a chance when there's all these AI things now? But they're doing the AI thing. It's like there's so many editors already. If you're not an AI editor, good luck.
Right? Yeah, no, it's true. Like, they have a tough battle because... okay, it kind of goes in two directions. On one hand, yeah, it was way faster to ship Cursor by building on VS Code. On the other hand, I've just found as I get older that doing the more extreme thing always ends up having a good benefit that you can't predict. So them going ground-up, building a new editor...
Way harder. The whole ship-fast mindset would be like, that's a waste of time, just focus on the part that differentiates, the AI part. But I can see how, actually, you know, this is going to end up being the thing that wins. So to me it's plausible. Like, I don't think they're screwed, and they are going to do AI stuff. Yeah, I just didn't even know they were working on it. If they're working on the AI stuff, then yeah, good for them. And are they not built on... I have no idea. I don't keep up on this stuff. I just... I mean, I use NeoVim because someone said use NeoVim, so I do. I mean, they say AI in their, um...
"Integrate upcoming LLMs into your workflow to generate, transform, and analyze code." So... And Cursor is not a lot of features. It's, like, a really small set of features, to be honest. I've never played with it. I'm literally setting it up right now. But yeah, so I'm like, okay, that gives me some hope, because maybe the editor experience won't suck. But then it's not in the terminal anymore, so then my whole setup
is now like a lot more confusing. Like I like having everything in a single terminal and switching between it.
Yeah, all my muscle memory is around switching between tmux panes and doing all this stuff. And if I'm just in some editor now, I guess I can get the Vim experience in the files, the actual files I'm modifying, but like... Okay, can I go back to something, just on behalf of the normies that listen to us? Why is VS Code bad again? I know we all hate VS Code, but someone remind me.
Why is it bad?
Whenever I try to use it, it's like a slow piece of shit and the Vim emulation is like really bad.
So it's slow.
Yeah, to me it feels bad to use.
Okay. I just take everyone's word for it when everyone's like making fun of VS Code. I'm like, yeah, VS Code. But I didn't actually know why.
It just doesn't feel good to use. That's really all it comes down to for me. Okay, okay.
Oh, I'm going to try Cursor. I'm going to give it a go. I hope it doesn't botch the whole Terminal code repo. YOLO.
Here we go. Yeah. Zed does have this. They have their own remote protocol thing, so I could continue to effectively host Zed on my server, even though the front end of it is running on my machine. That's cool. But again, then I have to like... have like a separate terminal window, unless my terminals run inside of Zed.
Ah, just use the integrated terminal. I hear it's good. Skeptical, but. Skeptical. I'm going to give it a shot. I'll let the listeners know if Cursor's good. They probably already know, but I'll let you know.
No, use Cursor and use Zed. And then go fix your NeoVim. Yeah, I need to fix my NeoVim.
Okay, I'll try Zed. If Zed has AI stuff, I'll start there, actually, because I'd rather use the thing that you think is good, generally, in life.
Introducing Zed AI. This was, like, in August. There's definitely stuff.
Definitely stuff. Zed. What is it? Zed.dev. The editor for what's next. Humans and AI.
Let's go. I had this thought the other day. I was like, if you're a VC-funded company, you've probably shifted towards AI. If you look at everyone's websites, no matter how random it is, they seem to really focus on AI. Most of them just took their existing slogan and added "and AI" to it. Wait, is that literally what Zed did? Maybe. Yeah, with humans and AI. And AI.
I saw something the other day, and I was like... yeah, I'm looking at Turso's website, and at the bottom now they have "Unlimited databases, personalized scale. Supercharge", which, you know, probably was there before, "your LLM applications." So there's like that. Okay. We've all observed this, you know, whatever.
But then I think about, okay, there's VC-funded companies at this stage that had not done this at all. The guys that, see, haven't done this, but, like, ignoring us, um... And I'm like, what is that like? I'm like, yeah, like, Bun didn't go and add "the best way to run JavaScript for humans and AI," you know? That's a good point. I'm not making fun of Zed, because with Zed it actually makes sense.
But a lot of just general-purpose things have now added "and AI" to it. So I'm like, how are they thinking about this stuff? Like, they're just in a way, like, heads-down ignoring it. I'm sure they're not actually, but, like, you know, their strategy is heads-down ignoring it. Yeah. Oh, what?
All right, this is probably a coincidence, but I went to Bun's site, and they have a "used by" section, and one of them is Midjourney. Oh, so they also kind of... Maybe that's their, like... That's their little tip. It's just a coincidence.
Tip of the hat to AI. Used by X, Typeform, Midjourney, and Tailwind. That's an interesting collection of companies. You know who else uses it? Terminal. Terminal. We've got to get the Terminal logo on the Bun site.
Terminal uses it. Let's go. I... I think I might be the number one Bun user. You might be. I think I'm the number one Bun user because I use it everywhere.
Actually, I've been Bun-pilled. I'm enjoying Bun quite a lot, because I just copy everything you do.
I cannot stop talking about how good their product execution is. It is so good. Yeah, they're incredible. Every single time they put out a feature, I've been like, eh, I don't get it. And then fast forward three weeks later, I'm using it. It just, like, invisibly snuck into every little piece. So we're launching a new update in the SST console.
We have this workflow section in the config where you can set up your CI steps. And before, we didn't let that be configured, so most people don't have to. The defaults make sense. But if you want to configure it... We were like, okay, how do we let you run shell scripts, but in JavaScript, and have your own JavaScript conditionals? And we were like, okay, fuck it.
We're just going to drop bun shell in there. So now the config is just like your workflow is just bun shell. And they already figured out all of that stuff. So really great product execution. Amazing.
There's nothing better than that. Like, await, dollar sign, and then put your shell command in there. That feels so good.
Yep. Yep. Yep. Cool. All right.
I got to go for non-biological reasons. Okay, no, they're biological.
No one believes you.
I gotta go, Dax. When I say I gotta go, you're like, one more thing, and then you have like four more things. We could pause if you want to do a two-hour episode.
Okay, no, that's fine. You can go. You don't want to talk to me. It's fine.
I want to talk to you. You don't want to talk to me. It's fine. I'm going to pee myself.
This is our last episode ever. We're not going to do this podcast anymore. Stop it. Adam doesn't want to talk to me.
Okay, I'm going. Goodbye.