
Sualeh Asif

Appearances

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1008.098

had this bet on whether, in 2024, June or July, you were going to win a gold medal in the IMO with models.

1019.922

Yeah, IMO is the International Math Olympiad. And so Arvid and I both, you know, also competed in it. So it was sort of personal. And I remember thinking, man, this is not going to happen. Even though I sort of believed in progress, I thought, you know, IMO gold? Like, Aman is just delusional.

1042.281

That was the... and to be honest, I mean, I was, to be clear, very wrong, but that was maybe the most prescient bet in the group.

1310.409

I don't know if I think of it in terms of features so much as I think of it in terms of capabilities for programmers. As the new o1 model came out, and I'm sure there are going to be more models of different types, like longer context and maybe faster, there are all these crazy ideas that you can try. And hopefully 10% of the crazy ideas will make it into something kind of cool and useful.

1337.143

And we want people to have that sooner. To rephrase, an underrated fact is that we're making it for ourselves. When we started Cursor, you really felt this frustration that, you know, you could see models getting better, but the Copilot experience had not changed. It was like, man, these guys, the ceiling is getting higher. Why are they not making new things?

1361.22

They should be making new things. Where are all the alpha features? There were no alpha features. I'm sure it was selling well. I'm sure it was a great business. But I'm one of these people that really wants to try and use new things, and there was just no new thing for a very long while.

1423.755

Yeah, it's like the person making the UI and the person training the model sit 18 feet away.

1432.103

Yeah, often even the same person. You can create things that are sort of not possible if you're not talking, not experimenting.

1520.936

One of the things we really wanted was for the model to be able to edit code for us. That was kind of a wish, and we had multiple attempts at it before we had a good model that could edit code for you. Then after we had a good model, I think there was a lot of effort to make the inference fast, so you have a good experience.

1545.539

And we've been starting to incorporate, I mean, Michael sort of mentioned this, the ability to jump to different places. And that jump to different places, I think, came from a feeling of: once you accept an edit, it's like, man, it should be really obvious where to go next.

1564.172

It's like, I made this change, the model should just know that the next place to go is, like, 18 lines down. Like, if you're a Vim user, you could press 18jj or whatever.

1576.32

But, like, why am I even doing this? The model should just know it. And so the idea was: you just press tab, it would go 18 lines down, and then show you the next edit, and you would press tab again. So as long as you could keep pressing tab. And so the internal competition was, how many tabs can we make someone press? Once you have the idea, more, sort of,

1599.492

abstractly, the thing to think about is: how are the edits zero entropy? So once you've expressed your intent, there are no new bits of information to finish your thought, but you still have to type some characters to make the computer understand what you're actually thinking. Then maybe the model should just read your mind, and all the zero-entropy bits should just be tabbed away.
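One way to formalize the "zero entropy" framing (editor's notation, not theirs):

```latex
% Given the expressed intent I and the surrounding code context C,
% the next edit e carries no remaining information:
\[
  H(e \mid I, C) \approx 0,
\]
% so the keystrokes needed to type e out are redundant, and the editor
% should be able to apply e with a single tab press.
```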

1626.983

Yeah, that was sort of the abstract.

1790.657

Yeah. And then, like, launch. Hopefully jump to different files also. So if you make an edit in one file, and maybe you have to go to another file to finish your thought, it should go to the second file also.

1855.046

Oh, yeah. Oh, we did that. We did that.

1919.042

We'll probably have, like, four or five different kinds of diffs. So we have optimized the diff for autocomplete, so that has a different diff interface than when you're reviewing larger blocks of code. And then we're trying to optimize another diff for when you're doing multiple different files.

1940.949

And sort of at a high level, the difference is: when you're doing autocomplete, it should be really, really fast to read. Actually, it should be really fast to read in all situations. But in autocomplete, your eyes are really focused in one area. Humans can't look in too many different places.

1963.527

On the interface side, it currently has this box on the side. So we have the current box, and if it tries to delete code in some place and add other code, it shows you a box on the side. You can maybe show it if we pull it up on cursor.com.

1979.858

So that box, there were three or four different attempts at trying to make this thing work. The first attempt was these blue crossed-out lines. So before it was a box on the side, it used to show you the code to delete Google Docs style: you would see a line through it, then you would see the new code. That was super distracting.

2006.361

And then we tried many different things: there were deletions, there was trying to red-highlight. Then the next iteration of it, which is sort of funny: you would hold, on Mac, the option button, and it would highlight a region of code to show you that there might be something coming. So maybe in this example, the input and the value would all get blue.

2033.51

And the blue was to highlight that the AI had a suggestion for you. So instead of directly showing you the thing, it would just hint that the AI had a suggestion. And if you really wanted to see it, you would hold the option button, and then you would see the new suggestion. If you released the option button, you would see your original code.

2066.662

Again, it's just non-intuitive. I think that's the key thing.

2173.66

So I think the general question is: man, these models are going to get much smarter. As the models get much smarter, the changes they will be able to propose get much bigger. So as the changes get bigger and bigger, the humans have to do more and more verification work, and it gets harder and harder. You need to help them out.

2195.551

It's sort of, I don't want to spend all my time reviewing code.

2485.759

Contrary to popular perception, it is not a deterministic algorithm.

2563.002

Maybe we should talk about how to make it fast. Yeah.

2688.277

And then the advantage is that while it's streaming, you can just also start reviewing the code before it's done, so there's no big loading screen. So maybe that is part of the advantage.

2705.871

I think the interesting riff here is that speculation is a fairly common idea nowadays. It's not only in language models; I mean, there's obviously speculation in CPUs, and there's speculation for databases, and speculation all over the place.
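For readers unfamiliar with the term: in language-model inference, speculation usually means a cheap draft model proposes several tokens and the full model verifies them in one batched pass. A minimal greedy-agreement sketch (editor-added, with toy stand-in models; not Cursor's implementation):

```python
def speculative_step(draft_next, full_next, prefix, k=8):
    """One round of speculation: draft k tokens, then verify with the full model."""
    # 1. The cheap draft proposes k tokens autoregressively.
    proposed, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)            # fast, lower-quality guess
        proposed.append(t)
        ctx.append(t)
    # 2. The full model checks the proposal. In a real system this is one
    #    batched forward pass over all k positions, which is the speedup.
    accepted, ctx = [], list(prefix)
    for t in proposed:
        if full_next(ctx) == t:        # full model agrees: keep the token for free
            accepted.append(t)
            ctx.append(t)
        else:                          # first disagreement: take the full model's
            accepted.append(full_next(ctx))  # token and stop this round
            break
    return accepted

# Toy usage: two "models" that emit the next character of a repeating string.
full  = lambda ctx: "abcabcabc"[len(ctx) % 9]
draft = lambda ctx: "abcabcabx"[len(ctx) % 9]   # mostly agrees with the full model
print("".join(speculative_step(draft, full, "ab", k=6)))   # "cabcab"
```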

2835.609

By the way, that's a really, really hard and critically important detail: how different benchmarks are versus real coding. Real coding is not interview-style coding. Humans are speaking half-broken English sometimes, and sometimes you're saying, oh, do what I did before. Sometimes you're saying...

2863.817

you know, go add this thing, and then do this other thing for me, and then make this UI element. And then, you know, a lot of things are context-dependent. You really want to understand the human and then do what the human wants, as opposed to... maybe the way to put it abstractly is: the interview problems are very well-specified.

2888.59

They lean a lot on specification, while the human stuff is less specified. Yeah.

3092.151

Well, it's not, like, conspiracy theory as much. It's just, you know, humans are humans, and there are these details, and you're doing this crazy amount of flops, and chips are messy, and, man, you can just have bugs. It's hard to overstate how hard bugs are to avoid. Yeah.

3482.458

So, I mean, one of the things we do, it's a recent addition, is try to suggest files that you can add. So while you're typing, one can guess what the uncertainty is, and maybe suggest that... you know, maybe you're writing your API, and we can guess, using the commits that you've made previously in the same file, that the client and the server are super useful.

3515.659

And there's a hard technical problem of how you resolve it across all commits: which files are the most important given your current prompt? The initial version is rolled out, and I'm sure we can make it much more accurate. It's very experimental.

3534.352

But then the idea is, we show you: do you just want to add this file, this file, and this file, to tell the model to edit those files for you? Because if you're making the API, maybe you should also edit the client and the server that are using the API, and the other one resolving the API.

3550.544

So that'll be kind of cool, as there's both the phase where you're writing the prompt, and, before you even click enter, maybe we can help resolve some of the uncertainty.
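An editor-added sketch of the co-change heuristic described above: rank files by how often they were committed together with the file you're editing. The git invocation is real, but treating raw co-commit counts as the ranking signal is my assumption about the approach:

```python
import subprocess
from collections import Counter

def co_change_scores(repo_dir: str, current_file: str, max_commits: int = 500):
    """Rank files by how often they changed in the same commit as current_file."""
    # One NUL byte per commit separates the per-commit file lists.
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", f"-{max_commits}",
         "--name-only", "--pretty=format:%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    scores = Counter()
    for commit in out.split("\x00"):
        files = [f for f in commit.splitlines() if f]
        if current_file in files:
            scores.update(f for f in files if f != current_file)
    # e.g. for an API schema file, the client and server files tend to rank first
    return scores.most_common(5)
```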

3980.698

And if you can make the KV cache smaller, one of the advantages you get is that maybe you can speculate even more. Maybe you can guess: here are the 10 things that could be useful. Predict the next 10, and it's possible the user hits one of the 10. That's a much higher chance than the user hitting the exact one that you show them.

4001.197

Maybe they type another character, and we hit something else in the cache. So there are all these tricks. The general phenomenon here, and I think it's also super useful for RL, is: maybe a single sample from the model isn't very good, but if you predict, like, 10 different things, it turns out that the probability that one of the 10 is right is much higher.
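An editor-added illustration of the "predict 10 things" idea (hypothetical class and method names, not Cursor's code): cache several candidate completions, and when the user types another character, check whether it is still a prefix of anything cached before calling the model again.

```python
class SpeculativeCache:
    """Hold several candidate completions; keystrokes filter instead of re-query."""
    def __init__(self):
        self.candidates: list[str] = []

    def refill(self, completions: list[str]) -> None:
        self.candidates = completions          # e.g. top-10 samples from the model

    def on_keystroke(self, typed: str) -> list[str]:
        # Keep only candidates consistent with what the user has now typed;
        # a non-empty result is a cache hit, so no new model call is needed.
        self.candidates = [c for c in self.candidates if c.startswith(typed)]
        return self.candidates

cache = SpeculativeCache()
cache.refill(["return total", "return total_sum", "return None"])
print(cache.on_keystroke("return t"))   # hit: two candidates survive
```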

4028.946

There are these pass@k curves. And, you know, part of what RL does is let you exploit this pass@k phenomenon to make many different predictions. And one way to think about this: the model sort of knows internally, has some uncertainty over, which of the k things is correct, or which of the k things the human wants.

4051.161

So when we RL our Cursor Tab model, one of the things we're doing is predicting which of the hundred different suggestions the model produces is more amenable to humans. Like, which of them do humans like more than other things? Maybe there's something where the model can predict very far ahead versus a little bit, and maybe somewhere in the middle, and...

4079.019

And then you can give a reward to the things that humans would like more, and sort of punish the things that they won't like, and then train the model to output the suggestions that humans would like more. You have these RL loops that are very useful, that exploit these pass@k curves. Aman maybe can go into even more detail.
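For reference, the "pass@k curves" mentioned here are usually computed with the standard unbiased estimator from OpenAI's Codex paper; whether Cursor's RL uses exactly this formula is my assumption. Given n samples of which c were correct:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k draws is correct."""
    if n - c < k:
        return 1.0                    # fewer failures than draws: guaranteed hit
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a 5% per-sample success rate, one sample rarely succeeds,
# but ten samples contain a success much more often:
print(pass_at_k(n=100, c=5, k=1))    # 0.05
print(pass_at_k(n=100, c=5, k=10))   # ~0.42
```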

4246.185

But MLA is from this company called DeepSeek. It's quite an interesting algorithm. Maybe the key idea is: in both MQA and in other places, what you're doing is reducing the number of KV heads. The advantage you get from that is

4268.613

there are fewer of them. But maybe the theory is that you actually want each of the keys and values to be different. So one way to reduce the size is: you keep one big shared vector for all the keys and values, and then you have smaller vectors for every single token, so that you can store only the smaller thing. There's some sort of low-rank reduction.

4296.443

And with the low-rank reduction, at the end, when you eventually want to compute the final thing, remember that you're memory-bound, which means that you still have some compute left that you can use for these things. So if you can expand

4310.414

the latent vector back out, then somehow this is far more efficient, because you're reducing, for example, maybe by 32x or something, the size of the vector that you're keeping. Yeah, there's perhaps some richness in having a separate set of keys and values, and queries, that kind of pairwise match up, versus compressing that all into one, and that interaction at least.
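An editor-added numpy sketch of the low-rank trick described above, in the spirit of DeepSeek's MLA but heavily simplified (real MLA also handles RoPE and per-head structure; the dimensions here are made up):

```python
import numpy as np

d_model, d_latent, n_tokens = 1024, 64, 6        # 16x smaller cache per token
rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_latent, d_model)) * 0.02   # compress hidden state
W_uk   = rng.standard_normal((d_model, d_latent)) * 0.02   # expand latent to keys
W_uv   = rng.standard_normal((d_model, d_latent)) * 0.02   # expand latent to values

hidden = rng.standard_normal((n_tokens, d_model))
latent_cache = hidden @ W_down.T    # this small matrix is all that gets cached

# At attention time, spend the spare compute (we're memory-bound anyway) to
# reconstruct full-size keys and values from the cached latents.
K = latent_cache @ W_uk.T           # (n_tokens, d_model)
V = latent_cache @ W_uv.T
print(latent_cache.shape, K.shape)  # (6, 64) stored vs (6, 1024) reconstructed
```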

4378.046

But it also allows you to make your prompt bigger for certain.

4805.017

Let's opine on bug finding.

4911.816

To be clear, I think they sort of understand code really well. While they're being pre-trained, in the representation that's being built up, almost certainly somewhere in the stream, the model knows that maybe there's something sketchy going on. It sort of has some sense of sketchiness, but actually eliciting the sketchiness...

4936.961

Part of it is that humans are really calibrated on which bugs are really important. It's not just saying there's something sketchy; it's: is this sketchy-but-trivial? Is this sketchy-and-you're-going-to-take-the-server-down? Part of it is maybe the cultural knowledge of, why is a staff engineer a staff engineer?

4955.66

A staff engineer is good because they know that three years ago, someone wrote a really, you know, sketchy piece of code that took the server down. As opposed to, maybe, you know, this thing is an experiment, so a few bugs are fine; you're just trying to experiment and get the feel of the thing.

4976.756

And so if the model gets really annoying when you're writing an experiment, that's really bad. But if you're writing something for super production, you're writing a database, right? You're writing code in Postgres or Linux or whatever. You're Linus Torvalds. It's sort of unacceptable to have even an edge case. And just having the calibration of, like,

5080.762

In fact, I actually think this is one of the things I learned from Arvid: you know, aesthetically, I don't like it, but I think there's certainly something where it's useful for the models. And humans just forget a lot, and it's really easy to make a small mistake and cause, like,

5350.187

You know, my hope initially, and I can let Michael chime in too, was that it should first help with the stupid bugs. It should very quickly catch the stupid bugs, like off-by-one errors. Like, sometimes you write something in a comment and do it the other way. It's very common. Like, I do this.

5369.753

I write, like, less-than in a comment and maybe write the greater-than sign or something like that. And the model is like, yeah, you look sketchy. Like, are you sure you want to do that? But eventually it should be able to catch harder bugs, too.
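A toy instance of exactly the comment/code mismatch he describes (hypothetical variable names, editor-added):

```python
scores, cutoff = [10, 42, 7], 20
# keep only scores strictly less than the cutoff
kept = [s for s in scores if s > cutoff]   # bug: comparison flipped vs. the comment
print(kept)  # [42]; a model that reads the comment should flag this line as sketchy
```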

5729.613

I mean, there are certainly cool solutions there. There's this new API that is being developed. It's not in AWS, but, you know... I think it's in PlanetScale. I don't know if PlanetScale was the first one to add it. It's this ability to sort of add branches to a database, which is...

5750.459

Like, if you're working on a feature and you want to test against the prod database, but you don't actually want to test against the prod database, you could sort of add a branch to the database. And the way to do that is to add a branch to the write-ahead log. And there's obviously a lot of technical complexity in doing it correctly. I guess database companies need new things to do.

5768.9

They have good databases now. And I think Turbopuffer, which is one of the databases we use, is maybe going to add branching to the write-ahead log. And so maybe the AI agents will use branching: they'll test against some branch, and it's sort of going to be a requirement for the database to support branching or something.
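An editor-added toy of what branching via the write-ahead log could look like. Real systems such as PlanetScale or Turbopuffer do far more to make this correct and cheap, so treat this as conceptual only:

```python
class BranchableLog:
    """A branch shares its parent's log prefix copy-on-write and appends privately."""
    def __init__(self, parent=None):
        self.parent = parent
        self.entries = []                    # only this branch's own appends

    def append(self, op):
        self.entries.append(op)

    def replay(self):
        # State is (parent prefix) + (private suffix); nothing is copied on fork.
        base = self.parent.replay() if self.parent else []
        return base + self.entries

prod = BranchableLog()
prod.append(("insert", "users", 1))
branch = BranchableLog(parent=prod)          # cheap fork for an agent to test on
branch.append(("delete", "users", 1))        # the experiment never touches prod
print(prod.replay())                         # [('insert', 'users', 1)]
print(branch.replay())                       # [('insert', ...), ('delete', ...)]
```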

5799.329

Right. Yeah. I feel like everything needs branching. Yeah.

5810.768

I mean, there's obviously these super clever algorithms to make sure that you don't actually use a lot of space or CPU or whatever.

5910.943

I have a few friends who are super senior engineers, and one of their lines is: it's very hard to predict where systems will break when you scale them. You can sort of try to predict in advance, but there's always something weird that's going to happen when you add this extra zero. You think you've thought through everything, but you didn't actually think through everything.

5932.74

But I think for that particular system, we've... So, for concrete details: the thing we do is, obviously, we chunk up all of your code, and then we send up the code for embedding, and we embed the code. And then we store the embeddings in a database, but we don't actually store any of the code. And then there are reasons around making sure that

5960.996

we don't introduce client bugs, because we're very, very paranoid about client bugs. We store much of the details on the server; everything is sort of encrypted. So one of the technical challenges is always making sure that the local index, the local code base state, is the same as the state that is on the server.
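An editor-added sketch of the flow just described: chunk the code, send chunks for embedding, and store only vectors keyed by a content hash, never the raw code. The embed call, chunking scheme, and record layout are assumptions, not Cursor's actual pipeline:

```python
import hashlib

def embed(chunk: str) -> list[float]:
    # Stand-in for a call to a remote embedding model.
    return [len(chunk) / 1000.0]

def index_file(path: str, text: str, store: dict, chunk_lines: int = 40) -> None:
    """Embed fixed-size line chunks; the store holds vectors, not code."""
    lines = text.splitlines()
    for start in range(0, len(lines), chunk_lines):
        chunk = "\n".join(lines[start:start + chunk_lines])
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in store:                  # unchanged chunks are skipped
            store[key] = {
                "vector": embed(chunk),       # what the server keeps
                "path": path,                 # where to re-read the code locally
                "start_line": start,          # note: no raw code is stored
            }
```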

5984.395

And the way we technically ended up doing that is: for every single file, you can keep a hash, and then for every folder, you can keep a hash which is the hash of all of its children. And you can recursively do that until the top. And why do something complicated? One thing you could do is keep a hash for every file.

6007.294

Then every minute you could try to download the hashes that are on the server and figure out which files differ: maybe you just created a new file, maybe you just deleted a file, maybe you checked out a new branch. And try to reconcile the state between the client and the server. But that introduces absolutely ginormous network overhead.

6027.494

Both on the client side... I mean, nobody really wants us to hammer their Wi-Fi all the time if you're using Cursor. But also, it would introduce ginormous overhead in the database. It would sort of be reading this tens-of-terabytes database, approaching, like, 20 terabytes or something, every second. That's just kind of crazy.

6054.407

You definitely don't want to do that. So what you do is, you just try to reconcile the single hash which is at the root of the project. And then if something mismatches, you go find where all the things disagree: you look at the children and see if the hashes match, and if the hashes don't match, go look at their children, and so on.

6070.995

But you only do that in the scenario where things don't match. And for most people, most of the time, the hashes match.
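An editor-added sketch of the hash-tree (Merkle-style) reconciliation just described. The data layout is hypothetical, but the compare-the-root-then-recurse logic is the scheme he outlines:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def tree_hash(node) -> str:
    """A folder's hash combines its files' hashes and its subfolders' hashes."""
    # node layout (assumed): {"files": {name: bytes}, "dirs": {name: node}}
    parts = [h(content) for _, content in sorted(node["files"].items())]
    parts += [tree_hash(child) for _, child in sorted(node["dirs"].items())]
    return h("".join(parts).encode())

def diff(local, remote, path=""):
    """Return changed paths, descending only into subtrees whose hashes disagree."""
    if tree_hash(local) == tree_hash(remote):
        return []                    # the common case: one root comparison, done
    changed = [path + name
               for name in set(local["files"]) | set(remote["files"])
               if h(local["files"].get(name, b"")) != h(remote["files"].get(name, b""))]
    for name in set(local["dirs"]) | set(remote["dirs"]):
        empty = {"files": {}, "dirs": {}}
        changed += diff(local["dirs"].get(name, empty),
                        remote["dirs"].get(name, empty), path + name + "/")
    return changed
```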

6088.226

And, I mean, the reason it's gotten hard is just because of the number of people using it, and...

6094.051

You know, some of your customers have really, really large code bases, to the point where... we obviously have our code base, which is big, but, I mean, it's just not the size of some company that's been there for 20 years and has a ginormous number of files. And you sort of want to scale that across programmers.

6114.522

There are all these details where building a simple thing is easy, but scaling it to a lot of people, a lot of companies, is obviously a difficult problem. Which is sort of independent of... so part of this is scaling our current solution, and part is coming up with new ideas, which obviously we're working on. But then scaling all of that in the last few weeks, months.

6334.803

And it's not a problem of, like, weaker computers. It's just that, for example, if you're some big company, you have a big-company code base, and it's just really hard to process a big-company code base, even on the beefiest MacBook Pros. So it's not even a matter of, if you're just, like,

6353.06

a student or something. I think if you're, like, the best programmer at a big company, you're still going to have a horrible experience if you do everything locally. I mean, you could do edge and sort of scrape by, but, again, it wouldn't be fun anymore.

6437.668

Don't you want the most capable model? You want Sonnet?

6610.093

Yeah. I mean, the thing I'm actually quite worried about is sort of the world where... I mean, so Anthropic has this responsible scaling policy, and so we're on, like, the low ASLs, which is the Anthropic security level or whatever, of the models. But as we get to, quote unquote, ASL-3, ASL-4, whatever models, which are sort of very powerful...

6634.71

For mostly reasonable security reasons, you would want to monitor all the prompts. And I think that's reasonable and understandable, where everyone is coming from. But, man, it'd be really horrible if all the world's information is monitored that heavily. It's way too centralized. It's this really fine line you're walking, where on the one side, you don't want the models to go rogue.

6661.523

On the other side, it's humans. I don't know if I trust all the world's information to pass through three model providers.

7548.874

To be clear, we have ideas. We just need to try and get something incredibly useful before we put it out there.

8085.877

Whoever prompted it. I'm actually surprisingly curious what a good bet for when AI will get the Fields Medal will be.

8099.77

I don't know what Aman's bet here is.

8104.011

Fields Medal. Oh, Fields Medal level. Fields Medal comes first, I think.

8113.958

No, sure. Like, I don't even know if...

8148.013

I think that's probably more likely. Like, it's probably much more likely that it'll get there. Yeah, yeah, yeah. Well, I think it goes to, like, I don't know, BSD, which is the Birch and Swinnerton-Dyer conjecture, or, like, the Riemann hypothesis, or any one of these hard, hard math problems that are just actually really hard.

815.812

Yeah, you can sort of iterate and fix it. I mean, the other underrated part of Copilot for me was that it was just the first real AI product, the first language model consumer product.

8164.843

It's sort of unclear what the path to even get a solution looks like. Like, we don't even know what a path looks like, let alone...

8184.919

I mean, I'd be very happy. I'd be very happy. But I don't know if I... I think 2028, 2030. For the Fields Medal? Fields Medal. All right.

8483.8

I mean, isn't the answer really simple? You just try to get as much compute as possible. Like, at the end of the day, all you need to buy is the GPUs, and then the researchers can figure out the rest. You know, you can tune whether you want to pre-train a big model or a small model.

8508.58

I'm more privy to Arvid's belief that we're sort of idea limited, but there's always... But if you have a lot of compute, you can run a lot of experiments.

8634.005

I mean, I think if you see a clear path to improvement, you should always take the low-hanging fruit first, right? And I think probably OpenAI and all the other labs did the right thing to pick off the low-hanging fruit, where the low-hanging fruit is: you could scale up to GPT-4.25 scale, and you just keep scaling, and things keep getting better.

8661.872

There's no point experimenting with new ideas when everything is working; you should just bang on it and try to get as much juice out as possible. And then maybe when you really need new ideas... I think if you're spending $10 trillion, you probably want to spend some of it actually re-evaluating your ideas. You're probably idea-limited at that point.

872.111

It's bigger and better, but predictably better. That's another topic of conversation.

9308.795

That person on the team loves Cursor Tab more than anybody else.

9343.974

It's sort of higher bandwidth. The communication to the computer just becomes higher and higher bandwidth, as opposed to just typing, which is much lower bandwidth than communicating intent.

986.224

There's one that I distinctly remember. So my roommate is an IMO gold winner, and there's a competition in the U.S. called the Putnam, which is sort of the IMO for college people; it's this math competition. He's exceptionally good. So Shengtong and Aman, I remember, it's sort of June of 2022.