
Arvid Lunnemark

Appearances

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1056.467

That's what the- Well, it was technically not.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1392.096

Yeah, I think one thing that helps us is that we're doing it all in one place: we're developing the UX and the way you interact with the model at the same time as we're developing how we actually make the model give better answers. So: how you build up the prompt, how you find the context, and, for Cursor Tab, how you train the model.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1416.684

So I think it helps us to have the same people working on the entire experience, end-to-end.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1801.404

The full generalization is next-action prediction. Sometimes you need to run a command in the terminal, and it should be able to suggest the command based on the code that you wrote, too. Or sometimes it suggests something, but it's hard for you to know if it's correct, because you actually need some more information first.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

1826.024

Like, you need to know the type to be able to verify that it's correct. And so maybe it should actually take you to a place like the definition of something, and then take you back, so that you have all the requisite knowledge to be able to accept the next completion.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2073.383

I am personally very excited about making a lot of improvements in this area. We often talk about it as the verification problem: these diffs are great for small edits, but for large edits, or when it's multiple files or something, it's actually a little bit prohibitive to review these diffs. And so there are a couple of different ideas here.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2103.14

One idea that we have is: okay, parts of the diff are important; they have a lot of information. And parts of the diff are just very low entropy; they're the same thing over and over again. And so maybe you can highlight the important pieces and gray out the not-so-important pieces. Or maybe you can have a model that

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2124.038

looks at the diff and sees, oh, there's a likely bug here; I will mark this with a little red squiggly and say, you should probably review this part of the diff. Ideas in that vein, I think, are exciting.
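One concrete reading of the "low entropy" remark above: lines whose structure repeats across a diff carry little information, so only the unusual lines need careful review. Below is a toy TypeScript sketch of that heuristic; it is my illustration under that assumption, not Cursor's actual algorithm, and `shape` and `markForReview` are hypothetical names.

```ts
// Collapse string literals and identifiers so structurally repeated lines
// (e.g. a wall of near-identical imports) map to the same "shape".
function shape(line: string): string {
  return line
    .replace(/"[^"]*"|'[^']*'/g, "STR") // normalize string literals
    .replace(/\b[A-Za-z_]\w*\b/g, "ID"); // normalize identifiers
}

function markForReview(changedLines: string[]): Array<{ line: string; review: boolean }> {
  const counts = new Map<string, number>();
  for (const l of changedLines) {
    const s = shape(l);
    counts.set(s, (counts.get(s) ?? 0) + 1);
  }
  // Lines whose structure repeats are "low entropy" and could be grayed out;
  // structurally unique lines get highlighted for review.
  return changedLines.map(line => ({
    line,
    review: (counts.get(shape(line)) ?? 0) === 1,
  }));
}

const diff = [
  'import { alpha } from "./alpha";',
  'import { beta } from "./beta";',
  'import { gamma } from "./gamma";',
  "if (user.id !== session.ownerId) throw new Error('forbidden');",
];
console.log(markForReview(diff)); // only the `if` line is flagged for review
```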

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2151.727

Yeah, and you want an intelligent model to do it. Currently, diff algorithms are just normal algorithms; there is no intelligence. There was intelligence that went into designing the algorithm, but there's none in applying it: it doesn't care whether the change is about this thing or that thing, whereas you want a model to do this.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2289.519

Just one idea there: I think ordering matters. Generally, when you review a PR, you have this list of files and you're reviewing them from top to bottom. But actually, you want to understand this part first, because it came logically first, and then you want to understand the next part. And you don't want to have to figure that out yourself.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2309.101

You want a model to guide you through the thing.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2318.537

I think sometimes. I don't think it's going to be the case that all of programming will be natural language. And the reason for that is, you know, if I'm pair programming with Sualeh, and Sualeh is at the computer and the keyboard, then sometimes, if I'm driving, I want to say to Sualeh, hey, implement this function. And that works.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2339.446

And then sometimes it's just so annoying to explain to Sualeh what I want him to do, and so I actually take over the keyboard and show him: I write part of the example, and then it makes sense. And that's the easiest way to communicate. And so I think that's also the case for AI.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2356.072

Sometimes the easiest way to communicate with the AI will be to show an example, and then it goes and does the thing everywhere else. Or sometimes, if you're making a website, for example, the easiest way to show the AI what you want is not to tell it what to do, but to drag things around or draw things. Yeah.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

2373.918

And maybe eventually we will get to brain-machine interfaces or whatever, and it can kind of understand what you're thinking. So I think natural language will have a place, but I think it will definitely not be the way most people program most of the time.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3137.15

Yeah, I think it depends on which model you're using. They're all slightly different, and they respond differently to different prompts. But I think the original GPT-4 and the earlier generation of models from last year were quite sensitive to the prompts, and they also had a very small context window.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3160.004

And so we have all of these pieces of information from the codebase that might be relevant in the prompt: you have the docs, you have the files that you add, you have the conversation history. And then there's the problem of how you decide what you actually put in the prompt when you have limited space.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3177.047

And even for today's models, even when you have long context, filling out the entire context window means that it's slower. It means that sometimes the model actually gets confused, and some models get more confused than others. And we have this one system internally that we call Preempt, which helps us with that a little bit.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3196.418

And I think it was built for the era before, when we had 8,000-token context windows. And it's a little bit similar to when you're making a website: you want it to work on mobile and you want it to work on a desktop screen, and you have this dynamic information, which you don't have if you're, for example, designing a print magazine, where you know exactly where you can put stuff.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3228.483

But when you have a website, or when you have a prompt, you have these inputs, and then you need to format them to always work. Even if the input is really big, you might have to cut something down. And so the idea was: okay, let's take some inspiration. What's the best way to design websites?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3243.814

Well, the thing that we really like is React and the declarative approach, where you use JSX in JavaScript and you declare: this is what I want, and I think this has higher priority, or higher z-index, than something else. And then you have this rendering engine. In web design it's Chrome, and in our case it's the Preempt renderer, which then fits everything onto the page.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3273.534

You just declare what you want, and it figures out how to fit it for you. And so we have found that to be quite helpful. And I think the role of it has shifted over time: initially it was to fit to these small context windows, and now it's really useful because it helps us with...

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3292.302

splitting up the data that goes into the prompt from the actual rendering of it. And so it's easier to debug, because you can change the rendering of the prompt and then try it on old prompts, since you have the raw data that went into the prompt, and you can see: did my change actually improve it for this entire eval set? So do you literally prompt with JSX? Yes, yes.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3315.618

So it kind of looks like React. There are components. We have one component that's a file component, and it takes in the cursor. Usually there's one line where the cursor is in your file, and that's probably the most important line because that's the one you're looking at. And so then you can give priorities.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3331.288

So that line has the highest priority, and then you subtract one for every line that is farther away. And then eventually when it's rendered, it figures out how many lines can actually fit, and it centers around that thing.
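The priority scheme described here (highest at the cursor line, decaying by one per line of distance, then fitted to a budget) is easy to sketch. The following TypeScript is a hypothetical reconstruction, not Cursor's actual Preempt code, and it budgets in characters where a real renderer would count tokens.

```ts
type Piece = { text: string; priority: number };

// "File component": the cursor line gets the highest priority, and priority
// drops by one for each line of distance from the cursor.
function fileComponent(lines: string[], cursorLine: number, base = 100): Piece[] {
  return lines.map((text, i) => ({
    text,
    priority: base - Math.abs(i - cursorLine),
  }));
}

// Renderer: greedily keep the highest-priority pieces that fit the budget,
// then restore original order, which naturally centers on the cursor line.
function render(pieces: Piece[], budget: number): string {
  const indexed = pieces.map((p, order) => ({ ...p, order }));
  indexed.sort((a, b) => b.priority - a.priority);
  const kept: typeof indexed = [];
  let used = 0;
  for (const p of indexed) {
    if (used + p.text.length <= budget) {
      kept.push(p);
      used += p.text.length;
    }
  }
  kept.sort((a, b) => a.order - b.order);
  return kept.map(p => p.text).join("\n");
}

// Usage: with a tight budget, the lines nearest the cursor (line 2) survive.
const file = [
  "import x",
  "function f() {",
  "  return x + 1; // cursor here",
  "}",
  "// trailing comment",
];
console.log(render(fileComponent(file, 2), 60));
```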

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3369.839

I think our goal is that you should just do whatever is the most natural thing for you, and then our job is to figure out how to actually retrieve the relevant things so that your request actually makes sense.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3565.438

We think agents are really, really cool. An agent resembles, sort of, a human; you can kind of feel that you're getting closer to AGI, because you see a demo where it acts as a human would, and it's really, really cool. I think agents are not yet super useful for many things, though I think we're getting close to the point where they will actually be useful.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3597.204

And so I think there are certain types of tasks where having an agent would be really nice. For example, we have a bug where you sometimes can't Command-C and Command-V inside our chat input box, and that's a task that's super well specified: I just want to say, in two sentences, this does not work, please fix it.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3620.788

And then I would love to have an agent that just goes off, does it, and then a day later I come back and I review the thing.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3631.818

Yeah: it finds the right files, it tries to reproduce the bug, it fixes the bug, and then it verifies that it's correct. And this could be a process that takes a long time. So I would love to have that. And then, about programming more broadly, there is often this belief that agents will take over all of programming.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3652.555

I don't think we think that that's the case, because for a lot of programming, a lot of the value is in iterating. You don't actually want to specify something upfront, because you don't really know what you want until you've seen an initial version, and then you want to iterate on that and provide more information.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3669.467

And so for a lot of programming, I think you actually want a system that's instant, that gives you an initial version back right away, and then lets you iterate super, super quickly.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3696.264

I think so. I think that would be really cool. For certain types of programming, it would be really cool.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3704.113

Yeah, we aren't actively working on it right now, but definitely: we want to make the programmer's life easier and more fun. Some things are just really tedious, where you need to go through a bunch of steps, and you want to delegate that to an agent. And some things you can actually have an agent do in the background while you're working.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3724.176

Like, let's say you have a PR that's both backend and frontend, and you're working on the frontend. You can have a background agent that does some work and figures out what you're doing, and then, when you get to the backend part of your PR, you have some initial piece of code that you can iterate on. And so that would also be really cool.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

3768.239

It's a pain. It's a pain that we're feeling and we're working on fixing it.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4226.818

And then there is MLA.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4404.877

So, to be clear, we want there to be a lot of stuff happening in the background, and we're experimenting with a lot of things. Right now we don't have much of that happening, other than cache warming or figuring out the right context that goes into your Command-K prompts, for example. But the idea is that if you can actually spend computation in the background, then you can help...

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4429.74

help the user at a slightly longer time horizon than just predicting the next few lines that you're going to write: actually, in the next 10 minutes, what are you going to do? And by doing it in the background, you can spend more computation on it. And so the idea of the shadow workspace, which we implemented and use internally for experiments, is that

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4454.033

to actually take advantage of doing stuff in the background, you want some kind of feedback signal to give back to the model, because otherwise you can get higher performance just by letting the model think for longer, and o1 is a good example of that. But another way you can improve performance is by letting the model iterate and get feedback.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4474.821

And so one very important piece of feedback when you're a programmer is the language server, which is this thing that exists for most languages, with a separate language server per language. And it can tell you, you know, you're using the wrong type here, and give you an error; or it can allow you to go to definition. It sort of understands the structure of your code.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4498.266

So language servers are extensions: there's a TypeScript language server developed by the TypeScript people, a Rust language server developed by the Rust people, and they all interface with VS Code over the Language Server Protocol.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4509.852

That way, VS Code doesn't need to have all of the different languages built in; rather, you can use the existing compiler infrastructure.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4519.256

It's for linting, for going to definition, and for seeing the right types that you're using.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4527.788

Yes, type checking and going to references. And when you're working in a big project, you kind of need that; if you don't have it, it's really hard to code in a big project.
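For a sense of what "interfacing over the Language Server Protocol" looks like on the wire, here is a minimal hedged sketch: JSON-RPC messages framed with a Content-Length header over stdio. It assumes the `typescript-language-server` binary is installed; error handling and the rest of the handshake are omitted.

```ts
// Minimal LSP client sketch: spawn a language server and send the standard
// `initialize` request. Diagnostics such as "you're using the wrong type
// here" later arrive as `textDocument/publishDiagnostics` notifications.
import { spawn } from "node:child_process";

const server = spawn("typescript-language-server", ["--stdio"]);

function send(message: object): void {
  const body = JSON.stringify(message);
  // LSP frames every JSON-RPC message with a Content-Length header.
  server.stdin.write(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
}

send({
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: { processId: process.pid, rootUri: null, capabilities: {} },
});

server.stdout.on("data", (chunk) => console.log(chunk.toString()));
```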

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4546.809

So it's being used in Cursor to show things to the programmer, just like in VS Code. But then the idea is that you want to show that same information to the models, the LLM models, and you want to do that in a way that doesn't affect the user, because you want to do it in the background. And so the idea behind the shadow workspace was: okay, one way we can do this is

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4568.108

we spawn a separate window of Cursor that's hidden. You can set this flag in Electron so it's hidden: there is a window, but you don't actually see it. And inside of this window, the AI agents can modify code however they want, as long as they don't save it, because it's still the same folder, and then they can get feedback from the linters, go to definition, and iterate on their code.
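The "hidden window" flag mentioned here is standard Electron. A minimal sketch of just that piece follows; the loaded file is a placeholder, and the real shadow workspace of course involves much more (not saving to disk, wiring the agent to the linters).

```ts
// Sketch of spawning a hidden Electron window: it fully exists and runs,
// but is never shown to the user.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const shadow = new BrowserWindow({ show: false }); // hidden window
  // Placeholder: a real shadow workspace would load the same editor UI the
  // visible window uses, and an AI agent would drive it in the background.
  shadow.loadFile("editor.html");
});
```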

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4596.944

So that's the eventual version; that's what you want. And a lot of the blog post is actually about how you make that happen, because it's a little bit tricky. You want it to be on the user's machine so that it exactly mirrors the user's environment. And then, on Linux, you can do this cool thing where you can actually mirror the file system and have the

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4619.779

AI make changes to the files, where it thinks that it's operating at the file level, but actually that's stored in memory, and you can create a kernel extension to make it work. On Mac and Windows it's a little bit more difficult, but it's a fun technical problem, so that's why.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

4817.722

Even the smartest models.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5023.382

And all caps repeated 10 times.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5049.309

Yeah, and I think that one is also partially for today's AI models, where if you actually write dangerous, dangerous, dangerous in every single line, the models will pay more attention to it and will be more likely to find bugs in that region.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5074.679

Yeah, I mean, it's controversial. Some people think it's ugly. Sualeh does not like it.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5135.096

Once we have formal verification for everything, then you can do whatever you want, and you know for certain that you have not introduced a bug if the proof passes.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5148.402

I think people will just not write tests anymore. You write a function, the model suggests a spec, and you review the spec. In the meantime, a smart reasoning model computes a proof that the implementation follows the spec. And I think that happens for most functions.
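As a sketch of what a machine-suggested, human-reviewed spec could look like, here is a toy Lean 4 example (all names are illustrative; nothing here is from the episode). The programmer would review the `Sorted` statement; the reasoning model's job would be to replace `sorry` with a proof.

```lean
-- A toy spec: the output of `mySort` is ordered.
-- (A fuller spec would also require it to be a permutation of the input.)
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

def insertOrdered (a : Nat) : List Nat → List Nat
  | [] => [a]
  | b :: rest => if a ≤ b then a :: b :: rest else b :: insertOrdered a rest

def mySort : List Nat → List Nat
  | [] => []
  | a :: rest => insertOrdered a (mySort rest)

-- The human reviews this statement; a smart reasoning model would then
-- search for the proof that replaces `sorry`.
theorem mySort_sorted (l : List Nat) : Sorted (mySort l) := by
  sorry
```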

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5184.754

Like you think that spec is hard to generate?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5201.351

But then also... Even if you have the spec? If you have the spec. But how do you map the spec?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5208.427

No, the spec would be formal.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5227.473

Yeah, yeah. I think you can probably also evolve the spec languages to capture some of the things that they don't really capture right now. I don't know. I think it's very exciting.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5246.498

I think entire codebases are harder, but that is what I would love to have, and I think it should be possible. There's a lot of recent work where you can formally verify down to the hardware: you formally verify the C code, then you formally verify through the GCC compiler, and then through the Verilog down to the hardware.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5269.552

And that's an incredibly big system, but it actually works. And I think big codebases are similar, in that they're multi-layered systems. If you can decompose a codebase and formally verify each part, then I think it should be possible. I think the specification problem is a real problem, but... How do you handle side effects?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5313.969

I think it feels possible that you could actually prove that a language model is aligned, for example. Or like you can prove that it actually gives the right answer. That's the dream.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5418.92

Yeah. And then, how do you actually do this? We have had a lot of contentious dinner discussions about how you actually train a bug model. But one very popular idea is that it's potentially easier to introduce a bug than to find one. And so you can train a model to introduce bugs in existing code,

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5436.313

and then you can train a reverse bug model that can find bugs using this synthetic data. So that's one example, but yeah, there are lots of ideas for how to do this.
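A toy sketch of that synthetic-data idea: mutate working code to introduce a bug, producing (buggy, clean) pairs a bug-finding model could train on. Real pipelines would presumably use an LLM or AST-level mutations; this illustration just flips the first comparison or logical operator it finds, and `injectBug` is a hypothetical name.

```ts
const MUTATIONS: Array<[RegExp, string]> = [
  [/===/, "!=="],
  [/</, "<="],
  [/&&/, "||"],
];

function injectBug(clean: string): { buggy: string; clean: string } | null {
  for (const [pattern, replacement] of MUTATIONS) {
    if (pattern.test(clean)) {
      // String.replace with a non-global RegExp changes only the first match,
      // keeping the synthetic bug small and localized.
      return { buggy: clean.replace(pattern, replacement), clean };
    }
  }
  return null; // nothing mutable found
}

console.log(injectBug('if (user.role === "admin") { grantAccess(user); }'));
// → buggy: 'if (user.role !== "admin") { grantAccess(user); }'
```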

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5580.121

Yeah, it's a controversial idea inside the company. I think it sort of depends on how much you believe in humanity, almost. I think it would be really cool if trying to find a bug cost nothing when nothing is found: you spend $0. And then, if it does find a bug and you click Accept, it also shows, in parentheses, $1, and so you spend $1 to accept the bug fix.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5606.866

And then, of course, there's the worry like, okay, we spent a lot of computation. Maybe people will just copy-paste. I think that's a worry. And then there is also the worry that introducing money into the product makes it kind of...

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5620.725

you know, like, it doesn't feel as fun anymore. You have to think about money, and all you want to think about is the code. And so maybe it actually makes more sense to separate it out: you pay some fee every month, and then you get all of these things for free. But there could be a tipping component, which is not... Yes, but it still has that dollar symbol. I think it's fine, but I also see the point where maybe you don't want to introduce it.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5831.403

AWS is just really, really good. It's really good. Whenever you use an AWS product, you just know that it's going to work. It might be absolute hell to go through the steps to set it up.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

5850.583

Because it's just so good, it doesn't need... It's the nature of winning. I think that's exactly it; it's just the nature of their winning. Yeah, but AWS you can always trust: it will always work, and if there is a problem, it's probably your problem. Okay, are there some interesting challenges for you guys, as a pretty new startup, in scaling to so many people?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6211.908

I think the most obvious one is just that you want to find out where something is happening in your large codebase. You sort of have a fuzzy memory of, okay, I want to find the place where we do X, but you don't exactly know what to search for in a normal text search.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6229.858

And so you ask the chat: you hit Command-Enter to ask with the codebase chat, and then very often it finds the right place that you were thinking of.
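The mechanics behind a codebase question like this aren't spelled out in the episode, but a common approach (assumed here, not confirmed as Cursor's) is embedding-based retrieval: embed code chunks and the query, then rank by cosine similarity. A toy TypeScript sketch, with hardcoded vectors standing in for a real embedding model:

```ts
type Chunk = { file: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// In practice these vectors would come from an embedding model run over
// chunks of the codebase; toy 3-dimensional values here.
const index: Chunk[] = [
  { file: "auth/login.ts", vector: [0.9, 0.1, 0.0] },
  { file: "billing/invoice.ts", vector: [0.1, 0.8, 0.2] },
];

function search(queryVector: number[], k = 1): Chunk[] {
  return [...index]
    .sort((x, y) => cosine(y.vector, queryVector) - cosine(x.vector, queryVector))
    .slice(0, k);
}

// "Where do we check passwords?" embeds near the auth chunk.
console.log(search([0.85, 0.15, 0.05])); // → auth/login.ts
```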

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6285.245

Yeah, we thought about it, and I think it would be cool to do it locally; I think it's just really hard. One thing to keep in mind is that some of our users use the latest MacBook Pro, but most of our users, like more than 80%, are on Windows machines, and many of those are not very powerful. So local models really only work on the latest computers.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6310.656

And it's also a big overhead to build that in. So even if we would like to do it, it's currently not something that we are able to focus on. There are some people who do that, and I think that's great. But especially as models get bigger and bigger and you want to do fancier things with those bigger models, it becomes even harder to do locally.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

646.01

Yes, that is very important, and it's actually a sort of underrated aspect of how we decide what to build. A lot of the things that we build, we try out, we run an experiment, and then we actually throw them out because they're not fun. And so a big part of being fun is being fast a lot of the time. Fast is fun.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6466.377

There's actually an alternative to local models that I am particularly fond of. I think it's still very much in the research stage, but you could imagine doing homomorphic encryption for language model inference: you encrypt your input on your local machine, then you send that up, and then the server can use lots of computation.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6489.3

The server can run models that you cannot run locally on this encrypted data, but it cannot see what the data is. Then it sends back the answer, you decrypt it, and only you can see the answer. So I think that's still very much research, and all of it is about trying to make the overhead lower, because right now the overhead is really big.
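Fully homomorphic encryption for LLM inference is still research, as Arvid says, but the core idea, that the server computes on data it cannot read, can be demonstrated with the much simpler, additively homomorphic Paillier cryptosystem. The TypeScript sketch below uses tiny hardcoded primes and is an insecure toy, purely to illustrate the flow.

```ts
// Paillier toy: ciphertexts can be multiplied by an untrusted server to add
// the underlying plaintexts, without the server ever holding the key.
const p = 1789n, q = 1861n; // toy primes: utterly insecure, demo only
const n = p * q, n2 = n * n, g = n + 1n;
const lambda = (p - 1n) * (q - 1n); // φ(n), which works for decryption here

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

function modInv(a: bigint, m: bigint): bigint {
  // extended Euclidean algorithm
  let [oldR, r] = [a % m, m];
  let [oldS, s] = [1n, 0n];
  while (r !== 0n) {
    const qt = oldR / r;
    [oldR, r] = [r, oldR - qt * r];
    [oldS, s] = [s, oldS - qt * s];
  }
  return ((oldS % m) + m) % m;
}

const mu = modInv(lambda % n, n);

function encrypt(m: bigint, r: bigint): bigint {
  return (modPow(g, m, n2) * modPow(r, n, n2)) % n2;
}
function decrypt(c: bigint): bigint {
  const L = (x: bigint) => (x - 1n) / n;
  return (L(modPow(c, lambda, n2)) * mu) % n;
}

// Client encrypts; the server adds the plaintexts *without decrypting* by
// multiplying ciphertexts; the client decrypts the sum.
const c1 = encrypt(20n, 123n);
const c2 = encrypt(22n, 456n);
const cSum = (c1 * c2) % n2; // server side: no key, no plaintext visible
console.log(decrypt(cSum)); // 42n
```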

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6509.139

But if you can make that happen, I think that would be really, really cool. And I think it would be really, really impactful. Because I think one thing that's actually kind of worrisome is that as these models get better and better, they're going to become more and more economically useful. And so more and more of the world's information and data will flow through one or two centralized actors.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6532.696

And then there are worries about traditional hacker attempts, but it also creates this kind of scary situation where, if all of the world's information is flowing through one node in plain text, you can have surveillance in very bad ways. And sometimes that will happen for, you know, initially, good reasons.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6556.149

Like, people will want to protect against bad actors using AI models in bad ways, and then someone will add in some surveillance code, and then someone else will come in and, you know, you're on a slippery slope, and then you start doing bad things with a lot of the world's data. So I'm very hopeful that we can solve homomorphic encryption for language model inference.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6673.987

Because I think a lot of this data would never have gone to the cloud providers in the first place. You often want to give the AI models more data, personal data that you would never have put online in the first place, to these companies or to these models.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

6697.418

And it also centralizes control. Right now, with the cloud, you can often use your own encryption keys, and then the provider just can't really do much. But here it's just centralized actors that see the exact plain text of everything.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

7621.423

I saw, time to shut down Cursor. Time to shut down Cursor, thank you.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

801.499

And I think actually one of the underrated aspects of GitHub Copilot is that even when it's wrong, it's a little bit annoying, but it's not that bad, because you just type another character and then maybe it gets it, or you type another character and then it gets it. So even when it's wrong, it's not that bad.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

8110.575

But it's also this, like, isolated system that you can verify.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

8172.648

And you don't buy the idea that this is an isolated system where you have a good reward signal, so it feels like it's easier to train for that?

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

8525.014

I would, but I do believe that we are limited in terms of ideas that we have.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9041.536

I agree, and I'm very excited for that change. One thing that happened recently was that we wanted to do a relatively big migration of our codebase. We were using AsyncLocalStorage in Node.js, which is known to be not very performant, and we wanted to migrate to a context object. And this is a big migration that affects the entire codebase.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9064.022

And Sualeh and I spent, I don't know, five days working through this, even with today's AI tools. I am really excited for a future where I can just show a couple of examples, and then the AI applies that to all of the locations, and then it highlights, oh, this is a new case, what should I do? And then I show exactly what to do there, and then that can be done in like 10 minutes.
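For readers unfamiliar with the migration being described, here is a hedged before/after sketch; `RequestContext` and the function names are illustrative, since the actual Cursor code isn't public. The pain is that the "after" style threads the context through every call site, which is why the change touched the whole codebase.

```ts
import { AsyncLocalStorage } from "node:async_hooks";

// Before: implicit per-request state via AsyncLocalStorage. Convenient, but
// it carries runtime overhead on every async hop.
const als = new AsyncLocalStorage<{ userId: string }>();
function handleBefore(userId: string): void {
  als.run({ userId }, () => doWorkBefore());
}
function doWorkBefore(): void {
  console.log("user:", als.getStore()?.userId);
}

// After: the context travels as an explicit argument, so every function in
// the call chain has to change its signature.
interface RequestContext { userId: string }
function handleAfter(userId: string): void {
  doWorkAfter({ userId });
}
function doWorkAfter(ctx: RequestContext): void {
  console.log("user:", ctx.userId);
}

handleBefore("a1");
handleAfter("a1");
```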

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9089.203

And then you can iterate much, much faster. Then you don't have to think as much upfront and stand at the blackboard thinking, exactly how are we going to do this, because the cost is so high. You can just try something first, realize, oh, this is not actually exactly what I want, and then change it instantly again after.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9107.178

And so, yeah, I think being a programmer in the future is going to be a lot of fun.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9312.118

Yeah. And it's also not just pressing tab. "Just press tab" is the easy way to say it, the catchphrase, you know. But what you're actually doing when you're pressing tab is injecting intent all the time while you're doing it: sometimes you're rejecting it, sometimes you're typing a few more characters.

Lex Fridman Podcast

#447 – Cursor Team: Future of Programming with AI

9331.774

And that's the way that you're sort of shaping the thing that's being created. And I think programming will change a lot, toward just: what is it that you want to make?