David Shu
Appearances
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
And I honestly don't use the Wikipedia search and haven't in a while, so it may be amazing. But I have, as a consumer, a general concept that the search systems on individual websites are not terrific. And Google is baseline decent. And so as long as I'm searching public data, I would generally prefer the Google search.
I guess in a sense that's a less and less true statement every year, because large chunks of the web are just not public data anymore. You can't search Facebook with Google, you can't search Instagram with it, and you can't find a TikTok with it or anything like that.
And so the existence of those, I think they sometimes get called walled gardens, says that we should have more fine-tuned tools like that. And there's just a lot of similarities there. So whether a startup the size of Tailscale should build customized models for its users is, I think, a big open-ended question around how the model space will evolve.
And in my last year of working with models, fine-tuning them, training them, carefully prompting them, I've found you can do more and more with just carefully structured prompts and long contexts that you used to have to use fine-tuning to achieve. But my big takeaway from all of this is that they are actually extremely hard to turn into products,
and to get those details right in a general sense for shipping to users. They're actually quite easy to get going for yourself. And I think if anything, more people should explore running models locally and playing with them because they're a ton of fun and they can be very productive very quickly.
And so they have a very high bar for internal tools. And when they first started, we were in the same YC batch, actually; we were both Winter '17. And they were, yeah, I think maybe customer number five or something like that for us. I think DoorDash was a little bit before them, but they were pretty early.
but in much the same way that it's really easy to write a Python script that you run yourself on your desktop, versus a Python script you ship in production for users, LLMs have this huge complexity gap when it comes to trying to build products for others. And so I agree that that sort of tooling would be fun and should exist.
I also think where we are today, it's quite hard for a team the size of a startup to ship that as not part of the core product experience.
I think that is exactly the right way to frame the question for a business. And I don't know the answer to a lot of those questions. I can talk to some of the more technical costs involved. What the benefits would be to the company is extremely open-ended to me. I can't imagine a way to measure that. Based on talking to customers of Tailscale who deploy it,
thinking about the companies where... and so, to go back to something you said earlier about how you use it and you don't pay for it: I think that's great, because Tailscale has no intention of making money off individual users. That's not a major source of revenue for the company. The company's major source of revenue is corporate deployments. And there's a blog post by my co-founder Avery about
And the problem they had was that they had so many internal tools they needed to go and build, but not enough time or engineers to build all of them. And even if they did have the time or the engineers, they wanted their engineers focused on building external-facing software, because that is what would drive the business forward. Brex's mobile app, for example, is awesome.
how the free plan stays free on our website, which sort of explains this, that individual users help bring Tailscale to companies who use it for business purposes and they fund the company's existence. So looking at those business deployments, you do see Tailscale gets rolled out initially at companies for some tiny subset of the things it could be used for.
And it often takes quite a while to roll out for more. And even if the company has a really good roadmap and a really good understanding of all of the ways they could use it, it can take a very long time to solve all of their problems with it. And that's assuming they have a really good understanding of all of the things it can do.
And the point you're making, Adam, that people often don't even realize all the great things you can do with it is true. And I'm sure a tool that helps people explore what they could do would have some effect on revenue. In terms of the technical side of it and the challenges, there are several challenges. In the very broad sense, the biggest challenge with LLMs is just the enormous amount of
what you might call traditional non-model engineering has to happen out the front of them to make them work well. It's surprisingly involved. I can talk to some things I've been working on over the last year to give you a sense of that. Beyond that, the second big technical challenge is that one of Tailscale's core design principles is that all of the networking is end-to-end encrypted. And
The main thing an LLM needs to give you insight is a source of data. And the major source of data would be what is happening on your network, what talks to what, how does it all work. And that means that any model telling you how you could change your networking layout or give you insight into what you could do would need access to data that we as a company don't have and don't want.
And so we're back to: it would have to be a product you run locally and have complete control over. Which is absolutely, you know, my favorite sort of product. I like open source software that I can see the source code for, compile myself, and run locally. That's how I like all things to be.
Trying to get there with LLMs in the state they are today is actually, I think, pretty tricky. I don't think I've seen an actually shipped product that does that really well for people. There's one, a developer tool that I hear a lot of good talk about... I'm just trying to live-search for it for you. Nope, that's the wrong one.
That's magical shell history, which also sounds really cool. I should use that one. Is that Atuin? Atuin, yeah. That one's awesome. Oh, you've used it? Oh, great. I'm a daily user.
Yeah, I thought that was the only one. There's another one that is in the sort of agent space for developers as they're writing programs. It's like a local Claude, effectively, and it's primarily built around helping you construct prompts really carefully for existing open models. It's come up several times and, I'm sorry, it's fallen out of my head.
The Brex website, for example, is awesome. The expense flow, all really great external-facing software. So they wanted their engineers focused on that, as opposed to building internal CRUD UIs. And so that's why they came to us. And it was awesome. Honestly, a wonderful partnership; it has been for seven, eight years now.
I will look it up later.
But I hear very positive things about it. And that's the closest I've seen to a shipped, completely local product that does that sort of thing.
On which models to use: I think given the state of models that exist today, the major shipped open models are so amazing that it always makes sense to start with one of those, if nothing else as a pre-trained base for anything that's happening. Building a model from scratch is a very significant undertaking, and I don't think it's necessary for most tasks.
The available open models are extremely general-purpose. And so at worst, you would be fine-tuning from one of those to build a product. If you take one of the Llamas, or... I mean, there's a lot of talk about DeepSeek, which produces terrific results. It's a very large model; it'd be very hard to start with it.
Though I understand there's some very good distilled work coming from it using other models.
Today, I think Brex has probably around a thousand Retool apps they use in production, I want to say every week, which is awesome. And their whole business effectively runs now on Retool. And we are so, so privileged to be a part of their journey. And to me, what's really cool about all this is that we've managed to allow them to move so fast.
Yeah, I don't think your experience is unusual, actually. I think almost everyone has your experience. And for most software, I am in the same category. I try things at a very surface level when they're newish and see if there's any really obvious way they help me. And if they don't, I put them aside and come back later. A great example of that is the Git version control system.
So whether it's launching new product lines, whether it's responding to customers faster, whatever it is, if they need an app for that, they can get an app for it in a day, which is a lot better than getting it in six months or a year, or having to schlep through spreadsheets, et cetera. So I'm really, really proud of our partnership with Brex.
It was 10 years before I really sat down and used it. I was using other version control systems. After 10 years, I was like, okay, this thing's probably going to stick around. I guess I'll get over its user interface. Fine. I was reluctant, but I got there in the end. LLMs really struck me as fascinating.
I made this active decision to not do that with them and set out on a process of trying to actively use them, which has involved learning just a really appalling amount, honestly. It's very reasonable that most engineers haven't done really significant things with LLMs yet because it's too much cognitive load. If you're writing computer programs, you're trying to solve a problem.
You only have so much of your brain available for the tools you use for solving problems because you have to fit the problem in there as well and the solution you're building. And that should be most of what you're thinking about. The tools should take up as little space as possible. And right now, to use LLMs effectively, you need to know too much about them. And
That was my big takeaway 11 months ago or so, which is why I started working on tools with some friends to try and figure this out.
because there has to be a way to make this easier. And my main conclusion from all of that is there's an enormous amount of traditional engineering to do in front of LLMs to get there. So the first really effective thing I saw from LLMs is the same thing I think most engineers saw, which was GitHub Copilot, which is a code completion... Oh, so actually... GitHub Copilot has taken on new meanings.
It's more than that now, right? Yeah, it's an umbrella brand that means all sorts of products. And I honestly haven't even used most of those products at this point. The original product
is a code completion system built into Visual Studio Code where, as you type, it suggests a completion for the line, or a few lines beyond where you are, which builds on a very well-established paradigm for programming editors. Visual Studio 6.0 did this 25 years ago with IntelliSense, for completing methods in C++. This is not a new idea.
And, you know, around the same time we had etags for Emacs, or ctags I should say, which gave us similar things in the Unix world. And so this is sort of extending that idea, by bringing out some of the knowledge of a large language model as you type. And I'm really enamored with the entire model. Copilot's original experience when it came out was magical.
It was just like, there was nothing like this before. It was. And it really, I think, jump-started a lot of interest in the space from people who hadn't been working on it, which was almost all of us. And from my perspective, the thing that really struck me was, wow, this works really well. And wow, it makes really obvious, silly mistakes. There's both sides of this.
It would suggest things that just don't compile in ways that are really obvious to anyone who takes a moment to read it. And it would also make really impressive cognitive leaps where it would suggest things that, yes, that is the direction I was heading, and it would have taken me several minutes to explain it to someone, and it got there immediately.
And so I spent quite a lot of time working on code completion systems with the goal of improving them by focusing on a particular programming language. And we've made some good progress there. We actually hope to demonstrate some of that publicly soon, like in the next few weeks, probably in this sketch.dev thing that we've been building.
We'll integrate it so that people can see it and give it a try. But so those models are interesting because they're not the LLM experience that most users have. Like when everyone talks about AI today, they talk about ChatGPT or Claude or these chat-based systems. The thing I really, really like about the original Copilot code completion model is it's not a chat system.
It's a different user interface experience for the knowledge in an LLM. And that's really a lot of fun. And in fact, the technology is a little bit different too. There's a concept in the model space called fill-in-the-middle, which is a lot of fun, where a model is taught a few extra tokens that don't exist in a standard chat model.
And it's taught a prefix token, a suffix token, and a middle token. What you do is feed the file you're in to the model as a prompt: all the characters before where the cursor is get fed right after the prefix token. So you feed in the prefix token, then all those characters of the file. Then you feed in the suffix token, and all the characters after the cursor.
And then you feed in the middle token, and the model completes the thing that you fed in: it writes the characters that go into the middle. That's the prompt structure for one of these models. And you train the model by taking existing code.
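That prompt structure can be sketched in a few lines. This is a simplified illustration; the sentinel token spellings below are placeholders, since each model family defines its own (the Llama, Gemma, and Qwen variants all differ):

```python
def build_fim_prompt(source: str, cursor: int,
                     prefix_tok: str = "<|fim_prefix|>",
                     suffix_tok: str = "<|fim_suffix|>",
                     middle_tok: str = "<|fim_middle|>") -> str:
    """Assemble a fill-in-the-middle prompt: everything before the
    cursor goes after the prefix token, everything after the cursor
    goes after the suffix token, and the model then generates the
    text that belongs after the middle token."""
    return (prefix_tok + source[:cursor] +
            suffix_tok + source[cursor:] +
            middle_tok)

# The editor places the cursor after "return " and asks for a completion.
src = "def add(a, b):\n    return \n"
prompt = build_fim_prompt(src, src.index("return ") + len("return "))
```

Whatever the model emits after the middle token is the suggested completion shown in the editor.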
There are a few papers on how these models are trained, because Meta published one of these models, and Google published one under the Gemma brand. There are a few others out there; there's one from Qwen, and some other derived ones.
And you take existing code files, choose some section of code, mark everything before it as the prefix, everything after it as the suffix, and the section itself as the middle. And that's your training data. You generate a lot of it by taking existing code and breaking it up this way,
by randomly inserting a cursor. Then you've taught a model how to use these extra tokens and how to complete them. And so it's not a chat model at all; it's sort of a sequence-to-sequence model. It's a ton of fun. And the advantage of these systems is that they're very fast compared to chat models.
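Generating one such training example can be sketched as below. This is a simplified version of the idea; the papers differ on details like segment ordering, and the token names here are placeholders:

```python
import random

def make_fim_example(source: str, rng: random.Random) -> str:
    """Cut a file at two random points: everything before the first
    cut is the prefix, everything after the second cut is the suffix,
    and the span between them is the middle the model must learn to
    produce. The training string is the prompt followed by the
    target completion."""
    i, j = sorted(rng.sample(range(len(source) + 1), 2))
    prefix, middle, suffix = source[:i], source[i:j], source[j:]
    return ("<|fim_prefix|>" + prefix +
            "<|fim_suffix|>" + suffix +
            "<|fim_middle|>" + middle)
```

Run over a large corpus of code with many random cuts per file, this produces the training set that teaches the model the fill-in-the-middle token convention.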
And that's the key to the whole code completion product experience: you want your code completion within a couple hundred milliseconds of typing a character. Whereas if you actually time Claude, or you time one of the OpenAI models, they're very slow. They can take a good minute to give you a result. And there's a lot of UI tricks in hiding that minute.
They move the text around on the screen, then they stream it in. Yeah, it's very clever.
Yeah, exactly. You can really feel it with the new reasoning models, o1 and these things, because there's this pause at the beginning. It hurts.
Yeah, it is a ton of fun to watch. I agree. And it is a lot of insight into how the models work, too. Because the insides of the models are a large number of floating point numbers holding intermediate state. And it's very hard to get insight into those. But the words, you can read them. You can make some sense of them. So code completion is, I think, extremely useful to programmers.
It varies a lot depending on what you're writing and how experienced models are with it. And just... how sort of out on the edge of programming you are. If you're really out in the weeds, the models can get less useful. I used a model for writing a big chunk of AVX assembly a few months ago. And the model was both very good at it and very bad at it simultaneously.
And it was very different from the typical experience of asking a model to help with programming. It would constantly get the order of operations wrong, or overcomplicate things, or misunderstand. It was a very different experience from typical programming.
I used all of them for that. Okay. And this is what I meant by I'm spending a lot of time actively exploring the space. I'm putting far too much work into exploring the model space as I do work.
I can't advise people to use them all, you know; that's a bunch of them. Yeah. And this, I think, is the big problem. And, you know, you mentioned that most programmers are probably using this. As far as we can tell, not even one fifth of programmers are using these tools today.
Through surveys. A couple of people have done surveys of programmers and it seems to come back that most people are not using these tools yet. Wow. Which is both shocking to me because they're so useful and also makes a lot of sense because it's a lot of work figuring out how to use them.
I'm not actually the CTO anymore. Oh no, your LinkedIn is outdated. Oh, does it still say that? I thought I had updated it.
I mean, I totally agree that it is coming for it. I also think it's very early days, and a great reason to not learn this technology today is that it's changing so fast. Yeah. You can spend a very long time figuring out how to make it work, and then all of that accumulated skill can be made useless tomorrow by some new product.
I think my LinkedIn might be confusing because it still lists that I was the CTO. I stepped back from the CTO role last year. Okay. So what are you doing now? I am spending my time exploring sort of new product spaces, things that can be done, both inside and outside of Tailscale. Very cool.
Yeah, absolutely. Like a year ago, a common technique with open models that existed was to offer them money to solve problems. You start every prompt by saying, I'll give you $200 if you do this. And it greatly improved outcomes.
All of those techniques are gone now. Yeah. Like if you try bribing a model, it doesn't help. Yeah. There was a great example I saw of that where someone kept saying, I'll give you $200 if you do this. And they did it in a single prompt several times. And they got to the nth case and it said, but you haven't paid me for the previous ones yet.
No means no. That's great. No money means no. Yeah, there you go. All right. They're very funny models. So I spent a long time believing, and I still believe this in the long term, that chat
is currently our primary user interface to the models, and it's not the best interface for most things. The way to get the most value out of models today, when you program, is to have conversations with models about what you're writing. And I think it's quite the mode shift to do that. It's quite taxing to do that, and
it feels like a user interface problem that hasn't been solved yet. And so I've been working a lot with Josh Bleecher Snyder on these things, and we've spent a long time looking for how we can avoid the chat paradigm and make use of models. That's why code completion initially was so interesting: because it's an example of using models without chat, and it's very effective.
So we spent a long time exploring this. To give you another example of something we built in this space, because we've just been trying to build things to see what's actually useful: we built something called Merd, merd.ai, which I think we put up a few weeks ago. And it does merge commits for you. So if you try to push a Git commit, or do a rebase, and you get a merge conflict,
Most of my work inside Tailscale is around helping on the sort of customer side, talking to users or potential users about how it can be useful. And then because I have such an interest in sort of the world of large language models, I've been exploring that. But that is not a particularly good fit for the Tailscale product.
you can actually use LLMs to generate sophisticated merge commits for you. It turns out that's a much harder problem than it looks. Like you would think you just paste in all of the files to the prompt and you ask it to generate the correct files for you. Even the frontier models are all really bad at this. You almost never get a good merge commit out of them.
But with a whole stack of really mundane engineering out the front... mundane's not the right word, because a lot of it's actually really quite sophisticated, but it doesn't involve the LLM itself; it's about carefully constructing the prompt. Traditional is a much better word. With that, you can actually get very good merge commits out of it.
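To give a flavor of what that traditional engineering in front of the model can look like, here is an illustrative sketch (my own, not how merd.ai actually works): rather than pasting whole files into the prompt, extract just the conflicting hunks from Git's conflict markers and structure the prompt around them, so the model sees only what actually differs.

```python
import re

# Git's textual conflict markers: <<<<<<< ours ... ======= ... >>>>>>> theirs
CONFLICT = re.compile(
    r"<<<<<<< [^\n]*\n(.*?)=======\n(.*?)>>>>>>> [^\n]*\n", re.S)

def build_merge_prompt(conflicted_file: str) -> str:
    """Show the model each conflicting hunk separately, instead of
    dumping both full files and hoping it finds the differences."""
    parts = ["Resolve each merge conflict, preserving the intent of both sides."]
    for n, m in enumerate(CONFLICT.finditer(conflicted_file), 1):
        parts.append(f"Conflict {n}\nOURS:\n{m.group(1)}THEIRS:\n{m.group(2)}")
    return "\n\n".join(parts)
```

A real pipeline would do far more: include surrounding context, the commit messages from both sides, build results, and then validate the model's proposed resolution before showing it to anyone.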
And that user experience seems much better for programmers to me. You could imagine it being integrated into your workflows to the point where you send a PR, there's a merge conflict, and it proposes a fix right on the PR for you.
And in fact, we attempted a version of that, where there's a little Git bot that you can at-mention on a PR, and it generates another PR based on it that fixes the merge conflict for you. And that sort of experience doesn't require the chat interface to be exposed to the programmer to make use of the intelligence in the model.
And that is where I dream of developer tools getting so that everyone can use them without having to learn a lot about them. You shouldn't have to learn all the tricks for convincing a model to write a merge commit for you. There should be a button. Or not even a button. It should just do it when GitHub says there's a merge conflict. And so it works pretty well.
We've seen it generate some very sophisticated merge commits for us. We'd love to see more people give it a try and let us know what the state of that is. But just because that is such a hard state to get to, we built Sketch, which exposes the traditional chat interface in the process of writing code.
Because we don't think the models are at a point yet where we can completely get away from chat being part of the developer's workflow.
Right. Yes, I think that lines up really well with the way Josh and I think about these things. Where today, if you open up a model, a cloud provider's frontier model or a local DeepSeek or even a Llama 70B, you can ask it to write a Python script that does something.
I spent quite a long time looking for ways to use this technology inside Tailscale, and it doesn't really fit. And I actually think that's a good thing. It's really nice to find clear lines like that, where you find something it's not particularly useful for. And I wouldn't want to try... you know, a lot of companies are attempting to make things work even if they don't quite make sense.
It could be a Python script to go to the GitHub API and grab some data and present it neatly for you, and it will do a great job. You know, these great models can basically do this in a single shot, where you write a sentence and out comes a Python script that solves a problem. And that's an astonishing technical achievement. I really... it's amazing how quickly I've got used to that as a thing.
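For a concrete sense of the kind of single-shot script being described, here is a hand-written sketch of the sort of thing the models produce. The endpoint is GitHub's real public repository API; the choice of fields to display is arbitrary:

```python
import json
import urllib.request

def fetch_repo(owner: str, repo: str) -> dict:
    """Grab repository metadata from the public GitHub API."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def summarize(repo: dict) -> str:
    """Present a few fields neatly."""
    desc = repo.get("description") or "(no description)"
    return (f"{repo['full_name']}: {desc}\n"
            f"  stars: {repo['stargazers_count']}"
            f"  open issues: {repo['open_issues_count']}")

# Usage (requires network access):
#   print(summarize(fetch_repo("tailscale", "tailscale")))
```

Unauthenticated requests to this endpoint are rate-limited, which is exactly the kind of caveat a single-shot model script often omits.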
Exactly. Like me five years ago, if you told me that, I would struggle to believe it. And yet now I just take it for granted. Yes. And so that works. We've got that. We've got a thing that can write really basic Python scripts for us. Similarly, these systems, at least the frontier models, are good at writing a small React component for you.
With almost any of them, you need more than a single sentence; you need a few sentences to structure the React component. But out comes some HTML and some JavaScript in the React syntax, the JSX syntax or the TSX syntax. And it's pretty close. You know, it might need some tweaking, you might have some back and forth to get there, but you can get about that out of it.
Clearly, models are going to improve. There's no evidence to suggest we're at the limit here, as the models keep improving every month at this rate. And part of what we're interested in Sketch is getting beyond helping you write a function, which I also use today. I get Frontier models to write functions for me. How can we climb the complexity ladder there?
And so the point we chose is a point that is comfortable for us and what is helpful for us is the Go package. How can we get a model to help us build a Go package to solve a problem? And there's an implicit assumption here in that the shape of Go packages looks slightly different at the end of this.
Packages are a little bit smaller and you have a few more of them than you would in a sort of traditional Go program you wrote by hand. But I don't think that is necessarily a bad thing. Honestly, my own programming, as a Go programmer, I tend to write larger packages because there's a lot of extra work involved in me breaking it into smaller packages.
And there's often this thought process going on in my mind of like, oh, in the future, this would be more maintainable as more packages. But it's more work for me to get there today. So I'll combine it all now and maybe refactor it another day. Yeah. And switching to trying to have LLMs write significant chunks of packages for you makes you do that work upfront. That's not necessarily a bad thing.
It's perhaps more the way we'd like our code to end up. And so Sketch is about taking an LLM and plugging a lot of the tooling for Go into the process of using the LLM to help it. So an example is I asked it the other day to write some middleware to Brotli-compress HTTP responses under certain circumstances.
Because Chrome can handle Brotli encoding, and it's very efficient. It's not in the standard library, at least it wasn't the last time I looked. And the first thing it did was include a third-party package that Andy had written that has a Brotli encoder in it. And so Sketch go-gets that in the background, in a little container, as you're working,
And I think it's very sensible of Tailscale to not go in that direction.
and has a little go.mod there that it modifies so that as you're editing the code, you get all the code completions from that module, just like you would in a programming environment. And more importantly, we can take that information and feed it into the model as it's working.
If we run the Go build system as part of it, and if a build error appears, we can take the build error, feed it into the model. It's like, here's the error, and we can let it ask questions about the third-party package it included.
which helps with some of the classic problems you see when you ask Claude to write you some Go code, where it includes a package and then makes up a method in there that doesn't exist that you really wish existed, because it would solve your problem. And so this sort of automated tool feedback is doing a lot of the work I have to do manually when I use a frontier model.
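That automated feedback loop can be sketched generically: ask for code, try to build it, and feed any build error back in as a follow-up prompt. The `model` and `build` functions below are stand-ins I've invented; a real version would call an LLM API and shell out to `go build`:

```go
package main

import "fmt"

// repairLoop generates code, tries to build it, and on failure feeds
// the build error back as a follow-up prompt, up to maxTries times.
// model and build are injected so the loop itself stays generic.
func repairLoop(
	model func(prompt string) string,
	build func(code string) (errMsg string, ok bool),
	prompt string, maxTries int,
) (string, bool) {
	code := model(prompt)
	for i := 0; i < maxTries; i++ {
		errMsg, ok := build(code)
		if ok {
			return code, true
		}
		// The automated version of manually telling the model
		// "that method doesn't exist, could you fix it?"
		code = model(prompt + "\nThe build failed with:\n" + errMsg + "\nPlease fix the code.")
	}
	return code, false
}

func main() {
	// Stub model: emits broken code first, and fixed code once the
	// follow-up prompt contains a build error.
	model := func(prompt string) string {
		if len(prompt) > len("write a package") {
			return "package p // fixed"
		}
		return "package p // broken"
	}
	// Stub builder: only the fixed code "compiles".
	build := func(code string) (string, bool) {
		if code == "package p // fixed" {
			return "", true
		}
		return "undefined: pkg.SomeMethod", false
	}
	code, ok := repairLoop(model, build, "write a package", 3)
	fmt.Println(ok, code)
}
```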
And so I'm trying to cut out some of those intermediate steps where I said, that doesn't exist, could you do it this way? Anything like that you can automate saves me time. It means I have to chat less. And so that's the goal: to slightly climb the complexity ladder in the piece of software we get out of a frontier model, and to chat less in the process.
Today, it is almost entirely prompt-driven. There's actually more than one model in use under the hood as we try different things. For example, we use a different model for solving the problem of... If we want to go get a package, what module do we get to do that? Which sounds like a mechanical process, but it actually isn't. There's a couple of steps there. So a model helps out with that.
So what would Tailscale do with LLMs is the question I was asking from a Tailscale perspective. I think Tailscale is extremely useful as a network backplane when you're running LLMs yourself, in particular because of the surprising nature of the network traffic associated with LLMs. On the inference side, that is; you can think about working with models from both a training side and an inference side.
There are very different sorts of prompts you use for trying to come up with the name of a sketch than for answering questions. But at the moment, it's entirely prompt-driven, in the sense that a large context window and a lot of careful context construction can handle this and can improve things. And that can include a lot of tool use.
Tool use is a very fun feature of models. To back up and give you a sense of how the models work: an LLM generates the next token based on all the tokens that come before it. When you're in chat mode, chatting with a model, you can at any point stop and have the model generate the next token, whether that's part of the thing you're asking it or part of its response.
That meta-information about who is talking is built into what is just a stream of tokens. Similarly, you can define a tool that a model can call. You say, here's a function that you can call, and it will have a result. And the model can output a specialized token that says: call this function, with this name, and these parameters.
And then, instead of the model generating the next token, you pause the stream. You, the caller, go and run some code: you run the function call it asked for, paste the result in as the next set of tokens, and then ask the model to generate the token after that. So that technique is a great way to get automated feedback into the model.
So a classic example is a weather function. You define a function, current weather, for the model. Then you can ask the model, hey, what's the weather? And the model can say: call function current weather. Your software that's printing out the tokens pauses, calls current weather, which says sunny. You paste sunny in there.
And then the model generates the next set of tokens, which is the chat response saying, oh, it's currently sunny. And that's the easy way to plug external systems into a model. This is going on under the hood of the user interfaces you use on top of frontier models. So this is happening in ChatGPT and Claude and all these systems. Sometimes they show it to you happening, which is how you know.
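The pause-and-resume loop he describes can be sketched in a few lines. The "model" here is a canned stub, and the `CALL`/`RESULT:` protocol is invented for illustration; real APIs use structured tool-call messages:

```go
package main

import (
	"fmt"
	"strings"
)

// currentWeather is the tool; a real version would call a weather API.
func currentWeather() string { return "sunny" }

// fakeModel stands in for the LLM: if the conversation does not yet
// contain a tool result, it emits the special "call this function"
// output; otherwise it answers in plain text.
func fakeModel(history string) string {
	if strings.Contains(history, "RESULT:") {
		return "It is currently sunny."
	}
	return "CALL current_weather"
}

// chat runs the loop: when the model emits a tool call, the caller
// pauses generation, runs the function, pastes the result into the
// stream, and asks the model to continue.
func chat(question string) string {
	history := question
	for {
		out := fakeModel(history)
		if out == "CALL current_weather" {
			history += "\nRESULT: " + currentWeather()
			continue
		}
		return out
	}
}

func main() {
	fmt.Println(chat("Hey, what's the weather?"))
	// → It is currently sunny.
}
```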
You see it less now, but about six months ago, you could see in the GPT-4 model, you would ask it questions and it would generate Python programs and run them and then use the output of the Python program in its answer. I had a really fun one where I asked it, how many transistors fit on the head of a pin? And it started producing an answer. And it said, well, transistors are about this big.
Pins are about this big. And then a magic little emoji appeared, and it said this means this many transistors fit on the head of a pin, some very large number. And if you click on the emoji, it shows you the Python program it generated to do the arithmetic.
It executed that as a function call, came back with a result, and that saved it the trouble of trying to do the arithmetic itself, which LLMs notoriously struggle with. This is a great thing to outsource to a program.
Yes, they're very good at writing programs to do the arithmetic and very bad at doing the arithmetic. So it's a great compromise. The thing we do with Sketch is try to give the underlying model access to information about the environment it's writing code in using function calls. So a lot of our work is not fine-tuning the model.
It's about letting it ask questions about not just the standard library, but the other libraries it's trying to use, so that it can get better answers. It can look up the Go doc for a method if it thinks it wants to call it, and use that as part of its decision-making process about the code it generates.
So at the beginning, in your system prompt or something like it, depending on the API and exactly how the model works, you say there is a function call, get_method_docs, and it has a parameter, the name of the method.
You can then construct a question to the LLM that says, generate a program that does this, with the system prompt explaining that the tool call is there. And as the LLM is generating that program, it can pause and make a tool call that says, get me the docs for this.
And so the LLM decides that it wants to know something about that method call, and then you go and run a program which gets the documentation for that method from the actual source of truth. You paste it into the prompt, and the LLM continues writing the program, using that documentation as part of its prompt.
These are two sides of the same coin. Training is very, very data-heavy and is usually done on extremely high-bandwidth, low-latency networks, InfiniBand-style setups, on clusters of machines in a single room, or, if they're spread beyond the room, in the building literally next door. The inference side looks very different.
And so this is the model driving the questions about what it wants to know. It just blocks and waits for the answer to come back.
If you step back to, say, running llama.cpp yourself, you can oversimplify one of these models like this: every time you want to generate a token, you hand the entire history of the conversation you've had, or whatever the text before it is, to the GPU to build the state of the model. And then it generates the next token.
It actually generates a probability value for every token in its token set. And then the CPU picks the next token, attaches it to the full set of tokens, and does that whole process again: sending over the entire conversation and generating the next token. So you can think of it as a giant for loop around the outside. Every time there's a new token, the token is chosen from the set of probabilities that comes back, is added to the set, and then a new set of probabilities is generated for the next token.
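That outer for loop can be written down almost literally. The "model" here is a toy probability table standing in for the GPU pass, and the picker is greedy; real samplers add temperature and randomness:

```go
package main

import "fmt"

// nextTokenProbs stands in for the GPU pass: given the whole history,
// it returns a probability for every token in the vocabulary.
func nextTokenProbs(history []string) map[string]float64 {
	// Toy "model": after "the" it strongly prefers "cat".
	if len(history) > 0 && history[len(history)-1] == "the" {
		return map[string]float64{"cat": 0.9, "dog": 0.1}
	}
	return map[string]float64{"the": 0.8, "a": 0.2}
}

// decode is the big outer for loop: hand the full history over, get
// probabilities back, pick a token, append it, repeat.
func decode(history []string, n int) []string {
	for i := 0; i < n; i++ {
		probs := nextTokenProbs(history)
		best, bestP := "", -1.0
		for tok, p := range probs {
			if p > bestP { // greedy pick of the most likely token
				best, bestP = tok, p
			}
		}
		history = append(history, best)
	}
	return history
}

func main() {
	fmt.Println(decode([]string{"the"}, 2))
	// → [the cat the]
}
```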
You can imagine, in the middle of that for loop, some very traditional code that inserts a stack of tokens that wasn't actually decided by the LLM, but that then becomes part of the history the LLM generates the next token from. And that's how those embeds work. You can effectively have the LLM communicate with the outside world in the middle there, with the model driving it. Or you don't even have to have the model drive it: you could have software outside the LLM that looks at the token set as it appears and then inserts more tokens for it. So this is all the fun stuff you can do by running these models yourself. Yeah, I know. It's so fun.
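The outside-driven variant is a small change to the decode loop: ordinary code watches the history and splices in tokens the model never chose. The trigger token and the stub model below are invented for illustration:

```go
package main

import "fmt"

// injectingDecode is a decode loop where plain code outside the model
// watches the history and, on a trigger token, splices in tokens the
// model did not choose. They still become part of the history the
// model conditions on for the next token.
func injectingDecode(model func([]string) string, history []string, n int) []string {
	for i := 0; i < n; i++ {
		history = append(history, model(history))
		if history[len(history)-1] == "WEATHER?" {
			// The model asked; outside code answers with real data.
			history = append(history, "RESULT:", "sunny")
		}
	}
	return history
}

func main() {
	// Toy model: asks about the weather until it sees an answer.
	model := func(h []string) string {
		if h[len(h)-1] == "sunny" {
			return "it-is-sunny"
		}
		return "WEATHER?"
	}
	fmt.Println(injectingDecode(model, []string{"hi"}, 2))
	// → [hi WEATHER? RESULT: sunny it-is-sunny]
}
```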
Yeah, that's a really good question. The best programming language for LLMs today is Python, and I believe that is a historical artifact of the fact that all of the researchers working on generative models work in Python. And so they spend the most time testing it with Python and judging a model's results by Python output.
There was a great example of this in one of the open benchmarks I looked at. And I believe this has all been corrected since then. This is all about a year old. There was a multi-language benchmark that tested how good a model is across multiple languages. And I opened up the source set for it.
There's very little network traffic involved in doing inference on models, in terms of bandwidth. And the layout of the network is surprisingly messy. This is because finding GPUs is tricky even today, despite the fact that this has been a thing for years now. Very tricky. Yeah.
and looked at some of the Go code, because I'm a Go programmer, and it had been machine translated from Python so that all of the variable names in this Go code used underscores instead of camel case. And, you know, the models were getting a certain percentage success rate generating these results.
So Josh actually went through and made these more idiomatic in the Go style, using camel case and putting everything in the right place. And the model gave much better results on this benchmark. And so that's an example of where languages beyond the basic ones that the developers of the models care about are not being paid as much attention as you would like.
And things are getting a lot better there. The models are much more sophisticated. The teams building them are much larger. They care about a larger set of languages. And so I don't think it's all as Python-centric as it used to be. But that is still very much the first and most important of the languages. As for how well Go works, it seems to work pretty well.
Models are good at it by our benchmarks. Like we said, when we took the benchmarks and made them more Go-like, the models actually got better results. They have a real tendency to understand the language. We think it's a pretty good fit. There are definitely times when models struggle, but it's a garbage-collected language, which helps: in just the same way that garbage collection reduces the cognitive load for programmers as they're writing programs, it reduces the load on the LLM.
They don't have to track the state of memory and when to free it, so they have a bit more thinking time to worry about solving your problem. So in that way, it's a good language. It's not too syntax-heavy, but it also doesn't have ambiguities that humans struggle with. Yeah. It seems to work well. I haven't seen much research into what the best language for an LLM is, though it does seem like an eminently testable thing. In fact, it may end up influencing programming language design. Imagine you are building a new programming language
and you develop a training set that's automatically generated based on translating some existing programs into your language, and you train models for it, you could imagine tweaking the syntax of your new language, regenerating the training set, and then seeing if your benchmarks improve or not.
So you can imagine driving the readability of programming languages based on your ability to train an LLM to write them. There's lots of really fun work that will happen long-term that I don't think anyone has started on yet.
Yeah, it's a good question. The techniques we're applying are general, but each one requires a lot of Go-specific implementation. It's much the same with the techniques inside a language server for a programming language, the systems inside VS Code for generating information about programming languages.
The general technique, say, figuring out what methods are available on an object, is very similar in Go and in Java, for example. But the specifics of implementing it for the two languages are radically different. And I think it's a lot like that for Sketch. The tricks we're using for Sketch are very Go-specific.
And if we wanted to build one for Ruby, we would have to build something very, very different. So yes, I consider it very much a Go product right now. And I really like the focus that gives us, because Go is a big enough problem on its own, let alone all of programming.
I feel I should try to explain it, just because it's always worth trying to explain things, though I'm sure you all know this. If you're running a service on a cloud provider that you chose years ago for very good reasons: all the cloud providers are very good at fundamental services, but they all have some subset of GPUs, and they have them available in some places and not others.
I mean, that's a really good question.
Yeah. So I very much admire VS Code. I use it, and I don't actually have to admire a program to use it; using is better than admiring, I think. Yeah, that's right. But actually, I do both. I both admire it and use it. But to look at the inside of VS Code, which I've been doing a bunch of recently: VS Code didn't actually solve language servers for all programming languages.
They built JavaScript and TypeScript, JSON... and I think they maintain the C# plugin. They started the Go plugin, I think, and then it got taken over by the Go team at Google, who now maintain the Go support in VS Code. I don't think the Microsoft team built the Ruby support in VS Code.
I don't know who did the Python implementation, but a lot of the machinery in VS Code is actually community-maintained for these various programming languages. And so I'm not sure there's an option other than imagining a world where each of these communities supports the tooling in some form. I don't know if each programming language needs to go out and build its own Sketch.
Maybe there is some generalizable intermediate layer, some equivalent of a language server that can be written to feed underlying models. We're just starting to explore this space. Sketch is very new; we basically started it near the end of November. So there's not much to it yet. Yeah.
But so far, what we've found is it's far more than the sort of language server environment you get with VS Code. More machinery is needed to really give the LLM all the tooling it needs. The language server is very useful; we actually use the Go language server, gopls, in Sketch. It's a big part of our infrastructure, and it's really wonderful software.
But there's far more to it than that, to the point where we need to maintain an entire Linux VM to support the tooling behind feeding the model. As for what each community needs to provide, figuring that out is the research in progress.
And it's never quite what you're looking for. And if you are deciding to run your own model and do inference on it, you might find your GPU is in a region across the country or it's on a cloud provider that's different than the one you're using. or your cloud provider can do it, but it's twice the price of another one you can get.
I've heard people talk about that. A startup founder, who I won't name, mentioned that they were busy retooling their product so that the foundation models under things like v0 and Bolt would be more likely to npm-install their package to solve a problem.
I was actually really happy that they said their plan was to make it really easy to npm i a package and not require a separate signup flow to actually get started. Oh, that's nice. Yeah, I thought it was wonderful. Their solution to making their product more ChatGPT-able, I guess you might say, is just to make their product better. Yeah.
How avant-garde of them. Yeah. I'm sure one day we'll end up in the search-engine-optimization world of frontier models. But today... There's definitely going to be some black magic for sale: here's how you really do it. Yeah, I don't see why a frontier model couldn't run an ad auction for deciding what fine-tuning set to bring in.
I had, again, to talk about experiences, I was using one of the voice models and talking to it as I was walking down the street. And I asked it some question about WD-40 because I had a squeaky door. And I think I described in my question WD-40 as a lubricant. And it turns out I just didn't understand that it's not a lubricant, it's a solvent. And the purpose of it is to remove grease.
I just had your experience, but it was an LLM that told me. Oh, hilarious. And it mentioned in passing, it's like, yeah, you could also, you know, you could use WD-40 and then use a lubricant like, and then it listed some brand name. The moment I heard the brand name, I was like, oh, I see. A Frontier model could run an ad auction on fine-tuning which brand name to inject there.
And this leads to people ending up far more in sort of multi-cloud environments than they do in sort of traditional software. And so Tailscale actually is very useful there. So for users, I think it's a great fit. But what does the product actually need as like new features to support that? And the answer is, it actually is really great as it is today for that.
And that would be a really... 100%. Yeah. It wouldn't require baking it into the pre-training months ahead of time. You could do that on an hour-by-hour basis. So that world is coming. And then once there's a world of ads, there's a world of SEO and all the rest of it.
Yeah, absolutely. Hard to tell, honestly. Kleenex is an easy one for me because we don't have Kleenex in Australia where I'm from. So I came here and everyone started calling tissues Kleenex. And it was a bit of a surprise to me.
Yeah. Right. Exactly.
I imagine in the YouTube video of this, a little Intel Xeon banner will appear, just as you say.
Yeah, good question, especially for non-Gophers. I would suggest trying out the code completion engines, because they take a little bit of getting used to, but not a lot. And if you're writing the sorts of programs they're good at, they're extremely helpful. They save a lot of typing. And it turns out, I was surprised to learn this, but what I learned from code completion engines is that a lot of my programming is fundamentally typing-limited. There's only so much my hands can do every day. And they're extremely helpful there. The state of code completion engines is that they're pretty good at all languages, with the caveat that they're probably not very good at COBOL or Fortran.
But all the sort of general languages, especially like Ruby, I'd expect them to be decent at. I suspect the world of code completion engines will get better at specific languages as people go deeper on the technology. It's a thing I continue to work on, and so I feel confident that it can be improved.
The other place I think most programmers could get value today, if they're not Go programmers, is writing small, isolated pieces of code in a chat interface. So you could try out ChatGPT or Claude, or, if you really want to have some fun, run a local model and ask it to solve problems. Try llama.cpp, try Ollama, try these various local products.
Grab one of the really fun models. It's especially easy to try on a Mac with unified memory. If you're on a PC, you might have to find a model that fits in your GPU. But it's a ton of fun. And use it to say, write me a Ruby function that takes these parameters and produces this result. And I suspect the model will give you a pretty good result.
So those are the places I would start because those require the least amount of learning how to hold the model correctly and you'll get the most benefit quickly.
Yeah, I think that's right. I mean, we came up with some proposals, but they're not exciting. We'd very much be doing it because corporate headquarters told us to find an angle for AI or something like that. And we as a startup have the option of just not doing that. And so we didn't. So, yeah.
I've seen people write guides like that. I would say the guides I've read are now out of date; like we were saying earlier, guides go out of date. The thing I find most useful is to think of the model I'm talking to as someone who just joined the company. Sometimes I think of them as an intern, though every now and again the models produce much better code than I can.
But interns have done that too. That happens. And then as you're writing the question for it, imagine I'm talking to a smart person who knows nothing about what I'm doing and they need some background. And that gets me really far with the current frontier models. And so that would be my general piece of advice that I think applies to any programming.
I think you might be onto something with being nice to models. I caught myself being pretty curt with models a few months back, and, discussing it a lot with Josh, the conclusion we came to was that one of the challenges of not being nice to models is that it trains you to not be nice to people. Yeah.
You're using all of the same tools. And so it might just be good for you to be nice to models.
I mean, I think you're right. It doesn't help the computer. I say please and thank you to the models now so that I remember to say please and thank you to humans. That's it. It's purely that I don't want to get out of the habit. You're training yourself. Exactly. It's all about training me. That's fair.
That's great. I didn't know we'd be talking about Tailscale at all when I came here today. So we're both basically on the same page. Yeah, there we go.
Maybe what I'll do is I'll put something new above it and I'll make it clearer. I don't want to mislead anyone. Yeah.
Fair enough. I also honestly don't check LinkedIn very often. It's not a big part of my life.
That's totally great. I'm very happy about that.
I mean, I love boring software. And so for me, the fact that you're having a boring experience is perfect. Yeah, man. No surprises. No surprises. Yeah. It's a product that's designed to enable you to do more things, not for you to spend your days having to configure it.
Yeah, if you come up with anything, let me know. I'm very excited about the idea of it. But software has to be, in some sense, true to itself. You have to think about its purpose when you're working on it and not step too far outside that. So I similarly wouldn't build a computer game into Tailscale. I don't think that would be a particularly good fit for the product. It's like an Easter egg.
As an Easter egg, it would be great, actually, like a little quiz game or something built into the terminal.
Right, it can ask questions like, what is the oldest machine on your tailnet, or something like that. That would be a lot of fun, actually.
Yeah, I don't know either. I very much went looking for something I would use features like that for, and I didn't come up with anything. If you do come up with anything, again, I'd be very happy to hear about it.
That's true. I think we did actually email customers once about an out-of-date version where we were concerned about security. I think that has only come up once.
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
Mostly, keeping Tailscale up to date is sort of proactive, good security practice. It has fortunately not been a significant source of issues, in part due to careful design. A lot of engineers work very hard to make it that way.
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
Yeah, it's a great team.
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
So what's really interesting about Brex is that they are an extremely operationally heavy company. And so for them, the quality of the internal tools is so important because you can imagine they have to deal with fraud. They have to deal with underwriting. They have to deal with so many problems, basically. They have a giant team internally, basically just using internal tools day in and day out.
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
Yeah. So that's, there's like an interesting sort of meta question there about LLMs around how many models there should be in the world from a sort of a consumer perspective, in a sense, because that's almost like, you know, you're just consuming it like where, and this sounds very similar to like the question of,
The Changelog: Software Development, Open Source
Programming with LLMs (Interview)
How does search work on websites, which you could have asked 10 years ago or 20 years ago? Do I use the Wikipedia search or do I go to Google and type in my search and maybe put the word wiki at the end to bring the Wikipedia links to the top? Both of these are valid strategies for searching Wikipedia. Yeah.
The Changelog: Software Development, Open Source
Build software that lasts! (Interview)
And when they first started, we were in the same YC batch, actually. We were both at Winter 17. And they were, yeah, I think maybe customer number five or something like that for us. I think DoorDash was a little bit before them, but they were pretty early.
The Changelog: Software Development, Open Source
Build software that lasts! (Interview)
And the problem they had was they had so many internal tools they needed to go and build, but not enough time or engineers to go build all of them. And even if they did have the time or engineers, they wanted their engineers focused on building external facing software, because that is what would drive the business forward. The Brex mobile app, for example, is awesome.
The Changelog: Software Development, Open Source
Build software that lasts! (Interview)
The Brex website, for example, is awesome. The Brex expense flow, all really, you know, really great external facing software. So they wanted their engineers focused on that as opposed to building internal CRUD UIs. And so that's why they came to us. And it was awesome. Honestly, a wonderful partnership. It has been for seven, eight years now.
The Changelog: Software Development, Open Source
Build software that lasts! (Interview)
Today, I think Brex has probably around a thousand Retool apps they use in production, I want to say every week, which is awesome. And their whole business effectively runs now on Retool. And we are so, so privileged to be a part of their journey. And to me, I think what's really cool about all this is that we've managed to allow them to move so fast.
The Changelog: Software Development, Open Source
Build software that lasts! (Interview)
So whether it's launching new product lines, whether it's responding to customers faster, whatever it is, if they need an app for that, they can get an app for it in a day, which is a lot better than, you know, in six months or a year, for example, having to schlep through spreadsheets, et cetera. So I'm really, really proud of our partnership with Brex.
The Changelog: Software Development, Open Source
Over the top auth strategies (Friends)
Never invited back, Dan. Thank you very much.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
Yeah. So the primary reason someone uses Retool is typically they are a backend engineer who's looking to build some sort of internal tool and it involves the front end. And backend engineers typically don't care too much for the front end. They might not know React, Redux all that well. And they say, hey, I just want a simple button, simple form on top of my database or API. Why is it so hard?
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
And so that's kind of the core concept behind Retool: front end web development has gotten so difficult in the past 5, 10, 20 years. It's so complicated today to put together a simple form with a submit button that has to submit to an API.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
You have to worry, for example, about, oh, you know, when you press the submit button, you've got to debounce it, or you've got to disable it when, you know, isFetching is true. And then when it comes back, you've got to enable the button again. When there's an error, you've got to display the error message. There's so much crap now with building a simple form like that. And Retool takes that all away.
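The button-state dance being described can be sketched in a few lines of plain JavaScript. This is only an illustration of the boilerplate the speaker is complaining about; the names (createSubmitHandler, postToApi) are hypothetical, not Retool's API:

```javascript
// Sketch of the submit-button bookkeeping described above:
// disable while a request is in flight, surface errors, re-enable afterward.
// All names here are illustrative.
function createSubmitHandler(postToApi) {
  const state = { isFetching: false, error: null };
  return {
    state,
    async submit(formData) {
      if (state.isFetching) return; // button is effectively "disabled" mid-flight
      state.isFetching = true;
      state.error = null;
      try {
        await postToApi(formData); // hit the API
      } catch (err) {
        state.error = String(err); // display this error message in the UI
      } finally {
        state.isFetching = false; // re-enable the button
      }
    },
  };
}
```

Even this stripped-down version omits debouncing, validation, and retry logic — the point being that none of it is specific to any one business.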
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
And so really, I think the core reason why someone would use Retool is they just don't want to build any more internal tools. They want to save some time.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
Yeah, that is exactly right. The way we think about it is we want to abstract away things that a developer should not need to focus on, such that the developer can focus on what is truly specific or unique to their business. And so the vision of what we want to build is something like an AWS, actually, where I think AWS really fundamentally transformed the infrastructure layer.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
Back in the day, developers spent all their time thinking about how do I go rack servers? How do I go manage cooling, manage power supplies? How do I upgrade my database without it going down? How do I change out the hard drive while still being online? All these problems.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
And they're not problems anymore, because nowadays, when you want to upgrade your database, just go to RDS, press a few buttons. And so what AWS did to the infrastructure layer is what we want to do to the application layer specifically on the front end today.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
And for me, that's pretty exciting because as a developer myself, I'm not really honestly that interested, for example, in managing infrastructure in a nuts and bolts way. I would much rather be like, hey, you know, I want an S3 bucket. Boom, there's an S3 bucket. I want a database. Boom, there's a database.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
And similarly, on the front end or in the application layer, there is so much crap people have to do today when it comes to building a simple CRUD application. It's like, you know, you probably have to install 10, 15, maybe even 20 different libraries. You probably don't know what most of those libraries do. It's really complicated to build a simple form.
The Changelog: Software Development, Open Source
State of the "log" 2024 (Friends)
You know, you're probably downloading almost like a megabyte or two of JavaScript. It's so much crap to build a simple form. And so that's kind of the idea behind Retool is could it be a lot simpler? Could we just make it so much faster? Could you go from nothing to a form on top of your database or API in two minutes? Well, we think so.