Gerhard Lazu joins us for Kaizen 16! Our Pipe Dream™️ is becoming a reality, our custom feeds are shipping, our deploys are rolling out faster & our tooling is getting `just` right.
Welcome to Changelog & Friends, a weekly talk show about the perfect name. Thanks to our partners at Fly.io. Over 3 million apps have launched on Fly, including ours. You can too in five minutes or less. Learn how at Fly.io. Okay, let's Kaizen.
What's up, friends? I'm here with a new friend of ours over at AssemblyAI, founder and CEO Dylan Fox. Dylan, tell me about Universal-1. This is the newest, most powerful speech AI model to date. You released this recently. Tell me more.
So Universal-1 is our flagship, industry-leading model for speech-to-text and various other speech understanding tasks. It was about a year-long effort that really is the culmination of the years we've spent building infrastructure and tooling at Assembly to even train large-scale speech AI models.
It was trained on about 12 and a half million hours of voice data: multilingual, super wide range of domains and sources of audio data. So it's a super robust model.
We're seeing developers use it for extremely high accuracy, low cost, super fast speech to text and speech understanding tasks within their products, within automations, within workflows that they're building at their companies or within their products.
Very cool. So Dylan, one thing I love is this playground you have. You can go there, assemblyai.com slash playground, and you can just play around with all the things that AssemblyAI does. Is this the recommended path? Is this the try-before-you-buy experience? What can people do?
Yeah, so our playground is a GUI experience over the API that's free. You can just go to it on our website, assemblyai.com/playground. You drop in an audio file, you can talk to the playground, and it's a way to interact with our models and our API in a no-code environment, to see what our models and our API can do without having to write any code. Then once you see what the models can do and you're ready to start building with the API, you can quickly transition to the API docs.
Start writing code, start integrating our SDKs into your code to start leveraging our models and all our tech via our SDKs instead.
Okay. Constantly updated speech AI models at your fingertips. Well, at your API fingertips, that is. A good next step is to go to their playground. You can test out their models for free right there in the browser. Or you can get started with a $50 credit at assemblyai.com slash practical AI. Again, that's assemblyai.com slash practical AI.
Kaizen 16, Gerhard, what have you prepared for us this Kaizen? I think every time, I don't know what to expect. And this time, I do know what to expect. So what changed? What's new? What's fresh?
Well, I shared the slideshow. Okay. I mentioned last episode, I have a slideshow with my talking points, a couple of screenshots, things like that. This time I shared it ahead of time, and I prepared ahead of time as well. But also I've been making small updates to the discussion, I think more regularly than I normally do. Discussion 520 on GitHub.
I mean, we always have one for every Kaizen, but this time I just, you know, went a little bit further with it. And I think it will work well. Let's see.
All right. Well, take us on this wild ride. Adam's also here. Adam. What's up?
Hey, Adam. Everything's up. Whenever someone asks me that, everything's up. That's the SRE answer. Everything's up. Everything is up. Otherwise, I'm not here. If something's down, I'm not here.
You know it's up because Gerhard's here.
Yep, so everything's up. I like that. Well, last Kaizen, we talked towards the end about the pipe dream. Oh, yeah. That was the grand finale. So maybe this time around, we start with that. We start with a pipe dream. We start with what is new. Start where we left off. Exactly. Love it. So we mentioned that, or at least you mentioned, Gerhard, that, was it Adam? Can't remember.
Anyways, we will clarify this after I mention what I have to say. Wouldn't it be nice if we had a repository for the pipe dream self-contained separate from the application? Whose idea was it?
I think it was both of ours. Adam said, can this be its own product or something? And I said, well, it could at least be its own repo, something like that.
That's right. So github.com forward slash thechangelog forward slash pipedream is a thing. It even has a first PR; that was adding dynamic backends. And we put it close to the origin, a couple of things, so you can go and check it out: PR 1. And what do you think about it? Is the repo what you thought it would be?
Well, for those who didn't listen to Kaizen 15, can you tell us what the pipe dream is?
Well, I think the person whose idea it was should do that. However, I can start. So the idea of the pipe dream was to try and build our own CDN, how we would do it. Single purpose, single tenant, running on Fly.io. It's running Varnish Cache, the open source variant. And we just needed, like, the simplest CDN, which is, I think, less than 10% of what our current CDN provides.
And the rest is just most of the time in the way. And it complicates things and it makes things a bit more difficult for the simple tasks. How the idea started, I would only quote you again, Jared. Would you like me to quote you again? That was Kaizen 15.
So many quotes. Sure, let's hear it. I like hearing what I have to say.
I like the idea of having this 20 line varnish config that we deploy around the world. And it's like, look at our CDN guys. It's so simple, and it can do exactly what we want it to do and nothing more. But understand that that's a pipe dream. That's where the name came from.
Because the Varnish config will be slightly longer than 20 lines, and we'd run into all sorts of issues that we end up sinking all kinds of time into. Jared Santo, March 29th, 2024. Changelog & Friends, episode 38.
Okay. So there you go. What's funny is, you know how when you're shopping for a car and you look at a specific car, maybe you buy a specific car and then you see that same car and color everywhere. After this, I have realized not just hearing the word pipe dream or maybe the words, if we can debate, is it two words or one? But I actually realized I say that a lot.
I call lots of things pipe dreams and I didn't realize it until you formalized it. And now I'm like self-conscious about calling stuff pipe dreams. I think I did it on a show just the other day. I was like, dang it. Cause now it's a proper noun. And I feel like it's a reserved word, you know? It's almost a product. Yeah, it's almost a product.
If you could package up and sell 20 lines of Varnish, we would do it. But if you can't, we would at least open source it and let the world look at what we did. So it has its own repo and it has its own pull request. So, you know, it's going to be a real boy. Does it work? Does it do stuff?
I mean, I know you demoed it last time and it was doing things, but does it do more than it did before or is it the same?
Yeah, I mean, the initial commit of the repo was basically me extracting what would have become a pull request to the changelog repo. That was the initial commit, and we ended up with 46 lines of Varnish config. Then pull request one, which added dynamic backends, and it does something interesting with a cache status header. We end up with 60 lines of Varnish config. Why dynamic backends?
That was an important one because whenever there's a new application deployment, you can't have static backends. The IP will change. Therefore, you need to use the DNS to resolve whatever the domain is pointing to. So that's what the first pull request was. And that's what we did in the second iteration. Now, I captured what I think is a roadmap. It's in the repo.
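For a concrete picture of what dynamic backends means here: instead of pinning the origin's IP into the config, the CDN re-resolves the origin's DNS name, so a fresh deploy with new machine IPs keeps working. The real implementation is Varnish VCL in the pipedream repo; this is just a minimal Python sketch of the idea, and the hostname is made up.

```python
import socket

# Hypothetical origin hostname; the real one lives in the Varnish config.
ORIGIN_HOST = "changelog-app.example.internal"
ORIGIN_PORT = 443

def resolve_backends(host: str, port: int) -> list[str]:
    """Re-resolve the origin at refresh time so new deploys (new IPs) are picked up."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Each getaddrinfo entry ends with a sockaddr tuple whose first element is the IP.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # A static backend would have pinned one IP forever; this picks up whatever DNS says now.
    print(resolve_backends(ORIGIN_HOST, ORIGIN_PORT))
```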
And I was going to ask you, what do you think about the idea in terms of what's coming? So the next step would be to add the feeds backend. Why? Because feeds, we are publishing them to Cloudflare R2. So we would need to proxy to that, basically cache those. I think that would be like a good next step.
Then I'm thinking we should figure out how to send the logs to Honeycomb exactly the same as we currently send them, so that, you know, same structure, same dashboard, same queries, same SLOs; everything that we have configured in Honeycomb would work exactly the same with the new logs from this new CDN. Then we need to implement the purging across all instances.
I think that's slightly harder because as we deploy the CDN in like 16 regions, 16 locations, we would need to expire, right? Like when there's an update. So that I think is slightly harder, but not crazy difficult. And then we would need to import all the current edge redirects from our current CDN into the pipe dream. And I think with that, we could try running it in production, I think.
Good roadmap. I dig it. So our logs currently go to S3, not to Honeycomb, in terms of logs that we care about. And I know that I previously said we only care about our MP3 logs, not our feed logs, in the sense of statistics and whatnot, but that has since changed. I am now downloading, parsing, and tracking feed requests just like I do MP3 requests.
And so we would either have to pull that back out of Honeycomb, which maybe that's the answer, or somehow have it also write to where S3 is currently writing to in the current format for us to not have major rewriting on the app side. Thoughts on that?
So we can still keep S3, whatever intercepts the logs, right? Because in our current CDN, obviously the CDN intercepts all the logs. And then some of those logs, they get sent to S3 indeed. But then all the logs, they get sent to Honeycomb. So you're right, I forgot about the S3 part.
So on top of sending everything to Honeycomb, we would also need to send a subset to S3 exactly as the current config. So yes, that's an extra item that's missing on that roadmap indeed.
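A rough sketch of what that log fan-out could look like, just to make the shape concrete: every request event goes to Honeycomb, and a subset gets mirrored to S3 in the current format. This is illustrative Python using libhoney and boto3, not the actual pipeline; the dataset, bucket, key layout, and field names are all assumptions.

```python
import json

import boto3     # S3 client, used here only for illustration
import libhoney  # Honeycomb events client, used here only for illustration

libhoney.init(writekey="HONEYCOMB_WRITE_KEY", dataset="cdn-logs")  # placeholder credentials
s3 = boto3.client("s3")

def ship_log(record: dict) -> None:
    """Send every request log to Honeycomb; mirror the subset we parse for stats to S3."""
    event = libhoney.new_event()
    event.add(record)  # keep the same fields/structure as the current CDN logs
    event.send()

    # Only the requests the stats pipeline cares about (mp3s and feeds) need to land in S3.
    if record.get("path", "").endswith((".mp3", ".xml")):
        s3.put_object(
            Bucket="changelog-cdn-logs",             # hypothetical bucket name
            Key=f"cdn/{record['request_id']}.json",  # hypothetical key layout
            Body=json.dumps(record).encode(),
        )
```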
Mm-hmm.
Do you know how you're going to implement Purge across all app instances? Like what's the strategy for that? No idea.
No idea currently. I mean, based on our architecture and what we have running, so that we avoid introducing something new, a new component, a new service that does this, we could potentially do it as a job using Oban, I think. Because at the end of the day, it's just hitting some HTTP endpoints, and it just needs to present a key, right?
If we don't use a key, anyone can expire our cache, which is the default in some CDNs. Yeah, it is. Yeah, we found that out the hard way. Exactly. So that's something that we need. I think an Oban job would make the most sense. It's actually pretty straightforward.
We already have a Fastly purge function in our app that goes and does a thing, and then we just change this to go and do a background job that resets all these different instances. Now, there has to be some sort of orchestration, like the instances have to be known. Maybe that's just like a call to Fly or something, or I don't know how. DNS. Okay. DNS-based.
Yeah. We can get that information by doing a DNS query and it tells us all instances and then we can get all the URLs.
Yeah. That sounds like a straightforward way of doing it.
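As a sketch of how that fan-out might look, assuming Fly's internal DNS (where `<app>.internal` returns one AAAA record per running instance) and a PURGE endpoint protected by a shared key in the Varnish config. The app name, port, and header name here are placeholders, and in the real app this would be an Oban job in Elixir rather than Python.

```python
import dns.resolver  # dnspython
import requests

APP = "pipedream"            # hypothetical Fly app name
PURGE_PORT = 8080            # assumption: the port Varnish listens on internally
PURGE_KEY = "shared-secret"  # whatever key the Varnish config expects

def purge_everywhere(path: str) -> None:
    """Fan a PURGE out to every running instance, discovered via Fly's internal DNS."""
    answers = dns.resolver.resolve(f"{APP}.internal", "AAAA")  # one record per instance
    for record in answers:
        url = f"http://[{record}]:{PURGE_PORT}{path}"
        resp = requests.request("PURGE", url, headers={"X-Purge-Key": PURGE_KEY}, timeout=5)
        resp.raise_for_status()

# e.g. called from a background job after an episode is edited:
# purge_everywhere("/podcast/feed.xml")
```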
Where's the, where's the data being stored?
We upload. Currently? Yeah.
In Pipedream.
Pipedream is just a cache. So you mean where's the cache data being stored?
Okay. So Pipedream is just, what exactly does Pipedream do?
So Pipedream is our own CDN, which caches requests going to backends. So imagine that there's a request that needs to hit the app and then the app needs to respond. So the first time, like let's say the home page, once the app does that, subsequent requests, they no longer need to go to the app. Pipedream can just serve because it already has that request cached.
And then because Pipedream is distributed across the whole world, it can serve from the closest location to the user. To the person. Exactly. And same would be true, for example, for feeds, even though they are stored in Cloudflare R2. The PipeDream instance now goes to Cloudflare R2, gets the feed, and then serves the feed.
Gotcha. And so Varnish is storing that cache locally on each instance. Correct. In its local disk storage, or however Varnish does what it does.
So by default, we're using memory, but using a static backend, like a disk backend, would be possible, yes.
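Stripped to its essence, that's the whole job: a keyed in-memory cache in front of the origin, reporting hit or miss. A toy sketch of the logic follows; Varnish obviously does this far more carefully, and the TTL here is arbitrary.

```python
import time
import requests

CACHE: dict[str, tuple[float, bytes]] = {}  # in memory, like Varnish's default malloc storage
TTL_SECONDS = 60                            # illustrative TTL, not the real policy

def fetch(url: str) -> tuple[bytes, str]:
    """Serve from the local cache while fresh; otherwise hit the origin and remember it."""
    now = time.time()
    cached = CACHE.get(url)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1], "HIT"
    resp = requests.get(url, timeout=10)
    CACHE[url] = (now, resp.content)
    return resp.content, "MISS"  # a cache-status response header would report this

body, status = fetch("https://changelog.com/")  # first call is a MISS, repeats are HITs until the TTL expires
```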
I was just thinking about expiring, because we just did this yesterday where we had to correct a deployed slash published episode. And we ran into a scenario where Fastly was caching, obviously, because it's the CDN. And then I went into the Fastly service and purged that URL. And then it wasn't doing what we expected. And I bailed on it and handed it to Jared.
And Jared checked into R2 and R2 was also caching. And so we essentially had this scenario where our application was not telling the CDN that this content is new, expire the old, purge, et cetera. And I just wonder, in most cases, aside from the application generating new feeds, which happens usually at the action of a user, so me, Jared, somebody else publishes an episode or republishes,
Couldn't the expiry command, so to speak, come from that action and inform the CDN?
Yeah, exactly. Which is how it works right now with Fastly. Like after you edit an episode, we tell Fastly to purge that episode. The problem we had yesterday is that Fastly purged it, but then Cloudflare also had a small cache on it. And so Fastly would go get the old version again and say, okay, now I'm fresh. And so we had two layers of cache that we didn't realize.
And so that's probably fixed now, but yes, it would be basically everywhere in our app that we call fastly.purge, we would just replace that with pipedream.purge or whatever, which would be an Oban process that goes out to all the app instances.
I see. So the question was mechanically how to actually purge the cache, not so much when.
Yeah, because we already have when pretty much figured out. Gotcha. Which is pretty straightforward, really, because when we publish and we edit or delete, those are the times that you purge the cache. Otherwise... What's the point?
Yeah. Otherwise you don't do it. Please don't. It doesn't make any sense. Change hasn't happened, so don't change. Okay. How plausible is this pipe dream? Should we rename it to something else because it's not a pipe dream anymore or less of a pipe dream? Yeah. Obviously, I'm not suggesting that naturally, but like it becomes real. Does it become an oxymoron when it becomes real?
I don't know. I quite like the name, to be honest. I think it has a great story behind it, you know? So it just goes back to the origin.
And the CDN is a pipe, right? I mean, it is a pipe. Yeah, exactly.
Yeah. Yeah, I like that pipe idea. That was like one of the follow-up questions. Do we keep a space or introduce a space? Or no space? That's a really important decision. Space or no space? What about a tab? Should we put a tab in there? We can.
Camel case, no space, space...
What do listeners think? I mean, you've been hearing this story for a while and you've heard us think. I think we should have a poll. And that's how, you know, I know that's how we end with names like Boaty McBoatface. We're very aware of that. This is not that. We're just asking, like, how do we, what would be the way to spell it that would make most sense?
Pipe dream one word, pipe space dream, pipe tab dream. I'm not sure about that. I think we can do one, like, just for fun, or camel case indeed.
I'm leaning towards one word. The Merriam-Webster dictionary and the Cambridge dictionary both say that it's two words.
I'm seeing it two words everywhere. Yeah. Except for old English.
Yeah.
Whereas pipedream? All one word.
I'm leaning towards one word, though. Just, like, okay, just pipedream, one word. Okay. And I'm leaning in the other direction, so we need a poll. Great. Well, the repo name is already, like, lowercase pipedream, no spaces, no nothing, no dashes, nothing like that. So, you know, I think it would make sense. So yeah. All right, we'll run a poll. See what people think. See what people want.
Give the people what they want. Correct. And when it comes to, when we do switch it into production, whenever that happens, I think we could maybe discuss again, whether we rename it, when it stops being a pipe dream for real. For now, it's still like a repo. It's still a config. It runs. I mean, if you go to pipedream.changelog.com, it does its thing.
But it's not fully hooked up with everything else that we need. I have a new name.
Pipe reality.
Pipe reality.
Just let it marinate. Not now. Not yet. Pipe media?
Pipe media? I don't know. Pipelog? Pipelog. Oh, here's a better one. Change pipe. Pipely. Pipely. That one really hurts. I think that's the winner. I think that's the winner. Quick, buy the domain before someone else buys it. Pipe.ly. Oh, yes. That one's almost too good. Almost.
Yeah. Is this really where we're marching towards? I know this began as literally a pipe dream, and it's becoming more real. You've had some sessions. Maybe I'm jumping the gun a little bit on your presentation here, but you've podcasted about this slash live-demoed this, and we've been talking about the name. We've been talking about the roadmap.
Is this really a true possibility to do this successfully?
Well, based on the journey so far, I would say yes. I mean, it would definitely put us in control of the CDN too. A CDN is really important for us. So it's even more important than a database because we're not heavy database users and we'll get to that in this episode, I'm sure. So a CDN really is the bread and butter. Now, we need something really simple.
We need something that we understand inside out. We need something that I would say is part of our DNA because we're tech focused and we have some great partnerships and we've been on this journey for a while. You know, it's not something that one day we woke up and we said, let's do this. So this has been in the making for a while. We were almost forced. In a way, yes.
I would say encouraged, you know, in a way, like we're pushed in this direction. There are other options. Yeah. But I think there is like this like natural progression towards this. And it doesn't mean that we'll see it all the way through. But I would say that we are well on our way to the point that I can almost see the finish line. I mean, even the roadmap, right?
Putting the roadmap down on paper, it made me realize actually the steps aren't that big and we could take them comfortably between Kaizens. And I don't want to say by Christmas, but wouldn't it be a nice gift, a Christmas gift? What do you think?
I mean, I think that's a bold roadmap. Let me add this to the roadmap or maybe I'm not seeing it in the repo and it's there. Test harness. Is there a test harness?
No, there isn't a test harness now.
I would love to be able to develop against this with confidence, especially once we start adding those edge redirects and different things. I would love to have that as part of the roadmap so that I can fire it up and create an issue.
I would love that. Okay. Yeah, go for it. Cool. Open source for the win. Cool.
So I'm going to open source the issue and then you open source the code. Amazing. I love that. Just making sure you didn't say PR is welcome and you're moving on. Cool.
Yeah. Can we revisit the idea of this being a product? Single tenant, single purpose, simple seems like a replicated problem set.
Honestly, I think so. Honestly, I can definitely see this being part of Fly.io.
Well, there's this name which we cannot name in regards to Fly. It's more of a class of people, I would say, is probably that. I'll be even more vague. Sorry, listeners. That's so vague that I don't even know what you're talking about. There is some information. I'm not sure how much we can share. But then there's, like, Tigris, that has led the way in a lot of ways.
And I just talked to Ovais because, by the way, they may even be sponsoring this episode. Fly is not only a partner, but also a sponsor of our content. And I had a conversation with Ovais, who is one of the co-founders of Tigris.
And he shared with me that if it weren't for Fly, it would have taken them years to build out all of the literal machines across the world with the NVMe drives necessary to be as fast, to be what Tigris has promised. And I don't want to spoil it for everybody, but Tigris basically is...
an up-and-coming S3. And because of the way that Fly networks, and because of the way that Fly handles machines across the world, and the entire platform that Fly is, very developer focused, Tigris was able, I think within nine months, to stand up Tigris. And so you can deploy Tigris via a single command in the Fly CLI, and then you can also have all of your billing handled inside there. This is not an ad.
I'm just describing it. But when I said that back in the day, I was thinking about Tigris because I had first learned about them and knew about this story, and I knew they were built on Fly. I knew their story was only possible because of what Fly has done. And I think that this pipe dream is realized or capable of being realized because of fly being what fly is.
And I feel like we have this simple nature. Sort of the... I said really simple CDN, but I'm not tied to that, because RSS is, you know, kind of the really simple part of it. But I think that's kind of what it is. It's like, I feel like other people will have this, and it can certainly live in this world of Fly. Yeah.
I don't know. There's a possibility there. I think we build it for ourselves and then we'll know more.
Are you thinking make it private? The repo? It's still not too late.
Are you going to rug pull these people before there's a rug down?
Well, yeah, no one's using it, so... Yeah, private rug. It has 60 lines of varnish. I think we're getting ahead of ourselves, right? I think so. But once we start adding the test harness, once we start adding the purging, which, by the way, is specific to our app, but maybe...
That would need to be generic, by the way. So if this was to be a product, we would need to have a generic way of purging, doesn't matter what your app is. So there's a couple of things that we need to implement to make this a product, and in that case it would be in this repo, I think. But, um, it could also be like a hosted service, like Tigris is, maybe. Especially if you get the cool domain. Why not?
I can see that. And this can be our playground, like the pipe dream can be our playground. But then the real thing with all the bells and whistles could be private.
Yeah, I think we build Pipe Dream in the open, and then if we decide that there's a possibility there, then you genericize it in a separate effort.
The one thing which I do want to mention is that there's a few people that helped contribute. So I would like to, this is also time for shout outs. Of course. To Matt Johnson, one of our listeners. Shout out to Matt. And also James A. Rosen, he was there from the beginning. The first recording that we did, that's already live. The second one as well that we recorded, I haven't published it yet.
I still have to edit it. But that was like basically the second pull request that we got together. And even though a bunch of work, you know, went obviously in the background before we got together, when we did get together, it was basically putting all the pieces, you know, so we did like in this very open source group spirit. And yeah, so there's that.
So I think keeping that true to open source would be important. And if not, then we would need to make the decision soon enough so we know which direction to take. But you're right, rug pulls, not a fan at all. We should never do that. And even the fact that we're discussing so openly about this, I welcome that. I think it's amazing, this transparency.
So that we're always straight from the beginning about what we're thinking, so that no one feels that they were misled in any way. Agreed. Agreed. I like it. Well, the last thing that I would like to mention on this topic before I'll be ready to move on is that we livestreamed the CDN journey, a Changelog livestream with Peter Bandugo. There'll be a link in the show notes.
We got together and we talked about where we started, you know, how we got to the idea of the pipe dream and where we think of going. So if you haven't watched that yet, it'd be worth it. There was a slideshow. Not as good as the last one, the last Kaizen, but it was... I'm happy with it. Let me put it that way. Awesome. Cool. We'll link that up.
Okay, friends, here are the top 10 launches from Supabase's launch week number 12. Read all the details about this launch at supabase.com slash launch week. Okay, here we go. Number 10, Snaplet is now open source. The company Snaplet is shutting down, but their source code is open.
They're releasing three tools under the MIT license for copying data, seeding databases, and taking database snapshots. Number nine, you can use pg_replicate to copy data, full table copies, and CDC from Postgres to any other data system. Today it supports BigQuery, DuckDB, and MotherDuck, with more sinks to be added in the future.
Number eight, vec2pg, a new CLI utility for migrating data from vector databases to Supabase, or any Postgres instance with pgvector. You can use it today with Pinecone and Qdrant. More will be added in the future. Number seven, the official Supabase extension for VS Code and GitHub Copilot is here. And it's here to make your development with Supabase and VS Code even more delightful.
Number six, official Python support is here. As Supabase has grown, the AI and ML community have just blown up Supabase. And many of these folks are Pythonistas. So Python support expands. Number five, they released log drains, so you can export logs generated by your Supabase products to external destinations like Datadog or custom endpoints.
Number four, authorization for real-time broadcast and presence is now public beta. You can now convert a real-time channel into an authorized channel using RLS policies in two steps. Number three, bring your own Auth0, Cognito, or Firebase.
This is actually a few different announcements, support for third-party auth providers, phone-based multi-factor authentication, that's SMS and WhatsApp, and new auth hooks for SMS and email. Number two, build Postgres wrappers with Wasm. They released support for Wasm, WebAssembly, Foreign Data Wrapper. With this feature, anyone can create an FDW and share it with the Supabase community.
You can build Postgres interfaces to anything on the internet. And number one, Postgres.new. Yes, Postgres.new is an in-browser Postgres with an AI interface. With Postgres.new, you can instantly spin up an unlimited number of Postgres databases that run directly in your browser and soon deploy them to S3. Okay, one more thing. There is now an entire book written about Supabase.
David Lorenz spent a year working on this book, and it's awesome. Level up your Supabase skills, support David, and purchase the book. Links are in the show notes. That's it. Supabase launch week number 12 was massive. So much to cover. I hope you enjoyed it.
Go to supabase.com slash launch week to get all the details on this launch, or go to supabase.com slash changelogpod for one month of Supabase Pro for free. That's S-U-P-A-B-A-S-E dot com slash changelogpod. What's next?
Custom feeds. That's one of your topics, Jared. Custom feeds. So tell me about it. I don't know what it is. I know what it is, but I don't know what exactly about custom feeds you want to dig into.
So custom feeds is a feature of changelog.com that we've wanted to build for a long time. Probably not quite as long as we waited on chapters, but we've been waiting mostly because I had a false assumption, or maybe a more complicated idea in mind. We wanted to allow our Plus Plus members to build their own feeds for a long time.
The main reason we want to allow this is because we advertise Changelog++ as being better. Don't we, Adam? Yeah, it is better. It's supposed to be better.
However, people that sign up and maybe only listen to one or two of our shows, whereas they previously would subscribe publicly to JS Party, for instance, and maybe Ship It, they now have to get the Plus Plus feed, which was, because of Supercast, all of our episodes in one ad-free master feed.
And so for some people, that was a downgrade because they're like, wait a second, I want the Plus Plus versions, but I also don't want all your other shows, to which we were quite offended, but we understand. And that's been the number one request. I would call it a complaint, but actually our supporters have been very gracious with us.
They ask for it, but they say it's not a big deal, but it would be nice. In fact, some people sign up for Plus Plus and continue to consume the public feeds because that's what they want to do. But we wanted to provide a solution for that for a very long time. And because it was plus plus only, I had it in terms of like blockers.
I had this big blocker in front of it, which was we need to get off Supercast first. Because that's the reason why it's a problem is because Supercast works this way, which is our membership system that's built all for podcasters. And it's served us very well, but it has some technical limitations such as this one. So moving off Supercast is a big lift.
And not one that I have made the jump yet because there's just other things to do and it works pretty well and lots of reasons. And so I didn't do custom feeds for a long time thinking, well, we got to get off of Supercast first. And then one day it hit me. Why? Why do we have to get off of Supercast? Can't we limp into this somehow?
Can't we just find out a way of doing it without getting off of Supercast? And the answer is actually pretty simple. It's like, well, all we need to know is are you a Plus Plus member or not locally to our system, which lives in Supercast? And then I remembered, well, Supercast is just using Stripe on the back end, and it's our Stripe account. And that's awesome, by the way.
They give us direct access to our people and no lock-in and stuff. And so kudos to them for that. And so I was like, no, all we actually have to know is, do you have a membership? And all the membership data is over in Stripe. And so it's simply a Stripe integration away from having membership information.
here in changelog.com. So I built that. Worked just fine. And then I realized, okay, now I can just build custom feeds and just allow it to people who are members. And so we built out custom feeds, and it's pretty cool. Have you used them, Gerhard? Have you built a custom feed? No, I still consume the master feed, the master Plus Plus feed with everything. Master Plus Plus feed, yeah. Okay, that's fair. But do you know what I would love to do?
To build one now. Oh, you would? Yeah. Live on the air. Let's see what happens if we do that. So changelog.com. How do I do that? Like run me through that, Jared. I sign in.
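Behind that sign-in, the Plus Plus check boils down to one Stripe question: does this customer have an active subscription? The production integration lives in the Elixir app; this is just a minimal Python sketch of the idea, with placeholder identifiers.

```python
import stripe

stripe.api_key = "sk_live_..."  # the existing Stripe account that Supercast also uses

def is_plus_plus_member(customer_id: str) -> bool:
    """A customer with at least one active subscription counts as a ++ member."""
    subs = stripe.Subscription.list(customer=customer_id, status="active", limit=1)
    return len(subs.data) > 0
```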
Are you a plus plus member? Of course you are because you have the plus plus feed.
Yeah.
Okay. So sign in changelog.com. Yep. And go to your home directory, the tilde. Yes, I'm there. And there you should see a section that says custom feeds. I do see it. Okay. Click on that sucker.
Get started. Okay. New feed.
All right. There you go. Add a feed. Now you're going to give it a name that's required, you know, call it Gerhard's feed.
Yes, sure.
Gerhard's feed. You can write your own tagline and that'll show up in your podcast app. Okay. You can be like, it's better.
Hang on. I'm still on the tagline. Jared made me do this. Okay. Okay. Moving on.
Then you get to pick your own cover art because, hey, you may be making a single show feed. Maybe you're making all the shows. You can pick the plus plus one. You can pick a single show. Pick your cover art or you can upload your own file. You get to pick a title format. So this is how the actual episode titles come in to your podcast app.
Maybe you want to say, like, the podcast name, colon, the title of the episode. Maybe you just want episode titles, you know; put a format in there. And then you can limit your feed to start on a specific date. Some people want, like, fresh cuts between the old days and the new days. And so they want to start it on this date, because it doesn't mess up their marked-as-read or whatever.
September 13th, start today. It'll start today. It's going to be empty. And then pick the podcast you want.
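Putting those form fields together, a custom feed is essentially a small config record plus two rules: format the episode title, and decide whether an episode belongs in the feed. A hypothetical sketch; the field names and placeholder syntax are assumptions, not the actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CustomFeed:
    name: str
    tagline: str = ""
    title_format: str = ""         # e.g. "%p: %t"; the real placeholder syntax may differ
    starts_on: date | None = None  # optional "fresh cut" date
    podcasts: list[str] = field(default_factory=list)  # empty list means "all shows"
    plus_plus: bool = False        # ad-free / extended audio, members only

def episode_title(feed: CustomFeed, podcast: str, title: str) -> str:
    if not feed.title_format:
        return title
    return feed.title_format.replace("%p", podcast).replace("%t", title)

def include_episode(feed: CustomFeed, podcast: str, published: date) -> bool:
    if feed.podcasts and podcast not in feed.podcasts:
        return False
    return feed.starts_on is None or published >= feed.starts_on
```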
Okay, so hang on. I used... oh, I see. Okay, okay, I see, I see. So, upload cover art. That's the thing which was messing with me, because I wanted to add mine, but then it said "or use ours." And when you say "or use ours," I'm basically changing the cover art which I uploaded
with one of yours. Interesting. Right, ours as in a Changelog cover art that previously exists. Got it. So you can, like, use JS Party's, or upload your own file, and you'll have your own cover art for your own feed.
Okay, so I've made a few changes. First of all, the name is Gerhard and Friends. Okay. The description is Kaizen 16, this episode.
Okay.
The cover art, I uploaded one, but then I changed it to the Changelog & Friends one. Okay. Starts today, 13th of September. Yes. Title format, I will leave it empty. And for the podcast, I'll choose Changelog & Friends. Okay. Yes. And this feed should contain Changelog++ ad-free extended audio. Yes.
Bam.
And automatically add new podcasts when they launch. I'm going to deselect that, because I only want Changelog & Friends. Save.
Perfect. Boom, it's there. There you go. You build a custom feed. You can grab that URL, pop it into your podcast app, subscribe to it. Got it.
And I found the first bug. No, you didn't. So the bug is, if I upload my cover art and then I select another cover art from one of yours, it uses my cover art, but not in the admin. In the admin, it shows me that it's using yours, but when I create the feed, it's using my cover art. Okay, so you did both.
I did both, yes. And then submitted the form? Correct, yes. Okay, you are the first person that's done that, I think.
Of course. Okay.
People usually pick one or the other. So, okay. Open an issue. I will get that fixed.
I will. Let me take a screenshot so that I remember. Boom. There. Awesome. Cool. Looks great. Actually, hang on. The picture which I want for this cover art is us three recording right now. So if Adam looks up, I'll take a screenshot. There you go. That will be my cover art. Okay. Got it. So good.
Too good. So custom. So cool. You know, one thing I was going to do, which I haven't done yet, and this is a reminder, is I want to put the Changelog legacy cover art in the list. Don't you think so, Adam? Like, you can have the old Changelog legacy logo if you want.
That would be cool, actually. Yeah. Super dope.
Actually, that's an idea we had: to expand these, to maybe have custom artists come in and create new cover art you can select from. That might be cool. Very cool. But yeah, it's been kind of a screaming success, honestly. Currently, we have 320 Changelog++ members, and those 320 people have created 144 custom feeds so far.
Including mine. Including yours. I see yours right there. Amazing. And the cover is your face. Correct, yes. Cool. So cool. So that's the feature. That's amazing. It worked very well, I have to say. I just still have to load it in my podcast player, but once I do that... Amazing. Well, let's stop there then, because that's where I'm at and that's where I'm stuck. Jared, you're also stuck? Yes. So Gerhard's next step is to do what I've done, and I think he may have the same outcome. I don't know.
My outcome was I loaded the URL into my clipboard on my iPhone, opened up Overcast, add podcast via URL, did that, clicked add URL, and it says not a valid URL. Does yours have a start date? No. Okay. I don't think so.
So yours has a bunch. The URL only contains feeds. It's forward slash feeds, forward slash a hash. So it doesn't have the full...
Oh, I might've changed. I might've screwed that up yesterday when I was fixing something else. When I was giving you your account.
This has been weeks for me. I just haven't reported it to you yet.
And you've been waiting for this to do it live? Why would you wait this long?
Are you waiting for this? Yes. Public embarrassment. Okay. No, just the fact that I just haven't done it yet. I'm sorry.
Okay, now that's all right. I think that that copy URL button should have copied the entire URL. Did it just give you the path, Gerhard? It did, yes. No wonder it's not a valid feed. So I literally fussed with that yesterday because I was giving Adam a different copy paste button and I might have broken it yesterday.
Now, interestingly, if I hover over it, I can see the correct link.
Yeah.
But when I click on it, I only get the path.
Yeah, the href is correct, but the data-copy value is incorrect. And I'm pretty sure I broke that yesterday. So that used to work because all these other people are happy, but you're sad because I broke it yesterday.
So I have a quick fix. You right-click the get URL, and you say copy URL rather than relying on the click action. And then you get the proper URL. Try that, Adam. Let's see if that works.
Let's see here. Copy link. Did it solve my problem? Let me enter it. Boom, goes the dynamite. It's at least not yelling at me.
It is taking its time, though. Well, the other reason why that was happening probably a few weeks ago is because... if you loaded a feed that has all of our episodes, for instance, a 1,000-plus episode, 12-megabyte XML file, we would serve it slow enough that Overcast would time out and it wouldn't think it was a valid feed. But then I fixed that by pushing everything through the CDN.
Because at first, when I first rolled it out, it was just loading directly off the app servers. I know it's just a little bit too slow for Overcast.
Right. Okay, next question then. This is a UX question. I am not a plus plus subscriber, but I can click the option and I assume it does nothing to say this feed should contain plus plus ad free extended audio. I haven't clicked play because I just literally loaded it for the first time now, but I'm assuming that I won't have plus plus content because I'm not a plus plus subscriber. Is that true?
No, I do have plus plus content.
I'm thinking you are an admin and so it doesn't matter.
Okay, gotcha. So does this check then only show up for people who can check it?
The entire UI for building custom feeds only shows up if you are an active plus plus member or an admin, which is literally the three of us.
Okay, that makes more sense then.
Like you can't even build custom feeds. Now I did consider custom feeds for all, you know, let the people have the custom feeds, but plus plus people obviously would only get, be the only ones who get the checkbox. That's something that I'd be open to if lots of people want it. But for now I was like, well, let's let our plus plus people be special for a while.
Is there a cost center with these custom feeds? Like, is there an additive to the cost, if we were having to deal with costs? Marginal.
Okay. Every custom feed has to be updated every time an episode's updated. And so if we had 100,000 of them, there would be some processing, and maybe we'd hit R2 with too many put actions. Versus, you know, it's free egress, but not all operations are free. And so there's, like, class A operations and class B operations.
And the more you edit those files and change them, I think eventually those operations add up to costing you money, but it's marginal on the margins. If it got to be a huge feature where, I mean, if we had a hundred thousand people doing custom feeds, we'd find a way of paying for that. You know? Yeah. That's a different problem. But yeah, it's a marginal cost, but not, not worth considering.
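For a sense of scale, here's a back-of-the-envelope estimate of those R2 write costs, assuming roughly R2's published Class A rate (about $4.50 per million write operations) and free egress; the numbers are illustrative only, not a quote.

```python
# Back-of-the-envelope cost of rewriting every custom feed on each publish,
# assuming roughly R2's published Class A (write) rate and free egress.
CLASS_A_PER_MILLION = 4.50  # dollars per million write operations (assumed list price)

def monthly_feed_write_cost(feeds: int, episodes_per_month: int) -> float:
    writes = feeds * episodes_per_month  # each new episode touches every custom feed
    return writes / 1_000_000 * CLASS_A_PER_MILLION

print(monthly_feed_write_cost(144, 40))      # today's 144 feeds: a few cents a month
print(monthly_feed_write_cost(100_000, 40))  # 100k feeds: roughly $18 a month
```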
Gotcha.
Okay. So the copy can be updated pretty easily. There's probably a fix going on already for that, because it's so simple. Before this ships, it'll be out there. Good. Well, because I mean, I was like, well, how do I get this URL to my iPhone? I guess I can, like, copy it and, like, AirDrop it to my iPhone. Maybe it'll open up in the browser.
And I was like, well, let me just go on the web and, you know, get URL essentially.
Yes, our user experience assumes that our users are nerds. And so far, before I broke that copy button yesterday, there have been zero people who are like, now how do I get this into my podcast app? Like, no one's asked us that, because all of our Plus Plus members completely understand how to copy it and get it into their whatever, you know. They are smarter than me, most of them.
Now, if it was for a broader audience and this was a baking show and we're going to provide custom feeds for bakers or aspiring bakers, then I probably would have to add more of a handholding. And Supercast actually does a really good job of handholding you into your private feed because it's not a straightforward mental process for most people, just for nerds.
Yeah, I agree. It kind of requires some workaround. There's really nothing you can do about that, right? I mean, you're adding literally a custom feed via URL that no index knows about. So it's obvious you have to do some sort of workaround to get there, to get your feed into your... Yeah, I mean, a better UX would be...
After the custom feed's created, we send you an email. That email contains a bunch of buttons. Each button's like add to Overcast, add to Pocket Casts, add to Apple Podcasts, and so on. I like that idea a lot. That's how Supercast works.
Yeah, I like that idea a lot. Email them every time it changes. They get it upon creation, and now that is immutable until... well, theoretically mutable, until they edit it again, and then it's mutated. You know? So it's in stone. Yeah, it's mutated.
We could certainly add a button that says email this to me. you know, next to the get URL, maybe like email me the URL. It's a good idea. And that's like a fast way to get it into your phone without having to do phone copy paste or airdrop like Gerhard did.
Yeah. Cause you don't know about the email.
So that's a good feature even for nerds. Cause it's just easier that way.
Well, that would have solved the problem of me having to get the data onto my iPhone.
Totally. Which my email is. Exactly. I think that we should add that as a feature. It's a good idea.
Yeah.
Hey, Jared here in post. That email it to me feature just shipped today. And that copy paste bug fixed. Kaizen.
Custom feeds are here, y'all. If you're a Plus Plus subscriber, by the way, changelog.com slash plus plus. It's better.
If you are not a Plus Plus subscriber and you desperately want this feature, let us know. Because, you know, squeaky wheels and oil. Must be in Zulip. I don't know.
That's the other catch, right?
Anyways. Well, not even... Gerhard's not even in Zulip yet, so let's not get ahead of ourselves. No, but what's the URL? Because I would like to join. changelog.zulipchat.com.
Okay.
But can you just get on from there? I don't know. It's new to us.
Zulipchat.com. I'm doing it now. Let's see. Log in. Okay. Log in with Google. Go. There you go. Yes, continue. Okay, sign up. You need an invitation to join this organization.
All right, go to our Slack, go to main, scroll up a little bit. You'll see there's an invite link. To get into Zulip, you have to go to Slack. It's a Trojan horse. That's how you do it. That's right. You install one through the other.
Listeners, you could do this too. You can follow these same instructions. It is in main. I think it's Friday, September 6th. Okay. Jared posted it as a reply after that conversation: now we're trying out Zulip in earnest. And there's a link that says join Zulip here. And it's a long link that I could read on the air, but no one would ever hand-type that in. I agree.
You can put it in the show notes though, so it might be there. So there you go. Yeah. We've shared our thoughts already elsewhere, on Friends, with this, but you know, I'd be curious. We'll be so many Kaizens away... well, at least one more Kaizen away, multiple months, before we get Gerhard's.
By the next Kaizen, we may be like transitioned over to Zulip. We might be self hosting it, but I don't think we should do that.
No way. There's a Kaizen channel. This makes me so happy.
And it's for all ideas about making things better and stuff. I even put one in there. You can read it.
Oh, wow. Okay. I'm definitely going to check this out. This is nice. This is a very nice surprise. It was worth joining just for this.
Oh, wow. So cool. This is nice. Isn't that cool? I thought a Kaizen channel would be on point. So cool. So I was kind of thinking like, well, how do we replicate our dev channel over here? And it's like, well, dev is just one thing. Like let's have a Kaizen and then different topics can be based on.
Big thumbs up.
Yeah.
Big thumbs up. So amazing.
All right. Awesome. Custom feeds, Zulip chat, Kaizening. What's next on the docket?
Well, I'm going to talk about one very quick improvement, actually two, which I've noticed. The news. Yes. I love the latest. Oh, you like that? That graphic is so cool. I really like the small tweaks. Also, the separators, the dividers between the various news items. They just stand out more. I really, really like it. And it feels like more... The play button is amazing, by the way. I love it.
I can definitely see it. I made the play button stand out. Yeah. It feels so polished. Thank you. It really does. But the latest is so amazing. And the news archive, it's there. And it works.
Yes, it is. Amazing. I appreciate your enthusiasm. To tell everybody what the latest is: I literally put an arrow, and the words "the latest," on our homepage, pointing to the issue, because it could be discombobulating. Like, you look at it on a desktop; at least on mobile, it goes vertical. But, like, on the left is kind of the information about the news and the signups and stuff.
And on the right is the latest issue. But you may not know, like, what am I looking at when you land on the page? What's the thing on the right-hand side? And so I just put this little arrow, handcrafted with SVG, by the way. And the words, the latest, like someone just scratched them on the page that points to that issue. So it's just kind of...
giving you a little bit of context and Gerhard loves it.
So I appreciate it. It gives us another dimension. It's playful. It's, you know, like there is some fun to be had here. It's not just all serious. It's not like another news channel, but it's really, really nice. Like the whole thing, it feels so much more polished compared to last time. I can definitely see like the tiny, tiny improvements. Yeah. Very cool. So much Kaizen. Indeed. Cool.
Well, the next big item on my list is to talk about twice, 2x, faster time to deploy. This is something we just spent a bit of time on. I was surprised, by the way, by the latest deploy. It was slower than 2x, but we can get there. Okay. The first thing which I would like to ask is how do you feel about our application deploys in general? Like, does it feel slow? Does it feel fast?
Does it feel okay? Do you feel surprised by something? How do application deploys when you push a change to a repo feel to you? Historically or after this change?
Historically. Historically, I would say too slow. Too slow, okay.
Adam?
Yeah, historically too slow.
Okay, okay. So what would make them not too slow? Is there like a duration? Maybe like a 2x. 2x, okay. That's so leading though. I literally meant like how many minutes or seconds, I think we talked about that. Would it feel that it's good enough?
There's like this threshold that I'm not sure exactly. It's probably fuzzy, but it's the point where like you're waiting so long that you forget that you're waiting and you go do something else. And I think that's measured in single digit minutes, but not necessarily seconds. Like I can wait 60 seconds. Well, that's my seconds. I can wait one minute.
and maybe I'm just hanging out in chat, waiting for that thing to show me that it's live. Yeah. But as soon as it's longer than that, I'm thinking, well, I should come back in five. Then I forget what I was doing, I don't come back, and I've lost flow, basically. So I would say around a minute. You know, 30 seconds would be spectacular. It doesn't have to be instant, but I think
two, three, four, five minutes, it's going to be where you're like, yeah, it's kind of like friction to deploy because you deploy and you're like, now I got to wait five or 10 minutes.
Okay.
That's my very fuzzy answer.
Okay. That's a good one. So what used to happen before this change: we used to run a Dagger Engine on Fly so that it would cache previous operations. Okay. So that subsequent runs would be much quicker, especially when nothing changes or very little changes.
The problem with that approach was that from GitHub Actions, you had to open a WireGuard tunnel into Fly so that you'd have that connectivity to the engine. And what would happen quite often is that tunnel, for whatever reason, would maybe be established, but you couldn't connect to the instance correctly, and you would only find that out a minute or two into the run.
And then what used to happen, you would fall back to GitHub, which is much slower because there's no caching, there's no previous state, and the runners themselves, because they're free, they are slower. Two CPUs and seven gig, which means that you have to, when you have to recompile the application from scratch, it can easily take seven, eight, 10 minutes.
And that's what would lead to those really slow deploys. So what we did between the Kaizens, since the last Kaizen, Let me see which pull request was that. It was pull request 522. So you can go and check it out to see what that looks like.
So when everything would work perfectly, when the operations would be cached, you could get a new deploy within four minutes, between four and five minutes thereabouts. And with this change, what I was aiming for is to do two minutes or less.
And when I captured, when I ran this, like the initial tests and so on and so forth, we could see that while the first deploy would be slightly slower, because, you know, there was nothing, subsequent deploys would take about two minutes. Two minutes and 15 seconds, the one which I have right here, which is a screenshot on that pull request 522. So how did we accomplish this?
We're using Namespace.so, which provides faster GitHub Actions runners, basically faster builds. And we run the engine there. And when a run starts, we basically restore everything from cache, the Namespace cache, which is much, much faster. And we can see up there, basically, per run, we can see how much CPU is being used. We can see how much memory.
Again, these are all screenshots on that pull request. And while the first run, obviously, you use quite a bit of CPU because you have to compile all the Elixir into bytecode and all of that, subsequent runs are much, much quicker. And the other thing which I did, I split the, let's see, is it here? It's not actually here. We need to go to Honeycomb to see that.
So I'm going to Honeycomb to look at that. I've split the build time, basically the build, test, and publish from the deploy time because something really interesting is happening there. So let's take, for example, before this change, let's take Dagger on Fly, one of the blue ones, and have a look at the trace. So we have this previous run which actually took 4 minutes and 21 seconds.
And all of it, like, all together, it took basically three minutes. There's, like, some time to start the engine, to start the machine, whatever, whatever. All in all, four minutes and 20 seconds. So a newer run, for example this one, which was fairly fast, it was two and a half minutes. If we look at the trace, we can see that with Dagger on Namespace, the build, test, and publish was 54 seconds.
So in 54 seconds, we went from just getting the code to getting the final artifact, which is a container image that we ship into production. In this case, we basically publish it to GHCR.io. And then the deploy starts. And the deploy took one minute and 16 seconds. So we can see that, you know, like with this split is very clear where the time is spent.
And while the build time and the publish time is fairly fast, I mean, less than a minute in this case, the deploy takes a while because we do blue-green, new machines are being promoted, the application has to start, it has to do the health checks. So there's quite a few things which happen behind the scenes that, you know, if you look at it as like one unit, it's difficult to understand.
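The split itself is just two top-level spans per pipeline run instead of one, so the trace makes it obvious where the time goes. A minimal sketch with the OpenTelemetry Python SDK; the real traces come from the GitHub Actions/Dagger integration and are exported to Honeycomb, and the span names here are illustrative.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# In CI the exporter would point at Honeycomb's OTLP endpoint; console output is enough for a sketch.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ci-pipeline")

with tracer.start_as_current_span("pipeline"):
    with tracer.start_as_current_span("build-test-publish"):
        pass  # build, run tests, push the container image to ghcr.io
    with tracer.start_as_current_span("deploy"):
        pass  # fly deploy: blue-green promotion, app boot, health checks
```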
So this was the ideal case. This is what I thought would happen. Of course, the last deploys... if I'm just going to filter these, Dagger on Namespace. By the way, we are in Honeycomb. We send all the traces, all the build traces, from GitHub Actions to Honeycomb. And you can see how we do that integration in our repo. You can see that we had this one, 2.77 minutes, which is roughly 2:46.
But the next one was surprising, which took nearly five minutes. And if I look at this trace, this was, again, nothing changed. Stuff had to be recompiled. But in this case, the build, test, and publish took nearly three minutes, which this tells me there is some variability into the various runs when it builds it. I don't know why this happens, but I would like to follow up on that.
As a TLDR, this change meant that we have less moving parts. And when namespace works, and this is something, again, that we need to understand, why did this run take longer? It should take within two minutes. We should be out. A change should be out in production. Half the time is spent in build, and half the time is spent on deploys.
So when it comes to optimizing something, now we get to choose which side do we optimize. And I think build, test, and publish is fairly fast. The slower part is the actual deployment. So how can we maybe half that? How can we get those changes once they're finished and everything is bundled? How could we get it out quicker?
I love it. I think... do you have ideas on that? Well, I think the application boot time could be improved, right? Because it takes a while for the app to boot. When I say it takes a while, it may take 20-30 seconds for it to be healthy, all the connections to be established. Now, I'm not sure exactly which parts of those, you know, would be the easiest ones to optimize.
But I think the application going from the deploy starting and the deploy finishing, taking a minute and a half is a bit long. So I'll need to dig deeper. Like, is it when it comes to connecting to the database? Is it just the application itself being healthy? Like which part needs to be optimized? But again, we're talking, this is like a minute and a half.
We're optimizing a minute and a half just to put this into perspective. And that's why I started with the question, like how fast is fast enough?
Yeah. And I think if you're at 90 seconds, you're probably right about there. I would still go in and spend an hour thinking like, is there a low hanging fruit that we haven't looked at yet that we could, you know, squeeze 10 more seconds off. And then I would stop squeezing the radish after that. You know, I see. That'd be my take on it, Adam.
Well, the flow, it seems, is every time new code is pushed to our primary branch on the repository, a new deploy is queued up. And this process happens for each new commit to the primary branch. A new application is spun up, it's promoted, so if I deploy slash push new code, and then a minute later Jared does the same thing... My push does this process. My application is promoted.
Jared's commit does the same thing. His application is then promoted. And that's via networking. And then these old machines are just, you know, like thrown off and then the new machines are promoted and they just fall by the wayside. Correct. Which totally makes sense. I think you have things happening that we want to happen.
I agree with you on the low-hanging fruit, but on the app boot process, we've got even things like 1Password, those things being injected from their CLI. I'd imagine that API call is not strenuous, but it's probably seconds, right? Yeah.
So there's probably in each thing we're booting up as part of the app boot process for every commit, there's at least one to several seconds per thing we're instantiating upon boot. Well, that's just me hypothesizing how things work.
No, that's a good one. That's exactly it. We're trying to hash it out so that we share the understanding that each of us holds, so that we can talk about what we'd improve. Because we talked about this in the past, and I really liked Jared's question.
He was asking: we're talking about Kaizening, we're talking about all this change, but are we actually improving? And that's why I tried to think about this. I was thinking, okay, what would the improvement look like? Can we measure it, and can we check that we have delivered on it?
And until the last deploy that went out, I was fairly happy with the duration of these deploys. But based on the one which I have right in front of me, the build going from one minute and a bit to almost three minutes, I think that variability is something that I would like to understand first, before optimizing the boot time.
Is it the CPUs then that's impacting it, you think? Like the CPUs and the horsepower behind the build and test?
Well, let's open up Namespace. Let's go to instances. We can see the last build, which you can see here, like all the builds. This is inside Dagger, is that right? This is Namespace. All this is Namespace, by the way. So we're using Namespace for the runners. And I would like... This is a third-party service? It is a third-party service, yes. That you just found, or someone told you about?
Exactly, yes. I am paying attention to various build services and depot.dev. I love it. Namespace.so. Namespace.so, yes. Our trial is almost over. Exactly, yes. Now, how much will it cost us, by the way? Every minute. Three days left on your trial. Three days left on our trial, yes. I'm getting nervous here. So hang on. Per minute, we're paying $0.0015. $0.0015.
which means that for 40 minutes, like, okay, for an hour, we're paying less than 10 cents for an hour of build time. So, you know, pay as you go. It's really not expensive. So it's okay if I have to put my card because we're talking cents per month for our builds. That makes sense. What does a single build cost us then? So when it's five minutes, let's see, I'll do the math now really quick.
Hang on. Thank you. Hang on. It's less than a cent. A build which takes five minutes is less than a cent. Is that right? Yeah, less than a cent, like 0.75. What is less than a cent? Zero cents. No, there was another unit in the past. I forget what it's called. Whatever, like it's- Satoshi? No, that's a different- Like 75% of a cent. Okay.
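To make the napkin math explicit, here is the same arithmetic at the $0.0015-per-minute rate quoted above (just a sketch; bc is only doing the multiplication):

```sh
# Namespace runner cost at $0.0015 per build-minute:
echo "60 * 0.0015" | bc   # one hour of builds  -> .0900 dollars (under ten cents)
echo "5 * 0.0015" | bc    # one 5-minute build  -> .0075 dollars (about three quarters of a cent)
```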
Okay, so it's like definitely- So reasonable, that's reasonable. Yeah, very, very reasonable, I would say.
If we get it down faster, it's even less.
Exactly, so- What exactly does Namespace do, though? I mean, is it just a machine that has proprietary code on it that we send something to in order to do a build?
So Namespace basically runs your GitHub Actions much quicker, because they have better hardware and better networking than GitHub Actions themselves. So you can literally use Namespace to replace GitHub Actions runners.
So they just use the Actions API, but you're running on their infra. Exactly.
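In practice, "using the Actions API on their infra" usually comes down to changing the runs-on label in the workflow files. A rough sketch of that swap; the Namespace label shown is an assumption, so check their docs for the actual profile name:

```sh
# Swap the default GitHub-hosted runner for a Namespace-hosted one across all workflows.
# GNU sed shown; on macOS use `sed -i ''`. The label value is illustrative only.
sed -i 's/runs-on: ubuntu-latest/runs-on: namespace-profile-default/' .github/workflows/*.yml
```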
Smart. Or you can use faster Docker builds, you know. But they also have preview environments, which I haven't tried, and code sandboxes. That's something... Sponsor? That's what I'm thinking, because I have a shout-out here. Hang on, let me just get the name straight. To be clear, they are not a sponsor, but we're saying they should be. I think they should be. I think... Hugo. I just know his first name and I'm trying to find... um, because our credit card is expiring.
Right, we need those six cents, don't we? We need those six cents. That's Gerhard's credit card for now. Exactly, you can use mine, it's okay. Hugo Santos. No relation, no relation. Yeah, no relation, but I think if there is someone that you should talk to at Namespace, it would be him.
As I was setting all this stuff up, he was very responsive to emails, even on the weekend, and I think he's one of the founders, by the way, so I thought that was a very nice touch. And he really helped go through all the various questions which I had, and all the "does this look right?" checks. He even looked at the pull request to see how we implemented it.
And all in all, the promise is there. We can see that it does work well; when it works, we get those two minutes. But sometimes it takes more, and then the question is, well, why does it take more? So that's something which I'm going to follow up on.
Mm-hmm.
Cool. Cool. Well, I'm excited for the follow-up and for this progress. Indeed. Cool.
Well, our friends over at Speakeasy have the complete platform for API developer experience. They can generate SDKs, Terraform providers, API testing, docs, and more. And they just released a new version of their Python SDK generation that's optimized for anyone building an AI API.
Every Python SDK comes with Pydantic models for request and response objects, an HTTPX client for async and synchronous method calls, and support for server-sent events as well. Speakeasy is everything you need to give your Python users an amazing experience integrating with your API. Learn more at speakeasy.com slash Python. Again, speakeasy.com slash Python.
And I'm also here with Todd Kaufman, CEO of Test Double, testdouble.com. You may know Test Double from our good friend, Justin Searls. So Todd, on your homepage, I see an awesome quote from Eileen Uchitelle. She says, quote, hot take: just have Test Double build all your stuff.
End quote. We did not pay Eileen for that quote, to be clear, but we do very much appreciate her sharing it. Yeah, we had the great fortune to work with Eileen and Aaron Patterson on the upgrade of GitHub's Ruby on Rails framework. And that's a relatively complex problem. It's a very large system. There's a lot of engineers actively working on it
at the same time that we were performing that upgrade.
So being able to collaborate with them, achieve the outcome of getting them upgraded to the latest and greatest Ruby on Rails that has all of the security patches and everything that you would expect of the more modern versions of the framework, while still not holding their business back from delivering features, we felt was a pretty significant accomplishment.
And it's great to work with people like Eileen and Aaron, because we obviously learned a lot. We were able to collaborate effectively with them. But to hear that they were delighted by the outcome as well is very humbling, for sure.
Take me one layer deeper on this engagement. How many folks did you apply to this engagement? What was the objective? What did you do, etc.?
Yeah, I think we had between two and four people at any phase of the engagement. So we tend to run with relatively small teams. We do believe smaller teams tend to be more efficient and more productive. So wherever possible, we try to get by with as few people as we can. With this project, we were working directly with members from GitHub as well.
So there were full-time staff on GitHub who were collaborating with us day in, day out on the project. This was a fairly clear set of expectations. We wanted to get to Rails, I believe 5.2 at the time and Ruby 2.5. Don't hold me to those numbers, but we had clear expectations at the outset.
So from there, it was just a matter of figuring out the process that we were going to pursue to get these upgrades done without having a sizable impact on their team.
A lot of the consultants on the project had some experience doing Rails upgrades, maybe not at that scale at that point, but it was really exciting because we were able to develop a process that we think is very consistent in allowing Rails upgrades to be done without introducing a lot of risk for the client.
So there's not a fear that, hey, we've missed something or, you know, this thing's going to fall over under scale.
We do it very incrementally so that the team can, as like I said, keep working on feature delivery without being impacted, but also so that we are very certain that we've covered all the bases and really got the system to a state where it's functionally equivalent to the last version, just on a newer version of Rails and Ruby.
Very cool, Todd. I love it. Find out more about Test Double's software investment problem solvers at testdouble.com. That's testdouble.com, T-E-S-T-D-O-U-B-L-E.com.
So I would like to switch gears to one of Adam's questions. And he was asking if Neon is working for us as expected and the state of Neon. So is Neon working for us as expected? Based on everything I've seen, it is. Like I was looking at, for example, the metrics. I was looking at how it behaves in the Neon console. This is us for the last 14 days.
So what we see here in the Neon console, we see our main database. we can see that we have been using 0.04% of a CPU. So really not CPU, but in terms of memory, we have eight allocated. We're using 1.3 gigabytes. 1.3 gigabytes used out of eight allocated. So we are over-allocating both CPU and memory. So fairly little load, I would say, and things are just humming along. So no issues whatsoever.
Do we need to push this harder somehow? Like, do we need to get the vector search in our database or something? Weren't you going to set us up an AI agent, Gerhard?
Yes, I was. I didn't get to that, but that would not use this database, by the way. That would be something different now.
PG vector, man. PG vector. Get it in there.
Right. I would, but not in this production database. So this is special, right? I mean, this is exactly what we want to see. If anything, we can, because we have the minimum compute setting set to two CPUs and eight gigs of memory. And I know that Neon does an excellent job of auto-scaling us when we need to. We didn't need to get auto-scaled because we are below the minimum threshold.
So we could maybe even lower the threshold and it would still be fine.
So we're not, we're not using this to its fullest extent is my point. No.
So we need some arbitrary workloads in order to push it. Well, to see where it breaks, we wouldn't need it to break. I think if anything, one thing that I would like us to do more is use Neon for development databases. And I have something there I haven't finished, but I would like to move on to that as well, if everyone's fine.
Adam, further thoughts or questions around Neon? This was your, this was your baby.
I think the question I have is, you know, while the thresholds are low and we're below our overallocation, you know, what should we expect? And this is good news. This is good news that we're not.
Yeah, I'm just saying, like, it's hard for us to use it and see if it's good or bad because we're not heavy database users. And I was just saying we just need some more arbitrary workloads to actually flex this thing. But I was mostly just being facetious.
Gotcha. Yeah. I'm in the dashboard too, and I'm looking at a different part of that same monitoring section, which is rows: rows being deleted, updated, inserted over time, which is kind of cool. So there's definitely activity. We're aware of that.
I think the other things that we should pay attention to in terms of is it working for us as expected is, I would say some of that is potentially on you, Jared, and you too, Gerhard, is that we've got the idea of branching. Gerhard, I know that you're familiar with it because you demonstrated some of this last time we talked, but being able to integrate some of those futuristic, let's just say,
features into a database platform. This is managed. It's serverless. We don't have to manage it. We get a great dashboard. We get the opportunity for branches. Have you been using branches, Jared? Do you need to use branches? Does that workflow not matter to you? I think that's the DX and the performance is the two things I think I care about.
I have a custom branch which I use not to develop against, but to sync from. I guess it's not mine, it's that dev 2024 one. Maybe Gerhard created that, but that's the one that I do use. I pull from that, so I'm not pulling from our main branch, because there's just less load on our main branch that way. I'm using that, but I synchronize it locally, manually,
and then develop against my own Postgres, because I have a local Postgres. The one thing about it is, because it's a Neon branch, I will have to go in there and synchronize it back up to main and then pull from it. And I'm sure that's automatable, but I just haven't done that. I've been waiting for Gerhard's all-in-one solution.
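Sketched as commands, Jared's manual loop might look something like this, assuming the Neon CLI (neonctl) is installed and authenticated; the branch name, the local database name, and the branches reset subcommand are all assumptions on my part, so check your CLI version:

```sh
# 1. Re-sync the dev branch from main in Neon (the step done in the UI today).
neonctl branches reset dev-2024 --parent

# 2. Pull that branch's data down into the local Postgres used for development.
DEV_URL="$(neonctl connection-string dev-2024)"
dropdb --if-exists changelog_dev
createdb changelog_dev
pg_dump "$DEV_URL" | psql --quiet changelog_dev
```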
Yes, that's coming. That's my next topic to share. What exactly is that? Well, that would mean tooling that's local to make all of this easy. Jared wouldn't need to go to the UI to click any buttons to do anything. He would just run a command locally and the thing would happen locally. He wouldn't need to even open this UI. Shouldn't that be a Neon native thing though? It is.
It does have a CLI, but the problem is you need to, first of all, install the CLI, configure the CLI, like add quite a few flags, connect the CLI to your local Postgres, like all that glue. That's the stuff which I've been working on. And I can talk about that a bit more.
And so the idea would be to just automate some of that, not have to go through all the steps. Still do the CLI installation like any normal user would. Correct. But maybe a Neon setup script that populates a file with credentials or something.
Some command that you run locally that knows what the sequence of steps is and what to do. For example, maybe you don't have the CLI installed. Well, install the CLI. You need to have some secrets. Well, here's the 1Password CLI, and by the way, the secrets are here, in this vault. So stuff like that. Yeah.
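A minimal sketch of that kind of glue, assuming Homebrew, the 1Password CLI, and a Neon API key stored in a vault; the vault and item names are invented for illustration:

```sh
# Install the Neon CLI if it's not already there.
command -v neonctl >/dev/null 2>&1 || brew install neonctl

# Pull the credential from 1Password instead of pasting it around.
export NEON_API_KEY="$(op read 'op://Infrastructure/Neon/api_key')"

# From here the CLI is configured enough for database commands to run.
neonctl projects list
```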
Speaking of 1Password, did you notice their new SDKs? Did you pay attention to the new SDKs they released? TypeScript, Go, a couple of others for native integrations. Obviously we're Elixir, so it doesn't really matter to us, but maybe in some of the Go pipelining you've probably done. Would it make sense to skip op and go straight to Go with the SDK? Yeah.
Because op is their CLI, right? Same. It's not an SDK. The SDK lets you natively integrate into the language.
So it's possible to use something else. But at the end of the day, the integration needs to work. And whether you use the SDK or whether you use the CLI is just an implementation detail. Just doesn't matter, yeah. What we care about is: is our implementation reliable? Do we have any issues with it? So far, no. Yeah. Are we using service accounts?
And that's something that we had been waiting for, because without service accounts, you would need to set up a Connect server, which I didn't want to do. So that was a big deal for us. Whether we use the CLI or the SDK, we could switch, but it wouldn't make that much of a difference. Now, if the application itself, while it runs, was doing certain things,
Maybe that's interesting, maybe we could change some of the boot phase so that we wouldn't inject the secrets from outside the application and the application itself could get them directly. But I really want to get Elixir releases going. And once we have those, things change a little bit.
But it's all just like maybe shuffling some code from here to here, but ultimately it will still behave the same, you know, just like you would maybe bring it into the language. So I haven't seen their latest SDKs, but I would like to check them out. That's a good one for me to look into. Okay.
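For context on injecting secrets from outside the application, the CLI-based boot typically looks something like this; the env file name and the start command are assumptions, not the repo's actual setup:

```sh
# .env.op contains op:// secret references; `op run` resolves them and execs the app
# with real values in its environment, so the application code never talks to 1Password itself.
op run --env-file=.env.op -- mix phx.server
```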
So the tooling that Jared was mentioning, to make things simpler for him: I've been thinking about it from a couple of perspectives. And I realized that to do this right, it will be slightly harder. And the reason why it's slightly harder is because I would like to challenge a status quo. The status quo is: you need Dagger for all of this. Maybe you don't, right?
So I'm trying a different approach. And the mindset that I have for this is Kent Beck, September 2012: for each desired change, make the change easy (warning, this may be hard), then make the easy change. So what I've done for this Kaizen: I made that change easy, which was hard, so that I could make the easy change. How hard was it? So that's what happened. Well, let's have a look at it.
So we're looking now at pull request 521. And 521 introduces some new tooling. But I promise it's just a CLI. And what's special about it is that everything runs locally. There's no containers. There's no Docker. There's no Dagger. Everything is local. And I can see Jared's eyebrows go up a bit because that's exactly what he wanted all this time.
So what pull request 521 introduces is Just, which is a command runner. It's written in Rust, but it's just a CLI. And if you were, for example, Jared, or even Adam, to try this, if you were to run just in our repository at the top level, you would see what is possible; Just calls them recipes. And the one which I think the audience will appreciate is just contribute.
So remember how we had like this manual step, like install Postgres, you know, get Erlang, get Elixir, get this, get that. I mean, that's still valid, right? You can still use that manual approach. Or if you run Just Contribute, it will do all those things for you running local commands. It still uses Homebrew, it still uses ASDF, but everything that runs, it runs it locally.
And the reason why this is cool is because, I mean, your local machine, whatever you have running, it remains king. There's no containers. Again, I keep mentioning this because that adds an extra layer. And what that means, stuff like, for example, importing a database in a local PostgreSQL is simpler because that's what you already have running.
Resolving the Neon CLI, again, it's just like a brew install. It's there and you wire things together. You don't have to deal with networking between containers. You don't have to pass context inside of containers, which can be tricky, especially when it comes to sockets, especially when it comes to special files. So I'm wondering, how will this work out in practice?
And the thing which I didn't have time to do, I didn't have time to implement just db-prod-import, which would be the only command that you'd run to connect to Neon, pull down whatever needs to pull down, maybe install the CLI if it doesn't have it, and then just in your local Postgres, import the latest copy. Same thing for just dbfork, which would be an equivalent of what we had before.
The difference is that was all using Dagger and containers and, you know, it was, I mean, have you used it, Jared, apart from when we've done it?
Mm-mm.
There you go. Adam, have you ever run Dagger? Never. In the three years that we've had it? Never. Not one time. There you go. How many times did you have to install things locally for you to be able to develop changelog in the last three years?
Well, that's where my personal angst lies. It lives right there in that question. How many times? What's the pain level? It's high for me.
So Adam might be more excited about this than I am. Pull request 521. I mean, even you, Jared, if you want to try it. If you do a dry run (it has a dry run option, by the way), it won't apply anything, but it will show you all the commands that would run if you were to run them yourself, for example. And there may be quite a lot of stuff, right, when you look at it that way.
But it's a good way to understand, like, if you were to do this locally, and if you were to configure all these things, what would it do without actually doing it? So I tried it on a brand new Mac, and I think that's the recording that I have on that pull request.
I might need to get a brand new Mac so I can try this. Look at that.
That's very, very... I've been waiting for a good reason to upgrade, you know? There you go. And honestly, within five minutes, depending on your internet connection, everything should be set up. Everything is local, the Postgres, everything.
What we don't yet have, and I think this is where we're working towards, is how do we, first of all, cleanse the data so that contributors can load a type of data locally. But I think that's like a follow-up. First of all, we want Jared to be able to do this with a single command, refresh his local data. And after I have this, the bulk of the work done, this step is really simple. How simple?
Maybe half an hour at most. That's what I'm thinking. So not much.
So it should be done before the day's over. Yeah, it should be done. Exactly. It should be done. One thing I'm noticing is that you're switching back to brew install Postgres.
I'm just curious about that change. So, I mentioned it in one of the comments when I committed. Basically, when I was installing it via asdf, the problem was with icu4c; I just couldn't compile Postgres from asdf correctly. And since then, in Homebrew, we can now install postgresql@16, so you can specify which major version, which was not possible, I think, two years ago when I did this initially.
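The Homebrew change boils down to pinning the major version, the part the asdf builds were choking on; a small sketch, assuming the standard postgresql@16 formula:

```sh
# Install and start a specific Postgres major version via Homebrew.
brew install postgresql@16
brew services start postgresql@16

# The versioned formula is keg-only, so its binaries may need to go on PATH explicitly.
echo 'export PATH="$(brew --prefix postgresql@16)/bin:$PATH"' >> ~/.zshrc
```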
So there is that. Now let's see, let's see where this goes. I'm excited about this. If anyone tries it, let us know how it goes for you, if you want to contribute to changelog, like how far does it get. And by the way, I tested this on Linux as well, the easiest way. There's something hidden there in the justfile, it's called actions-runner. What it does is exactly what you think it does: it runs a GitHub Actions runner locally. For this you need Docker, by the way, and it loads all the context of the repository inside of that runner. So that's the beginning of what it would take to reuse this in the context of GitHub Actions. And what I'm wondering is, will it be faster than if we use Dagger? That's me challenging the status quo. The answer is either: yes, it is.
And maybe we should do that instead. It will shave off more time or no, it's not. And then I get to finally upgrade Dagger because we're on a really old version. So you still work at Dagger, right? I do. Yes, very much so. Yes. Okay.
I just want to know how much you want to challenge this status quo.
No, no, no, that hasn't changed. I'm just kidding. Cool. So, for our listener: if you want to try this, github.com/thechangelog/changelog.com. Clone the repo, brew install just, just contribute. That's it. Try those three steps if you're on macOS. If you're on Linux, it's not brew install just, it's apt-get install or yum install; the installations are there. Yeah. And just contribute.
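Spelled out, those steps (plus the dry-run flag mentioned earlier) look like this on macOS; Linux users swap the brew line for their package manager, as noted above:

```sh
git clone https://github.com/thechangelog/changelog.com && cd changelog.com
brew install just
just --list          # show the available recipes
just -n contribute   # dry run: print what would be installed without doing it
just contribute      # actually set everything up locally
```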
And what should we expect to see when we type in just contribute? Is it instructions, or a setup?
No, no, we do actually run the commands. It's going to do it for you, man.
If you do just dash n. Now, what if you have an existing repo, like Adam does? Can he do it, and it should pick up where he is? Yeah.
Yeah.
Give that a shot there, Adam. I'm so scared.
What you could do is, if you want, maybe start a new user. It shouldn't mess anything up, to be honest. It just installs. Maybe it does things differently or does things twice. I don't really know. But it should be safe.
I like this. I mean, I did run just in our repository. You get contribute, deps, dev, install. These are all the actions, or recipes. Correct. Install, Postgres down, Postgres up, tests. And each of those has a little hashtag next to it, which is a comment, essentially, of what the recipe does.
So over time we can expect to see more of these just recipes, if this pans out long term. These recipes will potentially grow, and they will be a reliable way to do things within the repository.
And it's all local. That's the big difference. Because before, I mean, even now, right, because we still kept Dagger, we still have everything that we had so far, that would always run in containers, which means it wouldn't change anything locally. And in some cases, that's exactly what you want, especially when you want to reduce the parity between test and production or staging and production.
But in this case, it's local, right? So you want something to happen locally. And local is not Linux. It means it's a Mac. So then you have that thing to deal with. in which case brew helps, and ASDF helps, and a couple of tools help. But you still have to know what are the commands that you have to run, in what order, what needs to be present, when. And this basically captures all those commands.
It's a little bit like make, which we had, and we removed. But this is a modern, I would say, version of that. Much simpler, much more streamlined, and a huge community around it. I was surprised to see how many people use Just. By the way, huge shout out to Casey, the author of Just. I really like what he did with the tool, like 20,000 stars on GitHub.
A lot of releases, 114 fresh releases, 170 contributors. Yeah, it's a big ecosystem, I have to say. Mm-hmm.
One more question on this, without me having to read the docs, thank you, if you can help me on this: can I do just dash n install, so I can just see what it might... I'm using the word just so many times... can I just see what it might do?
Exactly. Okay. And dash n, it basically stands for dry run.
Right.
The reason why you have to do it before the recipe is because some recipes can have arguments, and if they don't, like if you do the dash n at the end, it won't work. So it has to be the command just, the flags, and then the recipe or recipes because you can run multiple at once.
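Concretely, for the ordering being described, flags go before the recipe, because anything after the recipe name is treated as an argument to it:

```sh
just -n install    # dry-runs the install recipe
just install -n    # not what you want: -n here is passed to the recipe as an argument
                   # (and errors if the recipe takes none)
```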
Very cool.
But yes.
I assume that, because any good hacker who writes a CLI that's worth his weight in gold would always include a dash n, right? A dry run, yeah. Good job. What was his name? The maintainer?
Casey. Let me see if I can pronounce his surname. He's Casey on GitHub, by the way. Rodarmore? The blue planet, apparently. Casey Rodarmore. You can correct us, Casey.
Shout out to Casey.
Yeah.
C-A-S-E-Y. GitHub.com slash C-A-S-E-Y. GitHub.com slash... I'm just kidding. I was going to say it one more time. Thanks, Casey. Are we stuck in a loop? Rod or more? Rod armor.
Rod armor. Rod armor. Rod armor. Yes, I like that. That's how we're pronouncing it.
Casey, Rod armor. Correct us if that's correct, or correct us if it's not correct, or don't correct us, but go to github.com slash casey, C-A-S-E-Y. Just do it. Just do it. Just do it. That's a good one. I like it. That's cool, man. Thank you for doing that.
Not a problem. I enjoyed it. It was fun. Okay. Homelab production. Homelab to production. So next week on Wednesday, it's TalosCon and I'm calling it Justin's conference. It's the Garrison Con. The Garrison Con, exactly. I'll finally meet Justin in person. I'm giving a talk. It's called Home Lab to Production. It's, I think it's 5 p.m. So one of the last ones. We'll have a lot of fun.
I'm bringing my home lab to this conference. So we will have fun.
I almost commented on that. It's not quite a home lab. It's more of a mobile lab.
It is a mobile lab, but I will have a router with me. So it will be both the actual device and the router. And yeah, we'll have some fun. Now, are you bringing two of them with you, or just one? The device, the homelab, plus the router. So two devices. Okay. I want two of everything. Yes. Well, we are going into production.
So we're going to take all the workloads from the Homelab and we're going to ship them into production. During the talk, we're going to see how they work. We're going to use Talos, Kubernetes, Dagger is going to be there. So yeah, we'll have some fun. So this is a live demo then, basically. It's a live, yes.
Well, it's recorded because, you know, I want to make sure that things will work, but I will have the devices there with me. You never know what Wi-Fi is like. And that's the one thing which I don't want to risk.
Yeah, you can never.
Even like 4G, 5G, even mobile networks are sometimes unreliable. But I'm looking forward to that. So that's like, and it will be a recorded talk as well. So yeah.
Well, that's good, because TalosCon is on-prem, free, and co-located with SRE Day. However, it's also over with. By the time this ships, it'll be two days in the past. And so happy to hear, Gerhard, that there'll be a video, because certainly our listener will want to see what you're up to, and it's in the past
tense, so there you go. And guess what? I'm going to be recording myself as well. Okay, what are you holding up there? I'm holding a Rode Pro. Do you know the Rode Pros? Like the mini recording microphones?
Yeah. You can like clip them to your shirt. Something like that.
Exactly. So I have two of those. Boom. And two cameras. I'll take them with me. They're 361. So I'll be recording like the whole talk and then editing and publishing it. So that's the plan. Cool. So whatever the conference does, great. But I also want to do my own.
Yeah.
So that's the plan. Full control.
Indeed. Awesome. Well, great conversation. Good progress. This session, what do you call it? This Kaizen. This Kaizen, yes.
What do we want to accomplish for the next one? Are we on the right trajectory? Like in terms of the things that we talked about, in terms of what we think is coming next, did we miss anything? It'll be Christmas or just before Christmas.
I think the Just stuff with the database and branching, with Jared being able to pull that down, would be a small but big win. Okay. I think, you know, continued progress, obviously, on the Pipe Dream. Pipely.tech.
Pipely.tech. I like it. Did you buy the domain?
No, but it's available.
Not available?
It is available for $10.
Pipely.tech. I don't know. I think we've got to get Pipe.ly. Otherwise, we're just posers.
But I like pipely.tech as well.
So we might have to raise some money for this if we're going to have to buy pipe.ly. We might need 50 grand. The future's coming, and we're going there. Kaizen. Kaizen. Bye, friends. What do you think about our pipe dream? Should we turn it into a pipe reality? A pipe-ly, if you will? Let us know in Zulip. Yes, we are hanging out in Zulip now. It's so cool how we have it set up.
Each podcast gets a channel and each episode becomes a topic. This is great because you no longer have to guess where to share your thoughts about a show. Even if you listen to an episode way later than everybody else, just find its topic and strike the conversation back up. There's a link in our show notes to join Changelog's Zulip. What are you waiting for? An engraved invitation?
Hey, it's still September, which means we're still trading free Changelog sticker packs for thoughtful five-star reviews and blog posts about our pods. Just send proof of your review to stickers at changelog.com along with your mailing address and we'll ship the goods directly to your mailbox anywhere in the world. Let's do this.
Thanks once again to our partners at Fly.io, to our beat freak in residence, the GOAT, BMC, and to our longtime sponsors at Sentry. Use code CHANGELOG when you sign up for the team plan and save yourself $100. That's almost four months free. Next week on The Changelog: news on Monday, Ryan Dahl talking Deno 2 on Wednesday, and a fresh episode of Changelog and Friends on Friday.
Have a great weekend. Leave us five-star reviews if you want some stickers, and let's talk again real soon.