Abi Noda, co-founder and CEO at DX, joins the show to talk through data shared from the Stack Overflow 2024 Developer Survey, why devs are really unhappy, and what they're doing at DX to help orgs and teams understand the metrics behind their developers' happiness and productivity.
Welcome to Changelog & Friends, our weekly talk show about happy devs. Just a little happy dev, a little happy dev over there. Big thank you to our friends and our partners at Fly.io. That is the public cloud that helps productive developers ship. Learn more at fly.io. Okay, let's get happy. Hey, friends. I'm here with Dave Rosenthal, CTO of Sentry.
So, Dave, I know lots of developers know about Sentry, know about the platform because, hey, we use Sentry and we love Sentry. And I know tracing is one of the next big frontiers for Sentry. Why add tracing to the platform? Why tracing and why now?
When we first launched the ability to collect tracing data, we were really emphasizing the performance aspect of that, the kind of application performance monitoring aspect, you know, because you have these things that are spans that measure how long something takes.
And so the natural thing is to try to graph their durations and think about their durations and, you know, warn somebody if the durations are getting too long. But what we've realized is that the performance stuff ends up being just a bunch of gauges to look at. And it's not super actionable.
Sentry is all about this notion of debuggability and actually making it easier to fix the problem, not just sort of giving you more gauges. A lot of what we're trying to do now is focus a little bit less on just the performance monitoring side of things and turn tracing into a tool that actually aids the debuggability of problems.
I love it. Okay, so they mean it when they say code breaks. Fix it faster with Sentry. More than 100,000 growing teams use Sentry to find problems fast, and you can too. Learn more at Sentry.io. That's S-E-N-T-R-Y.io. And use our code CHANGELOG. Get $100 off the team plan. That's almost four months free for you to try out Sentry. Once again, Sentry.io. Well, developers are unhappy.
That's the sentiment, right? That is the sentiment. Why? Are you happy, Jared? You're a developer, right? Are you in the 80% rule or are you in the 20% rule? It depends on the minute of the particular day.
Okay. Whether or not I'm happy or unhappy. It's a fleeting thing, happiness. Am I satisfied in my work? Yes. Do I always think that? No. Am I a typical developer? Probably not anymore. We've been podcasters now for a long time. And so I don't hold a nine to five software job, which is probably the people mostly who are being interviewed or surveyed. I wasn't in that survey.
So my sentiment was not in there. No, I did not take the Stack Overflow. We have Abi Noda here with us from DX. Abi, did you take the Stack Overflow survey?
I did not, but I've definitely been looking at the results.
Interesting results. 80% is a large number. I mean, that's an overwhelming number, and it's not a small survey. Pareto's principle says 80-20.
80-20. That's the big principle. Right. Apparently it's true in regards to developers' happiness.
Or lack thereof, I guess. That would be the 20. The happy would be the 20, and the unhappy would be the 80. What's interesting about this, so this came out, as we said, from the 2024 Stack Overflow survey results synthesized by ShiftMag.
So shout out to Anastasia Uspensky at ShiftMag for really highlighting this particular point and pulling together a few other data points to try to figure out why. Why are they unhappy? And so you might think, well, it's the AI. The AI is taking away our joy. That doesn't seem like that's the case. At least that's what her conclusion is. It's not the AI.
The AI is making us slightly more productive and maybe a little bit more apprehensive about the future. But currently, I think developers who are in their seats... writing code know that at least today they aren't being replaced in large swaths by AI. So if you're a good software developer today, you're not too worried about that, at least not in the present.
And it's not the stuff they're working on necessarily, but it is other things. Other things like tech debt and complexity. And so that kind of comes out in all kinds of different ways. But that was her finding. Abi, do you have, you run a survey company, right? You guys create surveys for folks.
He said it, Abi. He called you a survey company. Sorry, is that reductive? That's a jab. I'm just kidding. I don't know.
I don't mean to reduce. I don't mean to reduce.
I'm just throwing some jokes out there. It's friends. You got to do it, you know? No, it's fair. It's fair.
I don't mean to reduce, but you all help people do surveys. Yeah. Just curious your thoughts as we kick into this topic.
We help people do surveys and collect other types of data on their developers. Just to clarify.
Good job, Abi.
Yeah. The survey is really interesting. One of the first things that came to mind when I read that headline, that 80% of developers being unhappy, was something we see across organizations we work with. Something a little bit similar, we track something around that we call attrition risk. So what is the likelihood of a developer actually leaving a company in the next 12 months?
And that number typically hovers around 10 to 15%. Okay. And so one of the first things that came to mind, what are the implications of 80% of developers being unhappy, right? If only 15% of them are actually going to leave the company, right? And
That amounts to a lot of unhappy employees who are not doing their best work, who are probably not clocking in the 40, 50 hours that we're hoping for, who may be phoning it in a little bit. So that was interesting to me, just reading that headline.
Right. I always go bigger when I see something like an industry like software development, and I start thinking, and we don't have answers necessarily, but I start wondering, like, well, how many workers are happy? You know, just in general. Like, is 80% ridiculously large? It is in absolute terms, right? It's four out of five. That's a large percentage.
But if we compared it to some other industry, right. You know, medical workers, teachers, plumbers, pick your industry. Would they be at like maybe 75% or 85% or are they down there in the 40s and 50s? And we're way out of line. That's the question that I usually ask and I don't have the answer ever. So I kind of just twiddle my thumbs and move on.
I was thinking that too, the macro versus the micro. What's the devs versus the world aspect of this? Because I would imagine that medical workers, as an assumption, are generally, especially since the pandemic, more likely to be unhappy for obvious reasons. A lot of pressure put on them, a lot of change. I think a lot of bureaucracy, a lot of things in that system. Plumbers, I'm not so sure.
Plumbers, if you're an indie plumber, you're probably pretty happy. Plumbers make pretty good money.
Yeah.
And they generally call their own shots. Kind of hard to replace. Call them in a pinch. It's like, hey, listen, I got water on my floor, man. You got to come help me out here. Right. And they jump on it. And they're like, hey, 500 bucks. Thank you very much. Right. All you did was turn the nut. Come on now.
Yeah. It's a relatively stable industry. I mean, you're always going to have people with plumbing, new plumbing, plumbing problems, et cetera. So it's not as much affected by perhaps the Federal Reserve like we are. The medical industry went through a huge swell, of course, during COVID, where there was just so many needs for medical workers that their salaries went through the roof.
They were in huge demand. Of course, they worked ridiculously long and
trying hours, and so that was probably not producing happiness, but the pay was really, really good. And now, coming down on the other side of it, it's similar to the software world, where it's like demand is waning, jobs are harder to find, you may go unemployed for a while. And so there's probably a similar chart if you were to chart overall demand.
Teachers, though, is a good one to cross-examine on that. Teachers are never happy, are they? I mean, they're so under-resourced. They're struggling. I just feel like most teachers probably would love to be happier.
I don't know. What's funny, though, is I wonder how you can Venn diagram happiness with job happiness. Because I meet a lot of teachers that are very happy, very joyful, very purposeful, serving, loving people, and happy in life. But then you say, are you happy with your job?
And I wonder now if we zoom out to this happiness, unhappiness level with devs even, because some of the findings said that code was not what made developers unhappy, because most of them are doing things on the side, either through learning or for career development, things like that. I just wonder how much is it job unhappiness? Is it unhappiness generally?
Because a lot of people, especially in the United States, that's where my lens is, that's where I live, are generally unhappy, like with a lot of things. So does that spill over, trickle over?
Yeah. I think it does. This particular question was specific to like, are you happy with your job? And so that is the context that we're talking about. But of course, nobody just draws a wall up around their job. And like, as they walk through the door to work, all of a sudden they're like this different feeling person. These things do affect each other.
It's interesting. Yeah, the question was, how satisfied are you in your current professional developer role?
Mm-hmm.
And the options were not happy at work, complacent at work, and happy at work. So actually, of that 80% who are reported to be unhappy, 47% are complacent. So they didn't say they were unhappy. They said, meh.
They said, meh. So what this number is, is you take the happy people, and it's 20%. And then there's two categories that make up the 80%. And a large part of that is not like, I hate my job. They're just like, you know, it's a job, which isn't all that bad. I mean, a job is a job because it's work.
I mean, it's not, I know we have a culture and of course the desire to like follow your passion and do what you love and all of these things. But that's the few and the proud usually who can actually do that. You know, it's not very many of us.
can do what we love all the time, and it feels like, I would do this if I wasn't getting paid for it. Like, that's not the normal. And so just being kind of meh with your job, it could be worse, right? Maybe worse. What I think is really interesting is the why, right? So why are developers satisfied or unsatisfied in their jobs? And I think
The first images that pop into our minds might be pay or managers or layoffs or AI. But if I'm reading this correctly, the top contributors to satisfaction are actually the developer experience, the technical factors, right? The tooling, the complexity of the systems and the code base. Am I reading that correctly? Is that how you guys read it?
That's how I read it as well. Yes. Technical debt and complexity are the two driving factors to this unhappiness, which effectively is developer experience. I mean, it's your work. And how did we get there? I think it's just like two decades of move fast and break things, isn't it? I mean, isn't that just kind of how we've gotten here? That's my best guess.
Maybe. Yeah. Two decades of move fast, break things, hire a lot of people, churn a lot of people. Churn. Reorg many times. And now everything's a mess.
Right. So in this tracking that you do with regard to attrition, 10 to 15 percent, is that what you said? Yeah. In the next 12 months, likely to move somewhere else. Do you also get qualitative information about, like, why? Like, why are they moving on? Is it similar things?
Yeah. Yeah. So we're focused on measuring the developer experience. A lot of the things listed here, you know, difficulty of understanding code, or developer environments, CI/CD, strategy on the team, a lot of these things are aspects of the developer experience we measure for lots of different companies. And then we correlate the two.
So we correlate these different aspects or facets of developer experience against who's at risk of leaving and who's actually left. And our data actually aligns quite a bit with what I'm looking at here with the Stack Overflow report. Yeah, the difficulty of doing work as a developer seems to be the preeminent cause of regrettable attrition for companies.
Not pay, not liking your manager, not stock compensation. It often is just the difficulty of actually doing work.
Right, which can manifest in technical issues, but also bureaucratic issues.
Makes it harder to be productive.
Right. You're feeling like you're not getting anything done, or you're constantly, like, working Jira tickets, and you come in in the morning and you've got 20 open tickets, and you work eight hours and you sweat and you bleed and you leave and you've got 22 open tickets, and you're like, I'm never going to get myself up from here into a place of progress.
You just feel like maintenance, maintenance is all it is. And I can see how that would be demoralizing, especially over time.
Yeah, when it's not getting better. I mean, it's demoralizing for developers. It's also demoralizing for leaders I talk to who run, are getting this type of data at their companies and quarter after quarter, despite making efforts to make improvements around this stuff, the data keeps coming back that things are slow. People are frustrated. It's hard to get work done.
So I guess the question is, how do you solve that problem? Is it the organization's problem? Is it the leadership's problem? Is it the product's problem? Is it the market's problem? Because I think a lot of that complexity comes from the fact that solving software problems is hard generally. Being blocked is very common. Having to help others level up or answer questions is very common.
And that's going to be pretty much a thing. I guess potentially, if AI starts to solve some of this for us, that gets to be reduced some, this blockage, so to speak. Spending time looking for answers, spending time answering questions or repeating answers for people. The blockage that comes from the lack of awareness of where to go next and be productive.
At GitHub, I think I told this story to you before, Adam. Tell it again. At GitHub, we had a lot of these problems. Developer tooling, getting releases out, the builds, developer environments. People were leaving, and they were telling leaders that they were leaving because it was hard to get things done at GitHub. This is back in 2020, if I'm getting my years right. Well, hindsight's 2020.
Mm-hmm. And what we ended up doing, we froze features for a quarter. All of GitHub engineering, no features. Whole quarter spent fixing these problems. It was dramatic, right? And things got a lot better as a result. Yeah. Another example is actually Atlassian. Their CTO is very public about how they're focusing on developer productivity, developer experience.
And at Atlassian, not only do they have a pretty substantial portion of their engineering organization that is devoted to this type of stuff, but they give all product engineering teams at Atlassian 10% of their time to be spent fixing things, as they call it, fixing things that suck, right? That get in the way of...
So, but to answer your question, Adam, what do we do about it and what's preventing us from doing things about it? I think it actually boils down to the fact that you see a survey like this from Stack Overflow, right? People are unhappy, you know, it's because of technical debt and the developer experience issues.
But to actually do something about it as a business, you have to be able to calculate that the cost of doing something about it is outweighed by the return on investment you're going to get after you do something about it. And I think that's a really hard problem right now. No one knows how to actually quantify this thing.
set of things, the developer experience, right? And it's something that you can take to the CFO or CEO and say, hey, we're slow because of X, Y, and Z, and if we fix X, Y, and Z, we'll be this much faster, and it'll be worth it. That's the hard problem. So no one can make the case for doing something about a lot of this stuff, because you can't talk about it in terms of dollars.
Yeah, I think this speaks to our technical debt metaphor and some of the argumentation we've had on this show with friends about is that a good metaphor or not? Because you can't really quantify it like you can actual debt. You can take your debt and your debt service principle and interest. You can take your interest rate.
And you can put that on a chart and you can extrapolate it and say, look, if we don't pay this debt down now, maybe not if you're the United States government, but if you're like an actual business, you can say, if we don't pay this debt down, we're going to go bankrupt in 90 days. And that convinces leadership to be like, okay, it's worth it.
But when it comes to technical debt, we lack that quantitative ability to extrapolate forward and say, we're going this slow right now. If we don't dramatically change things, start paying this down. we're going to grind to a halt in 90 days.
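To make Jared's contrast concrete: financial debt extrapolates mechanically, which is exactly what technical debt lacks. Here is a minimal sketch of that extrapolation; the balance, rate, and payment figures are invented for illustration.

```python
# Financial debt is mechanically extrapolatable: given a balance,
# an interest rate, and a payment, the trajectory is pure arithmetic.
def project_debt(balance, monthly_rate, payment, months):
    """Return the projected balance at the end of each month."""
    history = []
    for _ in range(months):
        balance = balance * (1 + monthly_rate) - payment
        history.append(round(balance, 2))
    return history

# A business servicing $100k at 2%/month while paying only $1k/month
# can see exactly when the debt outruns them. Technical debt has no
# equivalent chart to put in front of leadership.
trajectory = project_debt(100_000, 0.02, 1_000, 12)
print(trajectory[-1])  # balance keeps growing: payments < interest
```

There is no agreed-upon "interest rate" for a messy codebase, which is why the same argument is so much harder to make for tech debt.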
What has been the happiness level of past surveys? Can we just use Stack Overflow surveys as an example? Because that's what we're lensing off of anyways, if that's a correct adjective or verb. Has the unhappiness changed dramatically from 40, 50 to now 80%? Has it always been 80%? Is that maybe a good baseline? Like mostly people are unhappy.
That's a good thing, because the reason why I ask that question is because innovation comes from angst. Unhappiness is a version of angst, right? And so you can only innovate and change if you have angst, at least to some degree.
I mean, it's one place. Like greed also drives innovation, right? Like I want to make money. Of course. I need to invent something to make more money.
But the greed may be causing the angst that the developers bear. So, I mean, you know, they're in the same bucket, basically. The angst is there. Therefore, developers push, organizations push, products change, innovation happens. You know, the new Amazon occurs. Yeah. Because I don't have past Stack Overflow survey data. Do you, Abi? Do you, Abi? I don't. No.
Dang.
Someone in the audience is like, I've got it, but I can't talk to you.
The way they ask that question, I wonder if that's how they've asked it before. I'd be curious if you could...
Can you restate that question? Because I think that's a good point, is the question and the only answers. It's multiple choice. This is not an open-ended question of why. This is a scoped response. And so the 80% is extrapolated from that scoped response. Can you restate the question and the options?
Yeah. So, how satisfied are you in your current professional developer role? Not happy at work, complacent at work, happy at work. As someone who spends a lot of time on survey design, I do see a few potential issues. So they're asking about satisfaction, but then the responses are about happiness, and satisfaction and happiness are really different constructs.
And that middle option is complacent. So it's not happy, complacent, or happy. But complacency, it's not really the perfect middle between not happy and happy. I think it captures the essence of in between not happy and happy, but it's not necessarily the perfect middle. So it's an interesting way that they've asked the question, because is it measuring job satisfaction or happiness?
Happiness, I think, is... really hard to actually measure. So I think that's why they worded it around satisfaction. You know, happiness, there's a lot of literature about how to actually measure happiness. There's entire fields where they've spent years trying to figure out how to measure happiness and what happiness is. And usually happiness is the sum of moments of feeling happy, right?
Like, if you took the day and divided it up into however many minutes, in each minute, or how many times throughout the day, did you have a feeling of happiness as opposed to non-happiness? And that's kind of how you get to how happy you are, right? Rather than a point-in-time reflection of happiness, which is pretty difficult. Yeah.
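What Abi is describing is essentially experience sampling: happiness measured as the fraction of sampled moments that were happy, rather than one retrospective judgment. A toy sketch, with entirely invented sample data:

```python
# Experience sampling: ping someone at intervals through the day and
# ask "happy right now?" Overall happiness is the fraction of happy
# moments, not a single end-of-day reflection.
samples = [True, True, False, True, False, True, True, False, True, True]

happiness = sum(samples) / len(samples)
print(f"happy in {happiness:.0%} of sampled moments")  # 70%
```

A single "are you satisfied?" question, like Stack Overflow's, collapses all of that into one point-in-time answer, which is part of why the two constructs diverge.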
Anyways, I'm nerding out here a little bit on the survey design.
Well, I like that, because you have experience that we don't have in trying to craft those so that they are optimal, because you only have so much time and so many opportunities to poll somebody for their thoughts. And if you poll them incorrectly by accident, then you're kind of wasting everybody's time. Let me rant for a split second here about Stack Overflow and URLs, okay?
So first of all, I appreciate you all doing the survey. No real hate here, but... I was trying to answer Adam's question, which was, do we have past year's results? So I went to this year's results, survey.stackoverflow.co slash 2024 slash professional developers, found the link to that survey question. And then I went to the URL bar and I changed the year from 2024 to 2023. 404, page not found.
I mean, come on, people. Respect the URL structure. This is like what address bar hacking is all about. Come on. Help us get to things in a way that makes sense. I just appreciate good URLs, and that's not a good one. Anyways, that was my mini rant because I was going to have answers for you, Adam. I was going to have last year's answer to this question.
I wanted you to have answers so bad. I don't have it, man. I don't have it. Maybe ChatGPT or something else might have a hallucinated version of an answer. The reason I bring up you nerding out on that and camping out on the semantics of the question and the response is because it corners the person. It forces the person at response time. At the same time, there are probably lots of questions in the survey, so they could be experiencing cognitive overload while at that particular question, while also being slightly unhappy for their day. They may have measured their happiness moments in that day and be like, you know what, I'm unhappy. Not saying it's skewed, but it's important to scrutinize the question and the offered options as a response.
Because that is what the sentiment is drawn from. And so if it's skewed, or not so much skewed but poorly worded, I would prefer you to say that than me, because you're the professional at crafting these, quote unquote, surveys. Just kidding. Poorly designed. Just kidding. Yeah. You know, because that's really important, right?
The way you ask a question, the options you offer, that's where the sentiment comes from. And if it is ambiguous or not super clear, it's clear why the answer is potentially skewed. And so understanding the efficacy of the answer set based on the question, I think that's what's worth scrutinizing.
All right, real-time follow-up. I used their user interface to find last year's results. And as far as I can tell, they did not ask this question in 2023. Maybe it was just one year they didn't, but we did not have last year's answer to this particular question. That's not why it 404'd, okay?
They still changed their URL structure, but had it stayed the same, it still would have 404'd because they didn't ask that question. So unfortunately, we can't really go back and say, You know, which way is it trending? Or is this an anomaly or anything like that?
Okay.
I want to talk about this idea of how can people talk about these problems in terms of dollars?
I would love to hear this, yes.
Have you guys read the book How to Measure Anything?
No.
No.
The topic is, as the title implies, how to measure anything, and in particular, how to measure things that are seemingly unmeasurable. So when we talk about what is the dollar ROI or interest rate or cost of technical debt and poor developer experience, just a few minutes ago we were essentially calling that unmeasurable. Right.
In this book, they talk about how, if you want to measure something in terms of dollars or cost, you can really do that with anything. As long as you take something intangible, like technical debt or developer experience, and then correlate it to something objective or something monetary.
So an example of this would be the DORA metrics in the DORA report, which I know you guys have followed. So what they essentially did is say- Give us a primer for those who aren't caught up on that.
What is DORA?
So DORA is DevOps Research and Assessment. But since, I want to say, 2013 maybe, they've been publishing an annual report on the state of DevOps. And right now, we're talking about tech debt and developer experience. But eight years ago, people were talking about DevOps and, hey, what is the ROI of investing in DevOps? So it's the same problem. History repeats itself.
And what they did was they said, here are some ways we can measure DevOps. So it was like metrics like MTTR and lead time, deployment frequency. So they said, here's DevOps. And what they did is correlate it to companies' profitability, stock performance.
returns and increases, you know, eNPS scores. And by doing that, they were able to, quote unquote, prove and show the dollar ROI. Hey, companies, when they invest in DevOps and get X percentage better, their stock prices tend to be X percentage higher, right? They tend to be X percentage more profitable. And so it wasn't perfect. Yeah, that seems a little bit brittle to me.
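The DORA-style move Abi describes, taking a measurable proxy and correlating it with a business outcome across companies, can be sketched in a few lines. All the numbers below are invented, and the Pearson coefficient is written out by hand rather than pulled from a stats library:

```python
import math

# Toy per-company data: deployment frequency (deploys/week) and
# profitability (margin %). Entirely invented for illustration.
deploy_freq = [1, 3, 5, 10, 20, 40]
margin_pct  = [4, 6, 5, 9, 12, 15]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(deploy_freq, margin_pct)
print(f"correlation: {r:.2f}")  # strongly positive on this toy data
```

The brittleness Jared flags lives outside the math: correlation across companies says nothing about what else moves stock prices, which is why the next approach ties the metric to something closer to dollars.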
Let me tell you what we're doing with developer experience. So we have this construct of what is developer experience. So we have our version of what Stack Overflow has here, where we have, it's called the developer experience index. So it's 14 of the top drivers of developer experience. So we say, okay, that's how we measure developer experience.
Then what we've been doing is correlating that measure to different outcomes. And one of them is actually self-reported time waste reported by developers. So it's a series of different questions we ask: how much time do you lose each week? How much time is wasted each week due to inefficiencies?
And when we correlate the two, we found that a one-point increase in the developer experience index score, which is the average of these 14 different areas of developer experience... a one-point increase in developer experience translates to almost a one-percent decrease in time wasted. And so, again, this isn't perfect.
You could call this brittle as well.
Well, I think it's a little bit better because you're directly asking the people.
Yeah, and it's more direct to dollars. It's not like stock price, which is a little bit of a leap, right? A lot of things, so many things affect the stock price. So, you know, using this approach, we can, folks can say, hey, if we improve developer experience by X points, that translates to X percentage reduction in waste, which translates to X amount of dollars, right?
So that's how we're approaching it right now.
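The chain Abi lays out, DXI points to time waste to dollars, is simple enough to sketch as arithmetic. Every input here is a placeholder, including the roughly one-percent-of-time-per-DXI-point figure he cites:

```python
def dxi_savings(num_devs, fully_loaded_cost, dxi_point_gain,
                waste_pct_per_point=0.01):
    """Estimate annual dollars recovered from a DXI improvement.

    Uses the rule of thumb from the conversation: ~1% less time
    wasted per one-point DXI increase. All inputs are assumptions,
    not figures from DX's actual model.
    """
    recovered_fraction = dxi_point_gain * waste_pct_per_point
    return num_devs * fully_loaded_cost * recovered_fraction

# 500 developers at a $200k fully loaded cost, improving DXI 5 points:
savings = dxi_savings(500, 200_000, 5)
print(f"${savings:,.0f} per year")  # $5,000,000 per year
```

This is the "five minute executive summary" version: the point is not precision but that each link in the chain is a number a CFO can argue with.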
I like that approach. I think that time waste is reported by the actual people wasting the time. And so it's probably relatively reliable. Of course, there's always trolls and thoughtless respondents, but you can't get around that.
Or estimators. Yeah.
Yeah, I'm totally just being inefficient because of all these other things. It's not me. It's you. There's that. I mean, you can't really maybe you just account for that in your numbers. But yeah, if you are saying technical debt, complexity, bureaucracy, whatever it is, all these factors. Ultimately, for the business, are costing money, slowing things down.
Wasting time is really a decent measure for that, like how much time is actually being wasted. And so if you can track that against this DXI, what's this thing called, the DX Index?
DXI, yeah, DXI, Developer Experience Index.
At the same time, I don't know, it seems like a pretty decent approach. Is that bearing fruit?
It is. Yeah, I mean, we can't think of any other way to do it. I think the feedback we get is, this is great. If we can make this an industry standard, then my CEO is going to buy it. But there's still an education and... you know, marketing gap there where folks, what I just explained to you, it's hard to get that across in like a five minute executive summary. Right.
Right. I think you should have a, do you do an annual or semi-annual survey for, to the public? Like Stack Overflow does?
No, we should. I mean, we already have the data because we are already surveying hundreds of thousands of people. Right. Yeah.
The nice thing about this particular measure, or this combination of measures, is that if it could be somewhat generalized and made public, it's now a tool and a resource for people who don't have those quantitative metrics inside their company to say, look, this stuff really matters. Look what it did for Walmart and, you know, these important companies. It's moving their bottom line.
It's making them more productive, and they've proved it out over N years. And so if that's public information that I can take to my leadership and use that to convince them that, hey, let's call a feature freeze or whatever it is that I'm trying to get done. Right.
Yeah.
Hey friends, you know we're big fans of fly.io and I'm here with Kurt Mackey, co-founder and CEO of Fly. Kurt, we've had some conversations and I've heard you say that public clouds suck. What is your personal lens into public clouds sucking and how does Fly not suck?
All right, so public clouds suck. I actually think most ways of hosting stuff on the internet sucks. And I have a lot of theories about why this is, but it almost doesn't matter. The reality is if I've built a new app for generating sandwich recipes, because my family's just into specific types of sandwiches. They use Braunschweiger as a component, for example.
And then I want to like put that somewhere. You go to AWS and it's harder than just going and getting like a dedicated server from Hetzner. It's like it's actually like more complicated to figure out how to deploy my dumb sandwich app on top of AWS because it's not built for me as a developer to be productive with. It's built for other people.
It's built for platform teams to kind of build the infrastructure of their dreams and hopefully create a new UX that's useful for the developers that they work with. And again, I feel like every time I talk about this, it's like I'm just too impatient. I don't particularly want to go figure so many things out purely to put my Sandwich app in front of people.
And I don't particularly want to have to go talk to a platform team once my sandwich app becomes a huge startup and IPOs and I have to do a deploy. I kind of feel like all that stuff should just work for me without me having to go ask permission or talk to anyone else. And so this has informed a lot of how we built Fly. Like, we're still a public cloud.
We still have a lot of very similar low-level primitives as the bigger guys. But in general, they're designed to be used directly by developers. They're not built for a platform team to kind of cobble together. They're designed to be useful quickly for developers.
One of the ways we've thought about this is if you can turn a very difficult problem into a two-hour problem, people will build much more interesting types of apps. And so this is why we've done things like made it easy to run an app multi-region. Most companies don't run multi-region apps on public clouds because it's functionally impossible to do without a huge amount of upfront effort.
It's why we've made things like the virtual machine primitives behind just a simple API. Most people don't do code sandboxing or their own virtualization because it's just not really easy. There's just no path to that on top of the clouds. So in general, I feel like, and it's not really fair of me to say public clouds suck because they were built for a different time.
If you build one of these things starting in 2007, the world's very different than it is right now.
And so a lot of what I'm saying, I think, is that public clouds are kind of old and there's a new version of public clouds that we should all be building on top of that are definitely gonna make me as a developer much happier than I was like five or six years ago when I was kind of stuck in this quagmire.
So AWS was built for a different era, a different cloud era. And Fly, a public cloud, yes, but a public cloud built for developers who ship. That's the difference. And we here at Changelog are developers who ship. So you should trust us. Try out Fly. Fly.io. Over 3 million apps, that includes us, have launched on Fly.io.
They leverage the global anycast load balancing, the zero-config private networking, hardware isolation, instant WireGuard VPN connections, with push-button deployments scaling to thousands of instances. This is the cloud you want. Check it out, fly.io. Again, fly.io. And I'm also here with Kyle Carberry, co-founder and CTO over at Coder.com. And they pair well with fly.io.
Coder is an open source cloud development environment, a CDE. You can host this in your cloud or on-premise. So Kyle, walk me through the process. A CDE lets developers put their development environment in the cloud. They get an invite from their platform team to join their Coder instance. They've got to sign in, set up their keys, set up their code editor. How's it work?
Step one for them, we try to make it remarkably easy for the dev. We never gate any features ever for the developer. They'll click that link that their platform team sends out. They'll sign in with OIDC or Google, and they'll really just press one button to create a development environment. Now that might provision like a Kubernetes pod or an AWS VM.
You know, we'll show the user what's provisioned, but they don't really have to care. From that point, you'll see a couple of buttons appear to open the editors that you're used to, like VS Code Desktop or, you know, VS Code through the web. Or you can install our CLI. Through our CLI, you really just log into Coder and we take care of everything for you.
When you SSH into a workspace, you don't have to worry about keys. It really just kind of like beautifully, magically works in the background for you and connects you to your workspace. We actually connect peer to peer as well. You know, if the coder server goes down for a second because of an upgrade, you don't have to worry about disconnects. And we always get you the lowest latency possible.
One of our core values is we'll never be slower than SSH, period, full stop. And so we connect you peer to peer, directly to the workspace. So it feels just as native as it possibly could.
Very cool. Thank you, Kyle. Well, friends, it might be time to consider a cloud development environment, a CDE. And open source is awesome. And Coder is fully open source. You can go to Coder.com right now, install Coder open source, start a premium trial, or get a demo. For me, my first step, I installed it on my Proxmox box and played with it. It was so cool. I loved it. Again, Coder.com.
That's C-O-D-E-R.com. My idea for you, Abi, is a growth hack. Let's hear it. When you do this, it would make sense if I were you. This is probably how I would at least consider it. This is not a perfect one-to-one plan, but we're going to solve it. This Stack Overflow survey is obviously popular. We're talking about it. The results are shared and examined and analyzed by many.
It's respected, right? It's got a core audience. If you have similar data... I would release whatever you're doing or whatever data you can do around the time of this survey's announcement and to some degree Venn diagram. Number one, you associate the brand for DX with a very beloved, mostly beloved brand, Stack Overflow. Some love, some hate.
I thought you were talking about WordPress.
Oh, yeah. Yeah, exactly.
Not yet.
Not yet. And then you can draw correlations between the questions and the data they siphon from that question. Yeah. And then the question and/or data set that you have that correlates and Venn diagrams across the two. One, to keep them honest, and not so much that they're not honest, but to keep this survey, which all surveys have an... optimization opportunity, right?
We just talked about it, right? There's no perfect survey. And I think you almost better the entire community, because you give not one data set, but two data sets. So how true is this? Your findings and cross-examination and Venn diagram may say, well, this is actually pretty close to true, because we have corollary data and we can corroborate these findings.
Second, you get to feature things like DXI, and you get to have an opportunity where now way more people know about DX and find the benefit and/or interest in your beliefs, which is this DXI index being such a core thing to you all. So, that's the idea. How do you like it?
I like it a lot. I'm writing it down. I like the cross-examination piece.
Yeah, I do too. And then I would say the second thing, and maybe this gives a foundation to a foundation, which is in order to have an organization support or adopt this DXI, this developer experience index, what do they need to have in place to get to that point? Like what is a mature data-driven platform?
organization look like that has the ability to actually index this index and have that for themselves, right? Yeah, I love it. That's a question. That was a question. Oh, I actually like the question, but that was also a question. Yeah. I mean, what's the foundation to get to this point, to have the 14 metrics, right? Isn't it 14 metrics? Yeah, 14 different metrics. So, we haven't open sourced it. Right now it's proprietary. Man. Yeah, I mean, that's actually
One of the biggest strategic questions I've been wrestling with for over a year now. Do we open it up or do we keep it proprietary?
Well, when you have the opportunity to become the index, I think, I mean, obviously, you know way more about your business than I do and how important it is internally as a proprietary thing. But I can see huge upside in the open sourcing of it.
Yeah. Yeah. Agreed. I think we can open source it while putting, like, a copyright on it, so you can't necessarily, you know, you're not technically supposed to use it commercially. Like, Walmart can't actually deploy it.
Right. Yeah. I mean, we can get into the licensing discussions and we're happy to, we do it all the time. Yeah. And depending on what it is, open source might not even be the right thing.
word, right? Like maybe it's Creative Commons, maybe it's... Right. But you could still hold trademark and copyright against it. Like, DXI could be a trademarked thing, but also how you go about doing it and how others can go about doing it, you can just let that stuff loose. You don't let go of the copyright, but you just let other people use it. So... And you can't call it DXI. It is a product, right? You trademark the DXI, right? That's what you're saying, right? Makes sense.
they can be DXI compatible or, you know, whatever, but that's in the weeds. You were talking about the 14. Yeah.
I mean, they deploy it. You know, we have the survey items, the measurements for those 14, and they deploy our platform, right? That's why it's proprietary, because they can't do it without our product. From you. Yeah, DX. But in terms of who it is, we work with the Pinterests, the Dropboxes, the Netflixes, the bleeding edge tech companies.
And we also work with... I mean, this isn't to diminish these organizations, but companies like Pfizer, P&G, Tesco, Bank of New York, BNY. So I think what we've seen is the DXI works in all kinds of environments, not just the bleeding edge tech companies, but... you know, more legacy traditional enterprises as well.
Sounds pretty cool. So you have found, then, if you have a DXI, which a lot of these companies do via deploying your guys's proprietary platform, and you're tracking time waste, you found that there is an inverse correlation between the two that is measurable and repeatable and reliable. Yeah. That's pretty powerful.
I mean, we all know it's true, but like actually proving that it's true is a whole different thing.
Yeah, I mean, we did a meta-analysis, which we've published, and I'll give you guys a link to that. We actually have developerexperienceindex.com. I have that vanity URL. Nice.
Very nice. So long.
It is a little long. Gosh. But anyways, so, you know, the R value was, I think, like point-something-five. That's a really strong relationship between developer experience and time waste. So then on an individual company basis, we can look at their relationship, right? We just run that correlation for an individual company.
And we always see a moderate to strong relationship at any given organization as well.
And do you find that it follows Pareto's principle as well in terms of effort? Like 20% of your effort gets 80% of your results? Or as you continue to improve your DX, is it trailing off or not?
That's a great question. Like, do we see a different relationship at different bands?
At the edges, right.
Yeah. We haven't studied that, but I will add that to my notes as well. That's really interesting, yeah.
That would be worth knowing. I mean, I think it's logical that that would be the case in almost any effort. At a certain point you're squeezing the radish, you know. But, like, what's the sweet spot for companies, where they can put in this much effort into their developer experience and get that much out? Yeah. I think that would be worth knowing. Absolutely.
Can we dig into these 14 drivers? It is out there. Can we talk about them at least?
Yeah. Did you go to developerexperienceindex.com?
No, I just Googled get DX and then DXI, and it landed me on this page. You can tell me if this is accurate.
Yeah, that's the white paper.
The one number you need to increase ROI per engineer.
Yep.
And about two scrolls down, you dig into figure two, which talks about the drivers and the outcomes.
Yep.
And so I'll do the work for you if you don't mind. The drivers are deep work, dev environment, batch size, local iteration, production debugging, ease of release, incident response, build and test, code review, documentation, code maintainability, change confidence, cross-team collaboration, and planning. Those are the drivers. Those are the 14 dimensions. And those correlate to five outcomes.
Speed, ease of delivery, quality, engagement, and efficiency. Okay. Dude, that's a good map. That's a really good map to maturity in an organization, a dev organization. Like, all those things on the driver's side are really good. Like, what is my maturity level and what is my, I don't know how you would describe it. I'm trying to think on the fly here, but how good am I?
How good are we at these drivers? Yeah. And then the correlations are obviously awesome, like the outcomes, the speed, the ease of delivery, quality, engagement, efficiency. Yeah. But that's a good map. I like that.
It's taken years to arrive at those 14.
You could almost map softwares as a service on top of that sucker. I mean, there's like an offering for each of these. I mean, there's a whole industry around.
Oh, yeah.
Production debugging, incident response, code review, et cetera, et cetera. I just find that interesting.
Which one, which SaaS service correlates to change confidence most? That's one that stands out to me a lot. Change is hard anyways, and the confidence in change. You could be a senior engineer and feel good about it. You could also be a junior engineer and feel good about it. But what gives you the confidence? How do you measure that with a service, a tool, a SaaS?
Yeah, I think change confidence, it's about... It's a lot about how easy it is to actually test a change, like get feedback on a change. So I think everything from cloud dev environments, right? All these things kind of interrelate. So cloud dev environments that allow you to quickly spin up a staging environment for just your change to easily manually test stuff.
Obviously, things like test coverage. Now, you know, AI is coming into the picture there with... helping, you know, write tests or even manually QA, uh, your work.
So, so yeah, change confidence is about, like, when you make a code change, are you kind of YOLO-ing it when you ship it and just hoping it works, or do you actually feel confident when you make a change? It also just has to do with code quality, right? Like, if you make a change in one area of the code, is it a house of cards, with, you know, cascading effects?
So a lot of things go into it, but it's ultimately about, you know, the developers feel like they can actually make changes or are they just, you know, duct taping things and hoping that it works when they deploy it.
Another newish feature of a lot of cloud hosts are preview branches. That's another way where you can get change confidence. Netlify, Vercel, et cetera, they're providing a place where you can have your development branch and it can be constantly publishing to a preview page. on a subdomain, on a website.
And so now you can both look at it yourself in production-ish and then also send it to your QA team or to your boss or whoever, your customer. I think that definitely helps with change confidence because... Previews. Previews are nice. But yeah, there's so many tools that overlap in these things as well.
Documentation, right? So just if you're modifying code, can you even understand how that code works so you're confident in changing that little bit of code, right? So yeah, a lot of things. All these factors interrelate, right? Even something like batch size, which is about incremental delivery. Like if you're working on huge changes, huge PRs, there's so much more risk, right?
You just have a lot more surface area. So if you're delivering incrementally, your confidence is going to be higher for each unit of change, right?
Is this your next big thing, the DXI?
It's been in the works for a while. DXI is one of the big things. The other is the Core 4. If you go to that research tab...
That one rhymes, so you know it's true.
DX Core 4. Yeah. So if you go to that research tab on our website, there's the developer experience index. And above it, the DX Core 4 is something else we've been developing.
And that is speed, effectiveness, quality, and impact, right?
So that's the outcome of this. But the real problem we've been trying to solve is, I think last year I came on here, Adam, and we talked about the DevEx framework I'd published with Nicole Forsgren and others. And so ever since we published that, people have been coming to us and saying, hey, like,
Nicole created DORA, Nicole and Margaret created the SPACE framework, and then you, Nicole, and Margaret created the DevEx framework. We have three frameworks now for telling us what we're supposed to be measuring in our organizations. How do we actually... So, to sum it all up, what should we actually be measuring? And funny enough, I was just talking...
Add one more. Add one more and you got the core four.
This is the one to rule them all. This is the one framework to rule them all, because... Replace the other three. Not replace. This encapsulates all of them. This combines them all into one framework. Because, yeah, everyone would ask us that question. And the way we would always answer that question was, well, it depends.
I was just talking to a CEO at a big tech company who said, I was talking to Nicole. And I asked her, to sum it all up, what should we measure? And she told me, it sort of depends. And I get that it's situationally dependent, but it would be really valuable to have something out of the box and standardized that we could benchmark across other companies and
really have a way forward. And funny enough, I've had that same experience. I was talking to a CTO, actually at Capital One, who asked me, hey, I've been following your research for two years, so just tell me, what should we actually measure? And I said, uh, we can, you know, we can do a consulting engagement on that to, like, figure it out.
But, you know, having a starting point that is, you know, out of the box is really valuable. So that's what the DX Core 4 is.
I love that response. And as a person who has to deliver that response frequently, my next response is always, I have to ask you 20 questions to answer that one question.
Yeah.
So you need to give me more time. If you want my true answer, the only way I can know what to respond with is to ask you several more questions. Yeah. And those questions may lead to even more questions. And so if you trust me, you've been following my data, give me a little bit of your time. And I will answer those questions by asking tons of questions.
The problem is no one has time. Well, they all want the silver bullet, right? Yeah.
Give me a yes or no answer.
Yeah, they want to go to the doctor and get the diagnosis, not go to the doctor and then have 16 follow-up appointments before you get the diagnosis.
And I get that. I mean, if you were a time waster, then that's different. But, like, you can only answer that question from CEOs of top tech companies well if you understand more about the specifics of their business. What are their particular drivers? Not anymore, baby. Now he says Core 4. Core 4. Yeah. Yeah.
I think the Core 4 provides a pretty good answer. I mean, we want people to customize. This isn't, hey, do this and do nothing.
Oh, I like this, actually. Core 2, Core 3, maybe.
But the DX Core 4 does... And now we've started rolling it out to people, and it's landed well. I actually asked the CEO, I showed him the Core 4 right after hearing about that experience they had talking with Nicole. I said, hey, we've been working on something for this. And I asked him, to you as CEO, does this seem right? Does this seem correct?
Does this seem like the right way to be thinking about and measuring developer productivity? And he said, yes.
Wow. I was going to give you an idea, but that might actually be the answer. Because rather than say it depends, what if you said, we have a survey that takes you five minutes to answer. Instead of saying it depends, you say.
It spits it out. It spits out your customized Core 4. Right.
This is your on-ramp, like your specialized, personalized on-ramp. Yeah. Because I'm sure you can take that consulting session and to some degree distill it down into something a CEO who has very limited time can answer in five to 10 minutes, right? Hey, I don't have an answer for you in this moment, but we have a very fast 10 minute or less.
It really could be 15 if you want it to be, but most people it's 10. And if you answer these questions, I'll know exactly how to help you.
That's like, you know, Wealthfront and Betterment, some of those robo investment advisor platforms, right? You go through, like, a three minute wizard, and then they tell you what your investment portfolio should be. So that, I like that.
More ideas for you, Abi. Two you're taking away from here.
Yeah. Yeah. Lots of good ideas.
Write that one down.
I'll write it down. I'll just write it down. Can we dig into these a little bit? Sure. The Core 4: speed, effectiveness, quality, impact. You say those are outcomes? Not necessarily.
Those are the dimensions. Those are the categories. Right. Think of them as your stocks, bonds, cash, right, to use the stock portfolio analogy. You need that balance, because if you only measure speed, everything else goes to crap. Right. Right. Like you're not good.
You're not doing it correctly. There's your move fast and break things right there. Like, we're moving very fast, but we are breaking... not breaking things. Yeah. So, a balanced portfolio. This is a nice metaphor. Yeah. For each of these, you have a key metric, something that you're going to track, and then you have secondary metrics, so there's some balance there as well. But for speed, the key metric is diffs per engineer. Yeah. And I don't know, I might take issue with that one.
Yep.
What? Tell me more.
Yeah. You know, I mean, I was probably last time I was on the show with Adam, I was probably dissing that metric, right? Like PR group, but PR strategy.
Yeah.
I'm a politician, right? You know, I flip flop on the issues. Yeah.
It depends on who you're talking to.
Yeah. But no. So it's been, it's been a journey for, I mean, just for me personally on this topic, because, um, You know, the whole reason I actually got into spending six, seven years on this whole problem space was because I felt like metrics like diffs per engineer were reductive and not correct and not helpful. All right.
But one of the things that the Core 4 optimizes for is... So, we work with a lot of technical leaders, engineering leaders, and engineers. As we were talking about earlier, one of their big challenges is rationalizing investments in developer productivity in a way that the CEO and CFO are going to agree to.
And to do that, you need a shared definition of productivity that your CEO and CFO agree with. And to achieve that, I've found that you do need some type of output measure. We're not at a point in human evolution yet where most CEOs and CFOs are down with the idea that a developer experience index is the one metric that matters for the maturity or effectiveness of the organization.
A lot of CFOs and CEOs still think, I mean, there are Fortune 500 companies still measuring lines of code, benchmarking that, right? So we're still at a point in the state of the art around software engineering where output measures need to be a part of that conversation. It needs to be part of the way you're framing developer experience and developer productivity.
If you want the people you're pitching this to to actually understand it, fund it, believe it, and buy in. So there's a marketability optimization here. That's one of the reasons PRs per engineer is in here. But the other reason is we have come around to... talking with a lot of companies, like Uber, Microsoft, top tech companies, they use this metric as an input. It's not the sole metric, right?
They're not performance evaluating engineers based on this metric, but in aggregate they are looking at this metric as an input to understanding how developer productivity is trending and how it compares to other organizations. And it's not useless, right? It is a useful indicator in aggregate.
And that's why in the framework, in the Core 4, there's an asterisk that says not to measure this at the individual level. So this is only to be looked at at an aggregate team, group, organization level and benchmarked that way. And we've found it more useful than not.
Hmm. So it says diffs per engineer, though. Diffs per engineer, then an asterisk.
Not at the individual level. Right. Well, so, yeah, the metric is normalized. So you're looking at the aggregate divided by the population. But in terms of, like, visualizing or reporting this, you're not looking at a list of people, right? You're looking at teams and organizations. Yeah. Right. I do see the contradiction there, though. Yeah.
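A toy sketch of what "normalized, but never individual" reporting could look like. The record shape, names, and numbers here are invented for illustration, not DX's actual pipeline (the bot exclusion mentioned later in the conversation is included):

```python
from collections import defaultdict

# Hypothetical merged-PR records: (author, team, is_bot).
merged_prs = [
    ("ana", "payments", False), ("ben", "payments", False),
    ("ana", "payments", False), ("dependabot", "payments", True),
    ("cy", "search", False), ("cy", "search", False),
    ("di", "search", False), ("di", "search", False),
]
team_sizes = {"payments": 2, "search": 2}

per_team = defaultdict(int)
for author, team, is_bot in merged_prs:
    if not is_bot:  # bot-authored PRs are excluded
        per_team[team] += 1

# Aggregate divided by the population: report at the team level only,
# never as a ranked list of individuals.
diffs_per_engineer = {t: per_team[t] / n for t, n in team_sizes.items()}
print(diffs_per_engineer)  # {'payments': 1.5, 'search': 2.0}
```

The point of the structure is that no per-person number ever leaves the aggregation step, which is what the asterisk in the framework is guarding against.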
Well, certainly at an individual level, at face value, it seems contradictory, but it does make sense. Yeah. Maybe you could reword that to say like averaged across whatever.
I'll write that down.
Oh, man. So many good notes for you here. Yeah. At an individual level, certainly it's a bad measure. Well, the problem is it becomes a bad measure, right? That's Goodhart's Law. Yeah. Once everybody knows that that's what's being measured. Well, we all know how to play the game.
Yeah.
Same thing with, I mean, it's lines of code moved to a slightly larger batch, you know? Yeah. And so I can criticize that one. I can also criticize lines of code. I can criticize features or tickets. They can all be criticized. But then, at a certain point, you're like, well, what can we actually do, then? Everything sucks. We're going to have to pick one and go with it.
And I guess if the industry is somewhat standardizing around that, then it's a decent compromise.
And I think there's more we can do, right? I was just talking to a company, actually working with a company, a Silicon Valley tech company, and all the other Core 4 metrics were quite a bit below, like, P50, like industry peers, but diffs per engineer was higher.
And this is bad for them because they're trying to show to their executives that they're behind peers so they can get funding to make improvements, right? Sure. So we were just trying to dive into the data. Like, why is your diffs per engineer inflated, even though...
clearly, like, empirically and with the other Core 4 data points, you're not, like, a high performing organization, right? So we couldn't really figure out an answer. I mean, there was a lot of speculation. Like, you know, there's just a higher number of, like, config changes, small PRs that aren't real changes. But, like, every company has that, right? That fuzziness should already be kind of accounted for in our benchmarks. And so that led to this idea, like,
you know, could there be a weighted metric? Because not all diffs, PRs are created equal, like we talked about, right? Some are one minute changes. Some are one line changes that actually took eight hours. Some are, you know, 800 line changes that took two minutes.
Like, how do you... So, you know, if we could apply some kind of weighting, like bucketing all these diffs and PRs, almost the same way we do estimation, like t-shirt sizes or something like that. You know, I was thinking, could we use GenAI, like an LLM, to basically automatically try to categorize, based on the title, the description of the task, and the code changes?
Like, you know, was this a big change or was this actually a small change? And then you could get kind of a weighted number. That would be an improvement to the signal you're getting out of an output measure like this.
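As a rough sketch of the weighting idea: bucket each diff into a t-shirt size and sum weights instead of counting PRs one-for-one. The thresholds and weights below are invented, and the size function is a crude lines-changed heuristic standing in for the LLM classifier Abi speculates about:

```python
# Invented weights for t-shirt buckets.
WEIGHTS = {"S": 0.5, "M": 1.0, "L": 2.0}

def tshirt_size(lines_changed: int, files_changed: int) -> str:
    """Crude stand-in for an LLM/size classifier (thresholds made up)."""
    if lines_changed <= 10 and files_changed <= 2:
        return "S"
    if lines_changed <= 200:
        return "M"
    return "L"

# Hypothetical merged diffs for one team.
prs = [
    {"lines": 3,   "files": 1},   # config tweak    -> S
    {"lines": 120, "files": 6},   # typical feature -> M
    {"lines": 900, "files": 30},  # large refactor  -> L
]

raw_count = len(prs)
weighted = sum(WEIGHTS[tshirt_size(p["lines"], p["files"])] for p in prs)
print(raw_count, weighted)  # 3 3.5
```

A pile of tiny config-only PRs would then inflate the raw count but not the weighted number, which is exactly the discrepancy being chased in this anecdote.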
Yeah, even with a confidence score alongside, that would be really interesting. There are still challenges with that, because the amount of change does not always correlate to the amount of effort. You can work an entire week on finding a bug, and then you found it, and it's a one-character change. And you're so exhausted by then that your commit message says, I fixed it.
Or something, you know. And so, like, the LLM just doesn't have much to work on. Boom. You know, I guess if you can just say, well, it's fuzzy, it's not going to be 100 percent, it's better than merely measuring. So, did you come to... My only guess would be, like, a culture of small diffs, or a culture of... You can't figure it out? Yeah, why are they so much higher on that one metric? You haven't figured it out yet?
We don't know, but they are definitely higher. And I mean, I told them, look, if I had a little bit more time here, I would take a random sample of your 200 PRs and then random sample of other companies and try to do what an LLM would. I would look at the titles and descriptions and try to figure out, are your PRs generally smaller, lower effort or size tasks than other companies?
I mean, that probably has to be the reason. I can't say. It's an interesting problem, though. Yeah.
My guess is you dig into that and you find there's some sort of scheduled pull request that's just, like, changing something that should be in the database, but it's not.
And so they're just, here's the thing. Here's another twist.
Okay.
Okay. So the twist is we measure this two ways. We actually measure diffs per engineer self-reported, meaning we just ask developers, on average, how many PRs do you merge that you were the author of? And we look at their actual GitHub data. And for this company, both numbers were within a fraction of a point. They were basically the same number. Which is remarkable in and of itself, right?
Yeah, I mean, that sounds fishy right there. Like, what are the odds that they're like that close?
Well, not exactly, but within like 0.23, yeah. I mean, why wouldn't it be, right?
Well, I thought, okay. It all depends on how exact it is. You know, if you have a vote... and you have 99 to one, you're like, okay, but if you have a vote and it's a hundred to zero, now you're like, there was some collusion here. Like something happened.
Well, maybe people looked right. Maybe people looked at their own GitHub activity and answer the question.
Which is fair because maybe they don't, they don't know. And so they're going to look at it. I'm just saying, if it was like exactly the same, then I'd be like, there's something wrong with our system here.
Yeah, it wasn't exact, so it was very close. And that does exclude bot-authored pull requests, for example. Both measures, the objective GitHub one and the self-reported one, explicitly exclude bot-authored PRs.
I think you got someone in that org who just doesn't go to work and they just have a bot that uses their own SSH key and just does, you know, every day merges and stuff. And then you ask that person how much they merged and they went and looked at their bot and they just guessed the right answer.
Yeah. Looking at outliers would be interesting, though the self-reported accounts for that, because there's an upper bound. Like, the top option is actually, I think, like 10 or more per week. You can't put in, like, I merge 100 per week.
You can't be a self-reported 10x dev.
Yeah, yeah, exactly. Right.
Oh, what an interesting problem, though.
Yeah, it was fascinating.
What's up, friends? I'm here in the breaks with Dennis Pilarinos, founder and CEO of Unblocked. Check him out at getunblocked.com. It's for all the hows, whys, and WTFs. So, Dennis, you know we speak to developers. Who is Unblocked best for? Who needs to use it?
Yeah, Unblocked helps large teams with old code bases understand why something has been done in the past. It helps them understand what happens if they make changes to it. Basically, all the questions that you would typically ask a co-worker, you no longer have to interrupt them. You don't have to wait for their response.
If you're geographically distributed, you don't have to wait for that response. You don't have to wait for...
You know, you don't have to dig through documentation. You don't have to try to find the answer in Confluence and Jira. What we basically do is give you the answer by you just asking a question. The way that we got to the problem was a consequence of our kind of lived experience. We were actually going to call the company Bother, which is like, you don't bother me, I don't bother you, right? Instead of, like,
being tapped on the shoulder or interruptive Slack messages. We could just use Bother and get the answers that we wanted.
We didn't go with that name because it's a little bit of a negative connotation, but helping developers get unblocked by answering questions or surfacing data and discussions within the context of their IDE relative to the code that they're looking at is something that thousands of developers love so far.
I think our listeners are really familiar with AI tooling, very familiar with code generation, LLMs. How is Unblocked different from what else is out there?
A lot of code generation tools help you write the code to solve a problem. We sit upstream of that. Our goal is to help provide the context that you need. If you think about where you spend your time when you're trying to solve a new problem, understanding how that system works, why it was built that way, what are the ramifications of changing something?
That's the problem that Unblocked tries to solve for you. We take the data and discussions from all of these systems, the source code and all those tools, to provide that answer for you, so that you can get that context and then go and write that code. We had this great example of a company who hires, you know, very competent developers.
It took them five days, that developer, five days to write 12 lines of code. And his feedback to us was, it's not that it takes you five days to write 12 lines of code. It took them five days to get the information that they needed to write those 12 lines of code. And that takes probably about 30 minutes to write those 12 lines of code and rip off that PR.
Okay, the next step to get unblocked for you and your team is to go to getunblocked.com. Yourself, your team can now find the answers they need to get their jobs done and not have to bother anyone else on the team, take a meeting, or waste any time whatsoever. Again, getunblocked.com. That's G-E-T-U-N-B-L-O-C-K-E-D.com. And get unblocked.
I was thinking, though, as you guys were talking about this, that measuring speed... it depends, right? Because not every team can be measured speed-wise on the exact same metrics, which I think is why you have this key metric and then secondary metrics. Yeah, yeah, yeah, to round it out. Because you have the secondary metrics to back up and correlate with what the key metric speaks to.
And the collection process is via systems, so collected data from a Git repo or other intelligence platforms, and then self-reported.
Yeah, that's a good suggestion. We didn't look at the secondary metrics. We got really trapped in, like, why is this diffs per engineer inflated?
Well, I almost wonder if the key metric is what swaps out. Because on one team, diffs per engineer may actually be the primary driver of the data you're trying to collect, while on a whole different team, lead time or deployment frequency is actually the better key metric, and the others are the supporting metrics. I don't know enough about your business to know how you'd do that.
Yeah. This makes me want to go look at the perceived rate of delivery. Perceived speed is one of those secondary metrics. The analogy here would be aerobic athletes, right? Heart rate versus perceived rate of exertion. Those are the two. And there are a lot of flaws in heart rate, because, I mean, just the altitude, right?
You could be training at different altitudes and the heart rate's different, even though you're doing the same load. Or you just wake up, you didn't get as much sleep, so your heart rate fluctuates more. So yeah, we really like that perceived rate of delivery. We literally just ask people to rate the speed at which their team delivers.
Like 1 through 10, just rate it?
It's a five-point scale, from extremely fast to extremely slow. It's not the actual speed.
It's the words, not like a one through five thing.
Yeah, very much inspired by perceived rate of exertion, which is on a 10, right? There's 10 options for perceived rate of exertion.
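Editor's note: for readers curious how a verbal scale like this gets aggregated, here is a minimal sketch. The labels, the ordinal mapping, and the function names are illustrative assumptions, not DX's actual survey wording or methodology.

```python
# Hypothetical verbal scale for "perceived rate of delivery",
# modeled loosely on Borg's perceived-exertion idea: respondents
# pick a word, which is mapped to an ordinal score for aggregation.
PERCEIVED_DELIVERY_SCALE = [
    "extremely slow", "slow", "somewhat slow", "neutral",
    "somewhat fast", "fast", "extremely fast",
]

def score(label: str) -> int:
    """Map a verbal response to a 1-based ordinal score."""
    return PERCEIVED_DELIVERY_SCALE.index(label.lower()) + 1

def team_median(responses: list[str]) -> float:
    """Median ordinal score across a team's responses."""
    scores = sorted(score(r) for r in responses)
    n = len(scores)
    mid = n // 2
    if n % 2:
        return float(scores[mid])
    return (scores[mid - 1] + scores[mid]) / 2
```

A median (rather than a mean) is a common choice for ordinal survey data, since the distance between adjacent labels isn't really numeric.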
Or perceived rate of pain. Have you ever seen that?
For medical? I think they use that in healthcare.
There's a great Brian Regan stand-up where he talks about them asking him that question when he goes into the ER, you know, and him thinking through, like, what number should he say in order to get help as fast as possible? He's like, never pick seven. You know, like, you're always an eight.
And here's the full-length, unedited clip of Brian Regan on this awesome bit. I was going to edit it, but I was thinking, like, gosh, I would just be editing this man's comedy, and I just can't do that. So if you don't want to hear the whole thing, skip to the next chapter. There you go.
Nurse finally comes in: how are you doing tonight? I'm in agony. Do you have a painkiller or something? This is killing me. So she goes, how would you describe your pain? It's killing me. She goes, how would you rate it on a scale of 1 to 10, with 10 being the worst? Well, you know, saying a low number isn't going to help you. Oh, I'm a 2. Maybe the high 1s.
You could get me a baby aspirin and cut it in half. Maybe a Flintstone vitamin, and I'll be out of your hair. You can go tend to all the 3s and 4s and such, if anyone's saying such ridiculous numbers. I couldn't bring myself to say 10, though, because I had heard the worst pain a human can endure is getting the femur bone cracked in half.
I don't know if that's true, but I thought, if it is, they have exclusive rights to 10. And now I'm thinking, what was I worried about? Was there like a femur ward at the hospital? They would have heard about me and hobble into my room.
Who the hell... ...had the audacity...
to say he was at a level ten! You know nothing about ten. Give me a sledgehammer. Let me show you what ten is all about, Mr. Tommier.
No! No!
How can I possibly say ten? I can't. So I thought I'll say nine... And then I thought, no, childbirth. I better not try to compete with that. And then I'm thinking, you'd almost be in hell giving childbirth when your femur bone's cracked. So I said, I guess I'm an eight. She goes, okay, I'll be back. I'm like, oh, I blew it, man. I ain't getting nothing with an eight. But she surprised me.
She comes in, she goes, the doctor told me to give you morphine immediately. And I'm like, morphine? That's what they gave the guy in Saving Private Ryan right before he died. Okay, I'm a four. I'm a zero. I'm a negative 11. Morphine. So they gave me morphine. Wow. All I know is, about 15 minutes later, just for the hell of it, I was like, I'm an eight again. Guess who's an eight?
And they finally check me out. I'm walking out in the hall going, say eight, say eight, say eight, say eight. Happy eight day.
And so it's very much Goodhart's Law in a much funnier context.
Yeah. Do you think, Abi, that your North Star with DX as an organization, what you're trying to do, is to define a path to happy developers? What do you think you're actually trying to accomplish?
I mean, I know what you're doing as a result of giving survey results and this data, this, you know, this formulaic and proprietary way to ask questions of an organization, how to disseminate this information and analyze it, that you're trying to help organizations be optimized. But like, do you think the true optimization factor is the path to happy developers?
Happy and productive, right? I mean, the Stack Overflow survey once again confirms this, right? Because people are unhappy because they're unproductive; that's another way to characterize the findings, right? People are frustrated because it's hard to get work done because of their tools, systems, whatever. Therefore, they're unhappy and they're unproductive, right?
There's a lot of time being wasted here. So no, I would say our North Star is helping every tech organization become the best version of themselves. Right. I mean, that has different meaning to different people. But, you know, I'm the CEO of a company and we have engineers. And so the way I think about it is:
with the people we have, are we doing the best we absolutely could be? Are we as good as we could be? We run the DXI and all of the Core 4, and I'm looking at that: how can we get better? To another company, that probably just translates to: we spend a crap ton of money on engineers, and we want to make sure we're maximizing that investment. Right.
Or it might mean, look, our competitors seem to be like creeping ahead of us. How do we go faster without just hiring more people? So lots of ways to tell the story in like a one liner. But yeah, it's about being good at building software. And as a result of that, people are also happier because all research repeatedly shows that happy developers are productive and productive developers are happy.
as the Stack Overflow Survey also shows.
I go back to the beginning of the conversation, which was: 80% are unhappy. And what we failed to ask was, why are the other 20% happy? Yeah. Because I feel like if your North Star is productivity, that comes as a result. Generally, in my opinion, and I don't know this qualitatively, you have productivity when you're happy, and you can't create or make happy developers
unless you understand what makes them happy. So why is the 20% that's not unhappy, happy? What is going on? Why are they happy?
Yeah. I mean, whenever we look at this type of data, we're slicing and dicing. I mean, you do see some couple things I could share. You know, we do see some differences across, you know, cross culturally. So, for example, we tend to see higher sentiment around this type of stuff, at least self-reported from populations in India, for example.
we tend to see, you know, higher satisfaction with more junior developers, right? People who just don't have a frame of reference yet on what is good, right? They're still, they're new to the profession. So there's certain things that if we just looked at this data, it might be that the 20% happy are coming from, you know, certain countries or,
certain levels of tenure and seniority; that could explain quite a bit of that 20%. And some of them are probably legitimately in good situations, with good developer experience and greenfield projects with no technical debt, where they're really happy.
I'm in full control. I have autonomy. No one yells at me. I'm getting paid. I'm not getting laid off. I'm not too old. I'm getting fired at 25 because I'm too old to code; that's the joke now. It's like, you've aged out. I'm 25. No, come on.
Well, there's another interesting data point on their survey, another interesting question, which is about coding outside of work. And if you want an indication of somebody doing something because it makes them happy, it's something that they would do outside of work.
And so the same exact work of developing, while there's 80% unhappy at work, 68% of respondents said that they write code outside of work as a hobby. That's like almost 70 out of 100 people. That's a large number. And 40%, which there's some overlap there, these aren't mutually exclusive, code outside of work for professional development or self-paced learning from online courses.
So these are people investing in themselves, caring about getting better at what they do. And that's kind of amazing. So we have this dichotomy of people who love to write software, generally speaking, and yet unhappy writing software inside of their organization. And obviously you can look at your DXi and follow the 14, but the closer you can make...
your engineering teams feel like they're doing their hobby. Think about how a hobby works. It is self-directed, first of all. So autonomy is huge. Most likely, unless they have a bunch of kids running around, there's deep work involved. You can lose yourself in it.
You can go into the, I was going to say the garage, but if we're coding, well, maybe the garage, wherever it is that you write software.
and just pound away at it for four hours without any interruptions and really lose yourself in it. A lot of these 14 metrics actually are manifest in hobbies. And so, if you can... obviously, a business is a business, and so you can't just be like, everybody do what they want. It worked for a little while for GitHub, until they got to about 100 engineers, I think. I was there for the ride. Not at GitHub, but here, podcasting and paying attention and using it as a product.
And going to conferences where Zach Holman was traveling around and talking about their engineering-led development and everybody pretty much just works on what they want to, that worked for GitHub for a long time. Long meaning in years, not in employee count, like up to 100. It's not a large engineering team. They're way larger now.
But at a certain point, that thing falls apart because there are... There's work that needs to be done that nobody would just naturally pick unless it was assigned to them and they're paid to do it. And so eventually that does. But if you can make your engineering team feel at least approximate like they would be doing this as a hobby, then I think you're going to have a lot of happy programmers.
That's how I've heard this described. How can you kind of get the same feeling of joy and flow that you do when you're working on a side project? How do you get that same experience while working in your job? How can you recreate that? And if we could do that, we would unlock a lot more productivity. We would get a lot more out of engineers working at our companies. Yes.
I think that's a good way to think about it.
And a lot more happiness, too. Everybody wins there. There's no losers. These drivers, these 14 drivers, have you ever done a survey where you've asked developers to rank order?
We do that.
Those drivers, they do.
That's already, yeah. For every organization we work with, that's one of the... So first, we capture the data on it, and then, based on how they kind of stack up on each of the 14, we give them: hey, based on how you answered these 14, now out of these 14, what are the top one to three that would most benefit your productivity?
Do you find that to be pretty subjective or are there certain ones that always float up to the top?
There are definitely certain ones that tend to float more toward the top.
Such as?
Documentation.
Really?
For sure. Yeah. What's interesting is, you know, in the Stack Overflow survey, technical debt is treated as one thing, but it's not, right? Technical debt is actually, like, all 14 of these things. Well, minus maybe two or three of them, they're actually all types of technical debt, right? So documentation is actually a form of technical debt. Complex code is a form of technical debt. Slow CI/CD is a form of technical debt.
So all the technical factors do tend to float toward the top. But some of the cultural factors too. You know, cross-team collaboration, like delays due to different teams having to coordinate with one another, is also, I'd say, a pretty common theme. Makes sense.
Yeah. Is that across engineering teams or product teams or is that like dev systems versus or ops versus devs?
Yeah, good question. My response just now was deriving from how engineers report the friction. So from the perspective of developers waiting on other teams, which could be cross-functional, or it could just be other engineering teams that have different services or whatnot. So that tends to be a big area of friction.
Queues. It's always about queues, right? Yeah. CI/CD is a queue. You know, being delayed or whatever is a queue.
Yeah.
I can't work on this until you work on that. I can't work on that until you work on this.
Right.
We can't deploy that because of this. It's all queues.
Wait on code review. That's right. Then there's deep work. That's just meetings, right? Just like, hey.
Not just meetings. Less meetings, please. No, not just meetings. What else? Yeah. It can be people actually asking you questions. It can be code reviews. It could be, hey, can you fix this quick thing? A customer asked for this thing, can you take care of it? It could be support. It could be incidents. So it's much more than meetings. Yeah.
That's something we see a lot of companies just say: oh yeah, we just need, like, a no-meetings Wednesday, and then this problem is solved. Right. Yeah. Yeah. As if that's really the case. I was actually looking at DX, what our top-ranked areas were.
What matters most to your teams?
Well, no. What do our developers say are the biggest areas that should actually be improved? And for us, it's actually that code maintainability. So the ability, how easy it is to actually understand and modify code.
It's also clear project direction: the projects they work on having clear goals and direction. And it's batch size, which we've since renamed to incremental delivery. But, you know, are you working on kind of small, continuous changes as opposed to large ships? Those were the top three for us. Those first two are driven by leadership, aren't they? Aimed at me.
I was thinking, you're showing us your cards here.
I was like, dang.
Yeah, yeah. Well, I'll say this. Our DXI score is three points above P90.
So you're sitting pretty.
Yeah, we're sitting really pretty. But yeah, even then, there's always room for improvement, right? And actually, I just, I'm looking at the data now and actually our clear direction.
He's smiling, y'all. He's not upset. He's smiling.
Yeah, those top three I just mentioned actually are not above P90. Those three specifically. Okay. Yeah.
So you got some, there's some room for improvement here.
Yeah. And same with code review turnaround actually is not above P90. Here's the better question, really.
And I know you're poking fun, Jared, but... I was. Yeah, and that's totally cool. And he likes it. You can't change what you don't measure, right? Yeah. So now that you have this index, and now that you have this awareness, even as a leader... you couldn't change it before if you didn't know it, but now you have awareness. Your team has awareness.
Your team that is answering these questions feels heard, right? If you're going and making change and you say, hey, because of these results, or because of these findings we're getting from our DXI score, we're improving these things in these ways. And the morale changes, the ability to speak to leadership and influence changes, you know, all those things really come into play.
Yeah. Now, this gives me a lot of reassurance as a leader, actually, because I wasn't sure before we ran this last, we call them snapshots, right? This last kind of benchmarking survey and data collection exercise. I really wasn't sure. I was very pleasantly surprised by how good things are right now. I mean, that's what I would expect out of myself as a leader, right? But I wasn't sure.
Am I just thinking we're good? Is it actually terrible working here? Or are we actually as efficient? Are we actually kind of at that high level of efficiency that I would expect out of the way we do things here? And we are pretty efficient. So it's good to see.
Well, we all want to think that we're doing well in that which we set out to do well. But... The worst place to be is to not be doing well and not know it, right? So at this point, of course, you are reassured because overall you're doing quite well. But even if you weren't, at least then you would know, okay, I thought I was doing well, but I obviously have some things to fix.
Now, if we picked one of those three... let's not do commit change size, or whatever that one is. Let's go to the other two and say, okay, these have room for improvement. So pick one of those two and, just spitballing, what could you, Abi Noda, as a leader, do today or tomorrow in order to meaningfully move that at your next snapshot? Do you have any ideas?
Yeah. I mean, the project direction, you know, that's on me. And yeah, someone was asking earlier about the Pareto principle, but some of these are trade-offs, right? Because yeah, we can improve that, but that would involve a little more process, which would cost time and money. And given that it's already actually very good, you know,
it's more something we want to keep in mind and be aware of, so we can just lean in a little bit more there. The code maintainability, that's already something we're really focused on. So that was validating: you know, that's something we need to continue focusing on.
And how do you focus on that? Like, what are your actual tactics?
Good question. Having clear patterns. I mean, just really pretty strict code review. And not just code review, but just making sure. I don't know if you've seen Addy Osmani's post. Code is like a love letter to the next developer.
It rings a bell.
Yeah. So that's in our onboarding doc for engineering. It's like, look, the only thing that matters here when you write code is like, how easy is it for the other people on the team to understand that code? And we really try to make decisions on how we build things around that principle. So it's good to see that then reflected in in the data, right?
People are saying that it is easy for them to understand and modify other people's code here. So that's one of the ways, but yeah, it's a lot of hands-on like driving that principle forward, right? We've, I've vetoed a lot of technical decisions, introducing new technology based on that principle that every new technology we add, every new pattern we add is something else.
Someone else has to learn and is going to slow them down.
I'm not sure where you scored on speed, but I assume it's pretty well considering these were the only three that weren't great. Have you ever considered compromising a little bit of speed? Like there's your trade-off is like, let's slow down a little bit. Because a lot of times just time to breathe and refactor and maintain actually improves code maintainability.
If you have maybe your snow leopard moment, for instance, I'm not saying do a feature freeze or anything, but like small bits.
So get this. I'm looking at the data. So when I look at our core four, we are above industry P50 for speed, effectiveness and impact, but not quality.
Yeah.
When I look at some of the secondary metrics, so perceived speed and perceived quality, we are also above P90 on all of them except quality. So, yes, I think we do sacrifice a little bit of quality for the sake of speed. And I mean, the data shows that very clearly.
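Editor's note: comparing a team's score against industry percentile benchmarks like P50 and P90, as described here, is simple to sketch. The threshold numbers and names below are made up for illustration; they are not DX's actual benchmark values.

```python
# Hypothetical industry benchmarks for one metric (values invented).
BENCHMARKS = {"p50": 55.0, "p90": 78.0}

def benchmark_band(team_score: float, benchmarks: dict[str, float]) -> str:
    """Place a team's score relative to industry percentile cutoffs."""
    if team_score >= benchmarks["p90"]:
        return "at or above P90"
    if team_score >= benchmarks["p50"]:
        return "between P50 and P90"
    return "below P50"
```

Run per metric, this reproduces the kind of statement made above: a team can land "at or above P90" for speed while sitting "below P50" for quality.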
Yeah.
Now, whether that's a problem or not... That's up to you. ...is an interesting question. Yeah, exactly. As a startup, you know, I'd much rather be P90-plus on speed right now. Because we have quality issues, but they don't affect customers that much. That's the secret about our quality problem.
Like, we do have quality problems, but I actually have the principle on the team, like, look, we're not building payroll software here. Like, when we have a glitch in a report, it's not, like...
It's not hugely disrupting or impacting our customers' businesses, and if we can quickly resolve it, then we're good. That's a principle we have here: we have really fast recovery. But yeah, we do have not-great quality, objectively speaking. Yeah, not abnormal for a company in your situation, I don't think. Do you have happy developers? It's a good question.
So we don't measure... for the reasons I kind of shared, some concerns around measuring happiness, we don't measure happiness. Do you have productive developers? Well, yeah, we do. I mean, our DXI score is off the charts, as I was saying. But we do, as I mentioned earlier, we do measure attrition risk. This is interesting. I haven't actually looked at this.
Uh-oh, this is a real-time demo.
Okay, I see. Yeah. Okay, I mean, this I can't share out loud, but okay. So for risk of attrition, we look at it very similar to, like, a blood test that you might get. So when you get a blood test, right, they tell you: here's the healthy range. Like, you know, if your blood pressure or cholesterol is within this value to this value, you're normal. So with attrition risk,
the normal range, and I may be off here before my coffee this morning, but I think it's 7% to 10%. That's the healthy range. So if 7% to 10% of your organization has signals of being at risk of attrition, that's normal. And I'll just say we're at the high end of the normal range. But I'm looking at the data.
Our reporting will break it down to, like, tell you where that kind of risk of attrition is. And so I'm looking at it now, and I'm aware of what is going on here.
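Editor's note: the blood-test analogy maps neatly to a reference-range check. A minimal sketch, assuming the 7% to 10% healthy range quoted above; the function name and range constant are hypothetical, not DX's actual implementation.

```python
# Healthy reference range for the share of engineers showing
# attrition-risk signals, per the 7%-10% figure quoted above.
HEALTHY_ATTRITION_RISK = (0.07, 0.10)

def attrition_risk_status(at_risk: int, headcount: int) -> str:
    """Classify a team's attrition-risk share against the reference
    range, the way a lab result is read against a healthy range."""
    share = at_risk / headcount
    low, high = HEALTHY_ATTRITION_RISK
    if share < low:
        return "below normal range"
    if share > high:
        return "above normal range"
    return "normal"
```

For a team of around 20, as mentioned below, two flagged engineers would sit right at the high end of the normal range.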
He knows the inside story.
Like he... well, how large is your team? I can't say exactly. It's around 20 people.
Well, I mean, just roughly. I'm not trying to drill down. My point is, with smaller teams, these generalized, aggregated numbers... you can see, like, oh, well, there's a skew there because of this one person's situation, or whatever it is.
Yeah. No, this is meaningful, though. No, this is actually... I was actually worried about some flight risk in this area, on this team. And this data is actually telling me.
So you're feeling better.
I'm feeling better about our product because today we've gotten to go through all the data. And it's been really, you know, to be honest with you, when I originally got the data, I was so busy. I didn't like fully go through it. But like you guys asking these probing questions.
I'm having fun.
Yeah, me too. Thanks for going through that with us. So yeah, 10% risk of attrition. We'll plot it with you every snapshot.
Yeah. We'll give you an overall score, the Changelog score, for your business.
Yeah. This is great. I do enjoy this. I think that what I kind of find fascinating is you mentioned the year 2020 and you were at GitHub. And I don't know if you know what year it is, but I do. Okay. It's 2024, just in case you weren't aware. This is four years later and you're this far with DX. Congratulations. Thank you. We've asked you hard questions.
You've shared some insider information that is in this report that only probably you and some others get to see these snapshots.
Yeah.
So kudos to you for being forthcoming on that. We've talked through the DXI. We've talked through the Core 4. We've asked you a lot of hard questions, and you're only a few years into this, and you're this steeped in, I would say, trajectory and maturity.
Like you've got a strong team operating at speed, not the highest quality, but you understand where the lack of quality is and why it's okay to have that. And you have a pretty good foundation and some assurance personally, it seems, on how to take action when you need to take action. That's a great place to be in.
Yeah, hopefully. I mean, the way I look at this as a startup is, can we try to not end up like GitHub? Right, that's the goal.
Oh, it's the same one.
Around their engineering velocity specifically, right? I mean, they were at a point where they weren't shipping, and people were leaving because it was so hard to ship. Honestly, quite a few companies we work with are kind of in that position, like these big tech companies that went through that hyper-growth and churn and
tons of reorgs, and are now confronting: okay, now we can't just keep hiring people, but we need to be shipping faster. Where do we start? What are the levers we can pull? It's kind of like health, right? If you can get ahead of this stuff and not, you know, have four decades of poor diet and exercise that hits you in your 50s.
Like, if we can kind of stay ahead of it, you know, I would hope that we're able to scale the business while staying P90 on velocity, right? That's the goal here right now.
Would you be like GitHub insofar as they took a $7.5 billion payout? I would take that. I would take that.
Yeah, I would take a few billion dollars.
I was going to say, in that way, I guess it'd be okay. Yeah. Well positioned, I would say. I don't know who would acquire you. Like, who cares about what you care about to the point where you're an acquisition target?
I mean, anyone in the developer business, I would say. So, yeah. GitHub? GitHub, Microsoft, Amazon, Google, right? I mean, anyone in the cloud game cares a lot about, like, benchmarking, assessing maturity. Right. Yeah. Even Salesforce, a little bit. Could you IPO? We could. I don't... What do you want to do? I don't know. I'll do whatever fate has in store for us.
We've pretty much bootstrapped the whole thing. We control our own destiny. We don't have to have an IPO. We don't have to sell for $8 billion. We just want to keep going. I'm just drawn to this problem. And I think I shared this the last time I was on the show. This all started seven years ago, when I first became the CTO at a startup.
And the CEO asked me, hey, Abi, all the other departments here are reporting metrics each month. Can you start reporting some metrics on the productivity of your engineering team? That was seven years ago. And so I joke with people: I'm still just trying to answer that question, because I couldn't answer it seven, eight years ago. Right. I asked other CTOs, what are you reporting?
And got 20 different answers. And it was a hard problem.
Yeah, for sure. There's a Silicon Valley episode about that, close to the end of the final season. Kind of funny. And you're... I think this was the other show we did together, that deep dive. I think that you're the only owner of the business. I think you're the solo founder.
I'm the majority.
Majority.
I have a co-founder, but yeah, I'm the majority owner.
Yeah, I thought it was a singular owner. My knowledge from the time since we last had this conversation is diffing on me.
No, it might have been my previous business, Pull Panda. I was the singular owner of that. But no, for this one, I've had a business partner since the beginning.
And have you taken any venture capital?
You said bootstrapped. Venture capital? Just... we took a little bit of angel investment at the very beginning from, you know, friends and family. Yeah. Yeah.
But we never spent that. You know, when I hear bootstrapped, that means that you are cashflow positive, or at least reinvesting. Like, if not break-even, you know, you're in the negative potentially because you're reinvesting, not because you're losing money.
Yeah. Now we're extremely cashflow positive. And I mean, to the point where my biggest concern each year is how to spend some of that, so we don't pay the corporate, you know, 20% tax rate. Because that just goes into the chest. So yeah, I try to reinvest it. It's better to reinvest it than lose 20%.
Do you do much advertising?
Yeah.
I mean, we do.
There's lots of opportunity, let's just say. Lots of opportunity. Let's take that offline, Adam. Yeah.
I'm only kidding with you. Our listeners don't want to hear about how we get new sponsors and new partners. But they kind of do. No, they want to hear that. Yeah. Yeah.
They'll eventually hear. We won't let them, though. No. Yeah, this is fun. I've been digging into this. I think that we want developers to be more happy, obviously. I think the question to me is: how? What makes developers more happy? I think productivity is obviously one key metric, and maybe some secondary metrics could be... what? I don't know. Just happiness in life, potentially. Other things that influence happiness: perks, pay. Well, you know, free beer and ping pong.
Definitely for a certain demographic. RTO? No. Right. I do think that's my new tagline, Jared. Not "rug pull, not cool"; it's "RTO? No." It's a good one. I was going to say... I just was going to say, I do think that, like, freedom to live your life in a way that suits you and still work is a huge driver for a lot of people. More than money, probably. Right up there with, like, productivity and enjoyment of my work is: do I also get to live my life in a way that suits me?
But that's maybe just me projecting, because it's always been my primary driver, even more so than money: freedom. And I've very much enjoyed it for many years, so I'm appreciative that I have it. So maybe I overemphasize that, but I'm sure there's a survey out there that answers that somewhere, the Stack Overflow one, or next year's DX. What are you guys going to call this thing, your public thing? Probably State of DX. DX or developer productivity, we kind of use those terms interchangeably, right?
Careful now, because there is a State Of organization run by a friend of ours who does State of JS, State of HTML, State of CSS. And they have this whole platform called State Of. Yeah.
We could just call it developer experience index.
You could also team up with them and have them help you run it or something, and I'm sure they'd be happy to, because they did create... although you have all your own software, so maybe it would be a square peg in a round hole. But I know they have opened it up, and Google runs the State of HTML survey with them.
So if you wanted to really use State Of, I think there's definitely opportunity there. Anyways, in the weeds again. Let's call it a show. What do you think, Adam? Yeah, I'm down. Abi, thanks, man.
It's been fun, Abi. Thanks for joining us, man. Been cool. Yeah, thanks for the invite. All right. Bye, friends. Bye, friends. So, RTO? No. Are you an RTO person? Are you being forced back to the office? Say no. I'm just kidding. Maybe you can't say no. It's a hard thing, because now we're in this world where we were once given this: hey, work anywhere. Hey, be remote. Hey, do whatever. Freedom.
And the kind of jobs that we do in tech generally are jobs we can do remotely. We can do pretty much from anywhere. We can be nomadic and tour the world and have fun and enjoy our life, or optimize for where we want to be in our life. And that makes us happy. But RTO is a thing. I say RTO? No. If I had to RTO and I couldn't say no... man, I'd be pretty sad. And if that's you, I feel for you.
There's a place you can hang, though. It's called Changelog community. We have a full-blown, fully open, no-imposters (everyone is welcome) Zulip instance. We're replacing Slack with Zulip. You can go to changelog.com slash community and sign up. Everyone is welcome. Come in there, hang out with us, and call it your home. Hang your hat, and you are welcome. And I want to see you there.
I also want to see you at All Things Open. So speaking of RTO: ATO. That's a good one. Allthingsopen.org. We love this conference. We go every single year. We will be at booth 66, right by the ballroom. You will see us there podcasting with everyone we possibly can. Come by and say hi. Hang out with us. High fives, handshakes, and, as you know, the occasional hug if necessary.
And we can give you potentially a free ticket. Come hang out in Zulip. Or we can give you at least a 20% discount. That's available to everyone. Use the code MEDIACHANGELOG20. Details are in the show notes. All one word: Media, then Changelog, and then add 20 at the end. No spaces. There you go. The link is in the show notes. Follow that. That's the best thing. And I want to see you there. Okay.
Plus plus subscribers. We've got a bonus for you on this episode. If you are not a plus plus subscriber, go to changelog.com slash plus plus. It's better. OMG, it is better. I can't tell you why. You just have to find out for yourself. Go to changelog.com slash plus plus. Drop the ads. Get closer to that cool changelog metal. Get bonus content like today. Free stickers mailed directly to you.
And the warm and fuzzies. Who doesn't want warm and fuzzies? I know I do. I like those things. Okay, thanks to our sponsors, Sentry, Fly, Coder, Unblocked. Wow. A lineup of awesome sponsors, Sentry.io, Fly.io, Coder.com, and GetUnblocked.com. They love us. Go give them some love. And that supports us. And I appreciate that. Okay, BMC, thanks for those beats. You are awesome. That is it.
Friends is over. We're back again on Monday.