
Love Rust? Us too. One of its great strengths is its ecosystem of crates. Rain, Eliza, and Steve from the Oxide team join Bryan and Adam to talk about the crates we love.

In addition to Bryan Cantrill and Adam Leventhal, we were joined by Rain Paharia, Eliza Weisman, and Steve Klabnik.

Some of the topics we hit on, in the order that we hit them:

- prettyplease
- winnow
- Blessed.rs crate list
- Adam's codegen template
- miette
- eliza_error
- serde_path_to_error
- ratatui
- Ratatui episode on January 27th!
- modular-bitfield
- lexopt
- loom
- OxF: Software Verificationpalooza
- CDSCHECKER: Checking Concurrent Data Structures Written with C/C++ Atomics
- The Postcard Wire Format
- postcard
- BBQueue Explained [video]
- petgraph
- U2
- MatrixGraph in petgraph::matrix_graph
- What does ## (double hash) do in a preprocessor directive? - Stack Overflow
- samitbasu/rhdl: A Hardware Description Language based on the Rust Programming Language
- httpmock
- camino
- OxF: The episode formerly known as ℔
- OxF: Dijkstra's Tweetstorm - YouTube
- evmap
- buf-list

If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!
Now you can say something funny. Yes, now we can. Okay, thank God. Yes. You know, actually, what a great episode last week, by the way. That was tons of fun, the predictions episode. It was great, and a lot of people seem to enjoy it. I'll tell you the one person who did not enjoy it lives with me, because it went two hours.
Yeah, the predictions episode is always going to be long, though.
I'll just tell her that. That's fine.
Yeah, exactly. You know, it's like, listen, like once a year, they're just going to go a little long. I'll try that.
Yeah, you better try it. But no, it was a great one. And great job getting Simon and Mike on. Those guys are delightful.
So I did see a headline on Thursday where I'm like, oh my God, this prediction is wrong already. Intel CEO search heats up as leadership shakeup drives turnaround hopes. And so I'm literally like, this story is going to be a rumor about a candidate that they're going to be announcing; like, this is going to be done by the weekend. But then I go to the story, thinking, oh, great.
Like, well, fine. I knew the risks. The story is that Citi analyst Christopher Danely says Intel might name a permanent CEO in the next few months. I mean, talk about a prediction. I'm like, wow. How do you get to be an analyst? That sounds hard. Yeah. Conversely, they might not. That's the alternative. Yeah. Sorry.
I mean, it's taking away from the superlative work of Citi analyst Christopher Danely, who I'm sure does very thoughtful research. But it is not a great insight that they might name a permanent CEO in the next couple of months. Yeah. But Adam, I decided that I'm going to take zero. I will take no credit if they name this in January. But if it's February, I will take one-twelfth credit.
And if it's March, two-twelfths (one-sixth) credit, and so on. That would be a nice little way of, you know... because I know we keep score so closely.
Yes, some of us do.
And then the other thing that I just want to mention to you because so I got a high school senior. He's got a terrific English teacher. A shout out to Ms. Foster. And she has given him a very good assignment. She's given the class a very good assignment, which is you need to seek out an adult and you need to ask that adult for three books that had an impact on them as an adult.
They can't be kids' books, but books written for adults. And then you, student, will read one of these books. And I, the teacher, I, Ms. Foster, will decide which one of these you read. And Alexander, my 17-year-old, picked me, which is great. I think my wife is a little hurt, but, you know, we've got other kids. You know, you can focus your efforts on them.
I bet he asked for both. I bet he told both of you that he was only asking you because he's a pro.
Um, I think so. He, he is, I agree with you. He is a pro to the point where I think he might've concocted that story. If he had such a story, he would concoct it only for my wife's benefit. I think he and I both know that I was, I actually was his first call. I really do appreciate it. Um, and so, uh, the, the, the three books that I named, cause I know I'm like, this is a great question.
What are the three books that have had an impact on me professionally, that have changed me in adulthood? You have read all three. And with the advice of don't overthink it, I was wondering how many of the three you could rattle off. I think you can go three for three.
How many of the three books that you and I have both read have most influenced you as an adult? That's right.
I feel that you, I mean, you can, I think you can go at least two for three. Again, don't overthink it.
I'm like my – I'm like reeling. I feel like this is a quiz I did not prepare for.
Listen, it's a pop quiz. Okay, everybody. Take out your paper.
No, I feel like the – Okay, can I have a hint? Can I phone a friend on this one? So are we talking about like in the – Technology domain. Is that what you're saying?
Okay. Actually our shared professional lives.
Okay. I'm going to say Steve Jobs and the NeXT Big Thing.
That is one. And I think you, you, I'm going to give you the hint that I think you've gotten the hardest one. The other two are easier than that.
Holy shit. I mean, I don't think this is right, but I feel like The Quantum Dot was what you'd recommend.
That's a deep pull, and no, that's not right. That is a very deep pull. Okay, so what was the title of my blog entry when you launched the company? Oxide.
Oh, right. The Soul of a New Machine.
Right, of course. The Soul of a New Machine. Yeah. Yeah.
And then what did we name our group after at Sun? There you go. Ben Rich's Skunk Works.
All right. You're right.
It was easy. You're right.
I got the hardest one first. The Quantum Dot is definitely an example of overthinking it. God, that's a very deep pull, by the way. That's an obscure book.
I don't know why, but that's, like, literally the only book I could think of in that moment. It was like: name another book. I couldn't name another book.
Your brain just went white and all I could think of was this relatively obscure book published in 1998 or whatever it was. That's all that was left.
Yeah.
Well, yeah, you know, I tried to prevent you from overthinking it, but I, you know, sometimes, sometimes the overthinking is going to happen on its own. Anyway, I thought that was a great assignment. So I thought it was, and I will be, he'll be reading Skunk Works by Ben Rich. Oh, nice. That's great. Yeah, it's going to be fun. So, Rain, Eliza, welcome.
Sorry, welcome to us talking about the three books that have influenced us professionally. Very excited about this. And a shout out to Chris Krycho, because he had a tweet of like, oh, you know, the crates you should know: it's great to see a resurrection of this, because I did this on the New Rustacean podcast. And that must have been lodged somewhere in my subconscious. But yeah.
Did you ever listen to New Rustacean, Adam?
I have, but not that episode. I didn't realize that we were stealing from another great artist.
That's right. Exactly. But very, very excited for this. And because I also feel like, I mean, and this is true of not just Rain and Eliza, but I feel like Rain and Eliza are two that definitely are constantly pulling out crates that I've never heard of that are extremely useful. And I'm wondering, like, why haven't I heard of them? Should we start with, how do we want to do this, Adam?
Because, online, you implied that we should have a dtolnay cap.
Well, I just mean, I look at my list, and David is here in the audience, I assume just to bask in my fanboying for him. But, you know, I look at my list, and there are so many crates that David Tolnay has made that I appreciate. And I'm going to kick it off just with one of those, because it's one I stumbled on. You know, I write a bunch of macros here and there.
I am frustrated with rustfmt. Brian, not in the way that you're frustrated with rustfmt, but I want to use it in kind of a library context, and that's challenging to do. Oh, what do you mean? Go on. That's interesting. Well, so I stumbled... I was like, surely David, writer of all macros, has done something for this. And yeah, so there's a crate called prettyplease.
And it is for doing Rust formatting, like formatting of code. And what I really love about it is it's kind of tersely opinionated. That is to say: look, I'm not trying to be rustfmt. I'm just trying to make things better, like pretty. I'm pretty-printing the thing, not formatting the thing. And if you don't like it, well, you know, get out of here.
And I think there have been some PRs and issues of the form: could you do it a little bit differently? And I really appreciate that David's kind of like, no, take it or leave it. If you don't like the way it's formatted, then maybe format it differently. That's fine. But it has been a godsend for a lot of the testing that I've done for these code generation crates.
Okay, that's interesting. Because you've got a lot of crates that generate a huge amount of code. Yeah.
So for example, progenitor: you have like three lines of macro that poop out like 60,000 lines of code.
And you want the code that it emits, you want to be readable.
Well, in particular, when I'm dumping that into a file for test automation or whatever, yeah, I want it to be at least vaguely readable. And I've used Rust format in the past, but there's a bunch of challenges associated with using Rust format in a programmatic context like that. And Pretty Please has been fantastic, exactly what I needed.
You know, maybe I should be using this. Okay. This is already paying dividends, actually. Because in my crates that generate code, I've just kind of manually made the code that I generate rustfmt-clean. Oh yeah. prettyplease.
It's going to help you. Yeah. prettyplease is, I think, exactly what you want in that. Like, you mean your code generation is, like, emitting newlines and stuff like that? Yes, that's right.
It's doing all of that.
Yeah.
And, you know, that's interesting, because it has made the... I mean, of course. I kind of believe that code that generates other code, there's like a balance that must be achieved in the universe. And if the code that you're going to emit is going to look clean, the code that emits that code has to be filthy. But maybe that's too...
Well, this is one of the... My code that emits rustfmt-clean code is filthy. And this, I think, would allow me to clean it up quite a bit.
Totally. This is what I've fallen in love with, with regard to Rust macros, which is you can use another dtolnay crate, quote, the quasi-quoting system. So you quote code that looks like Rust code.
And then prettyplease will just clean it up, so you don't have to live in this cave-person era of strings on strings and doing your own semi-formatting here and there. And the beautiful thing, too, is that your code generation in macro context can be exactly the same code as when you want to generate code and dump it into files. And I think it just allows for
really, like, debuggable, testable, understandable code, as opposed to, as you're saying, Brian, like, kind of this swirly code generation that is also interspersed with formatting.
Brian, you might be pleased to know that I got out in front of this a few months ago and ripped out a bunch of string-based code generation from idol, which may have been Cliff's doing rather than yours. But now that uses quote and prettyplease.
Oh, that's interesting. Yeah, that is Cliff's doing, not mine, but I can go look at that as a model. What I'm thinking of in particular is the pmbus crate, which is just... There's some grime that could be cleaned up in there, for sure. Yeah, this looks great. God, you know, there's always...
You know, as I think we've said before, they always say that there's a chat that includes everyone except for you. I always feel like there's always a dtolnay crate you haven't heard of. And I... That's what I'm saying.
I mean, that was the tweet, right? Like, I feel like David has done so much stuff in this kind of domain, too, that surely David has found this problem. In fact, I'm going to cast this open to David and to Rain, who's bumped into this. You know, one of the things that I struggle with Rain...
is a problem I saw you working on in Dropshot, which is, one of the things that syn, another dtolnay crate, does very nicely is turning errors in the proc macro context into generated compile errors to help debugging and stuff. And one of the things I saw you do in Dropshot was, like, collect a pile of errors to then emit all at once.
And I'm sort of surprised that there wasn't something you reached for to say, as you encounter problems and errors along the way, accumulate this list so that you're not just failing on the first problem, but actually emitting a bunch of errors for the user to then handle all at once.
Yeah. Is there anything like that? So I spent, it's funny because after we talked about it last time, I ended up spending a little while looking at it. And there are a bunch of great libraries and actually some of them that I wanted to talk about here. There wasn't quite anything that I noticed kind of hit that exact spot.
And partly because, I think, one of the things that becomes challenging is that if you want to do good error handling, and this kind of goes into the cosmic balance thing that you were talking about, Brian, if you want to do good error handling, often you can no longer use good type system things.
So as an example, one of the ways you might model something in Rust is with a Result of the Ok value or the error value, right? But if you want to collect errors, then often something you will do is pass in a &mut error collector or something like that. And the value that you return is an Option.
And now there's kind of this implicit invariant here: if you get back a None, that means at least one error went in, and so on. A crate that I did actually want to call out, though, and something that doesn't quite solve this specific problem, even though I wish it would, is miette. So miette is a really, really cool crate.
It is kind of... So if you're familiar with Rust and, of course, dtolnay's crate-verse, you'll have come across thiserror and anyhow, right? miette is kind of a combination of thiserror and anyhow; it kind of meets both of those things. But another crate that it actually meets is codespan.
If you're familiar, one of the things that's really interesting about Rust is that rustc has great error messages. And I think that's one of the reasons that all of us feel pretty good about Rust, right? Is that fair to say?
And with the error messages, I think one of the things that's really nice is there's this lovely syntax highlighting where it'll show you the exact things that were wrong and it will give you a suggestion of what to do instead and all of those things. Amazing. Yeah, it's so good. And so there's actually a few crates that do that.
So rustc's own error rendering is extracted out into a crate whose name I don't remember off the top of my head. Then there's another crate called codespan, but miette also captures all of that. And one of the things miette can do is store a list of errors.
So what you can do, and I have used this pattern in some places, is you store a list of errors and then you have miette report them: you provide the source code that those errors are associated with, and the byte offsets, and then miette will render that in a nice way.
I think that kind of style of high-quality error reporting is actually something that is really, really cool about Rust. And I don't know if there's any other ecosystem that has paid this much attention to how your error messages look, right? Rather than just reporting, like, your line number or whatever.
Yeah, this looks really cool. This looks so good. Yeah, I've not seen this. Have you seen this before, Adam? I've never seen this before.
No, never. Actually, you know what? Rain may have pointed me to this a while ago. But Rain, just to be clear, this is not in macro context. When you say it kind of draws inspiration from rustc, it's for if you're processing some other kind of document and you want to draw on that kind of concept. Okay, cool.
Yes. Yes, it is. Right. So as an example, actually, you can integrate it with serde_json. So you can actually get great highlighting for which bit of a serde_json document failed. And I think that's really cool.
Oh, that is really good. Yeah, that is awesome. Because, you know, I really like RON a lot. RON, not the humans named Ron, although I guess I like all of you too. But RON, the Rusty Object Notation, I like a lot. But man, the error messages are really not very good, which is frustrating. And boy, this would be an opportunity to really improve them.
Brian, it is with a heavy heart that I must inform you that if you have, any time in the last six months or so, made a typo in an idol RON file and gotten an error that has some RON source in it, that's thanks to miette.
Okay, so you've integrated miette into the RON parsing in idol? That is correct. Okay, I've got to do the same thing. Yeah, I need to do the same thing. This has already paid enormous dividends, and we're only like 20 minutes in or whatever. Do you want to sound a little less surprised? 20 minutes, of which like seven minutes was us screwing around.
I mean, this is amazing.
I think in general, though, there is some value in having, you know, the kind of thing I wrote, which is like, you have like a nested tree structure, right? And you are parsing through the nested tree structure.
And you want to actually not just fail on the first error globally, but you have kind of a notion of like, you want to go through as much as possible and you want to collect as many errors as possible. And I have had to do that a few times, and I've pretty much handwritten something each time.
So that kind of suggests that maybe, I don't know if the audience has a suggestion for something that kind of does that. Otherwise, that might actually be worth doing and kind of putting out as a separate thing. Yeah, it is kind of a much bigger scope thing, but yeah.
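The hand-rolled shape Rain is describing, where a &mut collector accumulates errors and a None return signals that at least one was recorded, might look something like this (std only; all names invented for illustration):

```rust
// A hand-rolled error accumulator: walk the whole input, collecting every
// problem instead of bailing on the first one.
#[derive(Debug)]
struct ParseError {
    path: String,
    message: String,
}

// Returning None upholds the implicit invariant Rain mentions: None means
// at least one error was pushed into the collector.
fn check_field(name: &str, value: i64, errors: &mut Vec<ParseError>) -> Option<i64> {
    if value < 0 {
        errors.push(ParseError {
            path: name.to_string(),
            message: format!("expected a non-negative value, got {value}"),
        });
        return None;
    }
    Some(value)
}

fn main() {
    let mut errors = Vec::new();
    let inputs = [("timeout", 30), ("retries", -1), ("port", -8080)];
    let parsed: Vec<Option<i64>> = inputs
        .iter()
        .map(|(name, v)| check_field(name, *v, &mut errors))
        .collect();

    // Both bad fields are reported at once, rather than just the first.
    assert_eq!(errors.len(), 2);
    assert_eq!(parsed[0], Some(30));
    for e in &errors {
        eprintln!("{}: {}", e.path, e.message);
    }
}
```

The tradeoff Rain notes is visible here: the Option/collector pairing carries an invariant the type system no longer enforces for you.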
This is where dtolnay tells us to reach under our chairs, and we've all got that crate sitting right there. Exactly, I know.
I have a pair of really, really cool serde-related crates. So there is a crate called serde-ignored, and there is another crate called serde_path_to_error. I think both of these are really good. And again, kind of coming at it from the you-want-to-produce-good-error-messages kind of thing, right?
So one of the things that I've noticed when defining, say, a configuration file is that people will often misspell things, right? Serde has this really cool deny_unknown_fields feature. Yeah, I love this. Right? And deny_unknown_fields is great, but sometimes you don't want an error, you instead want a warning. And serde-ignored actually lets you get that warning.
So it's kind of somewhere in the middle between silently accepting the typo, maybe, and failing. And I really like serde-ignored for that, because often you want to support some kind of forward compatibility, and if you have that forward compatibility, then you don't want to just choke if you see a new option or whatever, right?
And so serde-ignored does a really, really good job of reporting that. And then, kind of paired with that but solving a slightly different problem, is serde_path_to_error. What serde_path_to_error does is it will try to report the nearest part of what failed. So it maintains some state, like which keys you have traversed into and so on.
And it does a pretty good job of that. You know, there's one specific asterisk which we don't really want to get into right now because it detracts. But overall, this pair of crates has, I feel, really elevated the error handling experience around configuration files for me.
Yeah, this is great. I've not used either of these, actually. But Adam, I'm also standing by my belief that we should not have any bag limit on dtolnay crates.
No, I think spot on. Maybe like 50 or 100 or something. But yeah, serde_path_to_error we've definitely used. I mean, I'm sure you've seen those JSON errors that are like, yeah, no, I failed to parse. Byte 6015, is that helpful?
No?
Okay.
Yes. Yeah, I mean, it's not helpful at all. It's very, very unhelpful. The opposite of helpful.
That's right. In fact, so unhelpful because you're like, maybe I could figure out what the 6015th byte is. Like, that wouldn't be the hardest thing in the world, would it?
I have done that. That has been my immediate go-to, sadly, as opposed to being like, you deserve a better error message. We deserve nice things. It's a long path to really adjust to that.
Those are great. What else is on your list, Rain?
Oh, boy. That was actually... I mean, I had a couple of other dtolnay nuggets, but they'll come up. Another crate that I wanted to call out, as kind of a cool proc macro crate, is derive_where. So one of the things that people run into sometimes is that you want to do, say, a derived Debug, right?
Or a derived Clone or something. And if you have a struct which has a generic T, then the implementation that the built-in derive generates is bounded: you only get Clone if T also implements Clone. That is mostly what you want, but sometimes not. So, you know, one option, if you don't want that Clone bound, right?
Like, sometimes you're not actually storing a T in there; you are storing, say, some kind of derivative type of T. Then you write the impl by hand. But the one I really like, that kind of automates this, is derive_where. What derive_where lets you do is say: derive Clone where some bound. So you can say derive Clone for my type where T satisfies some trait that you've specified, but you can also say you have no restrictions on T at all, so you can just do derive_where(Clone). I remember showing something else at a demo day, and then everyone was like, what's this derive_where thing?
And then that ended up turning into a derive_where demo. It was very funny. But it's a crate that I really like, and I end up reaching for it a few times a year. How does this work? As far as I can tell, it's just a proc macro that iterates over the fields and puts the bounds you asked for on the impl. So it's just a proc macro in that sense.
That's really cool.
Yeah. Definitely one of my favorite little proc macros that help out.
Yeah, and how were you using it in the thing that you were demoing, right? What was the specific use case where you were using it?
I think what I ended up having was, I was storing a PhantomData<T> or something like that. So I had this thing which only stored the T as a marker; it didn't actually store any concrete values of T. And implementing something like Clone for that should not require that T implement Clone, right? Just logically, that is not a requirement.
So I ended up reaching for derive_where for that. That's a pretty common thing I end up having to do.
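The situation Rain describes can be seen in a std-only sketch: the built-in derive would bound T: Clone, while a hand-written impl (roughly what derive_where generates when you ask for no bound) does not. The Marker and NotClone names are invented for illustration:

```rust
use std::marker::PhantomData;

// A marker type that mentions T but never stores a value of it.
struct Marker<T> {
    _p: PhantomData<T>,
}

// #[derive(Clone)] would emit `impl<T: Clone> Clone for Marker<T>`,
// needlessly requiring T: Clone. A manual impl drops that bound, which is
// roughly what derive_where automates for you.
impl<T> Clone for Marker<T> {
    fn clone(&self) -> Self {
        Marker { _p: PhantomData }
    }
}

// A type that deliberately does not implement Clone.
struct NotClone;

fn main() {
    let m: Marker<NotClone> = Marker { _p: PhantomData };
    // This compiles even though NotClone does not implement Clone.
    let _m2 = m.clone();
}
```

With the derive_where crate as a dependency, the manual impl above would collapse to a single attribute on the struct.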
That's right. Neat.
It seems very useful. How do you discover that crate? This is a good question. How did it occur to you to go look for a crate that does that?
Um, I think, uh, I think what I ended up doing, God, this is telepathic crate powers.
I mean, you don't have to tell us if you don't want to, but there are these crates where I'm like: how does Rain know about this?
Okay. So Rain, while you think of the answer, I would just say: have you ever done this, Brian? I go to ChatGPT and I think, yeah, surely there is a crate for this. And I describe the crate that I want. And...
A thousand percent, it's like: all you have to do is cargo add smorgasbord, then smorgasbord::whatever, and it'll do exactly what you want. I'm like, thank you so much, ChatGPT. And the crate doesn't exist, or it barely exists, or it exists and does something totally unrelated. But I've never had it say: sorry, nope, nothing.
I do think it's funny that ChatGPT is most likely to hallucinate, in my experience, when you yourself think that this thing should exist. You know what I mean? ChatGPT is like, there should totally be a crate like that. In fact, there is. It's smorgasbord.
And on the one hand, it's somewhat vindicating, because, you know, ChatGPT had a very vivid hallucination around organizational gists. So, organizational gists... do you use gists in GitHub at all?
Sure, yeah.
And organizational gists exist, but I can't see how to create or edit them. And ChatGPT also believes that you should be able to do the things that I believe you should be able to do to create or edit them. And it's like, no, you can't do any of those things, actually. You know what it reminds me of? The organizational gists are like Inside UFO 54-40.
Have we talked about Inside UFO 54-40 here? No. Maybe not this year, but, like, several times. Right, exactly. Maybe not in the last six weeks. I don't know. Inside UFO 54-40 was a Choose Your Own Adventure, which, okay, fine. You had read Choose Your Own Adventures. Yes, yes. But I think, Rain, you and Eliza, you'd not.
Steve, did you read Choose Your Own Adventures? Oh, excuse me.
I have read Choose Your Own Adventures.
This is great to hear. This is great to hear. Eliza, so this is good. Choose Your Own Adventures have left a generational chasm. This is a relief. I'm glad.
I used to like hold my finger at a place when I wasn't sure what decision I wanted to make and then like choose one and be like, nah, nevermind. And like back up one. And then like, I started getting a stack of like, I can't hold three of my fingers in three different places in this book at the same time.
This is amazing that Eliza, you also read Choose Your Own Adventure. I am just old enough to remember save scumming in a book. Yeah. Okay.
Well, okay, great. This is great. This is something that really can bring all generations together. One of the early Choose Your Own Adventures was Inside UFO 54-40, and in Inside UFO 54-40, every ending you died, except for one ending where you got to utopia. But there was no path that actually sent you to utopia.
And I just remember my little nine-year-old brain just absolutely smoldering on this. And of course, in hindsight, as an adult going back and reading Inside UFO 54-40, in addition to the warning of, like, don't read this book straight through from cover to cover, blah, blah, blah...
There is a special warning that this Choose Your Own Adventure may require you to think in a very unorthodox way, which was basically their way of saying that you needed to read it straight through. Anyway, it was way too meta. Yeah.
Also, as an adult, I'm sure you appreciate the, like, maybe this will keep this kid occupied for an hour and a half. I definitely appreciate that, and it may have overshot the mark, because I'm pretty sure my brain just absolutely seized up on it. I mean, clearly, I'm a 51-year-old man talking about this Choose Your Own Adventure, so I think it's...
I think you can safely say that it overshot the mark. How did you not reference this to Alexander when he was asking for the books that influenced you most significantly? Like, pfft, Skunk Works?
Like, give me a break. Oh my God. Why was it not Inside UFO 54-40? I mean, they asked for three, not four. Obviously they should have asked for four. It clearly has influenced me a great deal. Where were we? I'm so sorry. I feel like I've done the thing where now we've ended up on the ending that you can't have any way of getting to. I'm so sorry.
I think, you know, I was thinking about how I'd do that, and I feel like there's no magic, right? In this case, I ended up Googling for, I think, Rust derive custom trait bounds or something, and derive_where is on the first page of results. But, you know...
I'll spend a little time looking on Google and crates.io, and I'll also maybe ask some people, like, hey, do you know something for this? And then sometimes I can't find it. It's just really hard. I think one of the more structured ways that has helped is, like...
Like if you have a particular code base you like, which you feel like might use something like that, then kind of dig around in that code base source code. I think that is kind of, you know, that, that feels like a good way. And I've discovered a whole bunch of crates that way.
Yeah, and it's funny, Rain, that you mention that, because I kind of feel like this has been this glorious positive feedback loop of the Rust ecosystem getting larger: there are more programs you can go to to say, how did this thing do this? I mean, certainly that's how I discovered tui-rs, now Ratatui: from using something that used it. I'm like, this is amazing.
What did this thing actually use to do this? And I guess, Adam, is now a good time for the teaser for our Ratatui episode coming up in two weeks, right? Yeah.
Orhun, who is a maintainer of Ratatui, whom I met over at Rust Lab in Florence, Italy, which I've mentioned a couple times on the pod. He's going to be joining us in two weeks at 9 a.m. Pacific, which is some other time in Turkey. So, yeah, really excited to be talking about Ratatui with him.
It'll be great. And I definitely found that crate from doing exactly that, right: going and looking at other programs and seeing what they were using.
On the subject of finding crates, I was thinking a bit this afternoon about, you know, I have personally had a lot of experience with categories of crate where there are a bunch of implementations of basically the same thing, with basically the same API and wildly different performance characteristics. My classic example is async channels, like MPSCs.
There's the Tokio one, there's the futures one, there are a bunch of crates that are just channels. And having done a bunch of benchmarking and digging into their implementations a few years back when I was writing my own channel: they all have very different performance characteristics, and there isn't a single good one.
You know, you don't just pick the good one, you pick the right one for the job. And it depends pretty substantially on the usage pattern. And I think kind of the big lesson that I learned from that is that as a crate author,
I found it very useful to sort of up front, like at the very top of the readme, front and center when you've gone and done something that there are, you know, four different versions of on crates.io, like a channel. Why should you use this? Why should you not use it? How does it compare with other crates that maybe implement something similar? And I found it really useful to do that.
And I hope that when people go searching for these libraries and encounter something that I've written, they read this section and know when, you know, maybe this is actually not the appropriate implementation for the specific problem they're trying to solve or when it is. And I want to kind of encourage others to think about doing this because I think it's a really valuable exercise.
I can maybe scare up some of the examples of times I've written that.
Yeah, for sure. And is now, because I know at some point we're going to talk about bit field crates. Is now a good time to talk about that? Because that is an example where there are a bunch of different crates. I know you did one as well.
Well, so my Bitfield crate, actually, I know I promised you some Bitfield crate opinions. My Bitfield crate has such a section in its readme, and it basically leads with there's no reason you should choose this. I wrote it for fun because I wanted to write it for fun. And the one interesting thing about it relative to other Bitfield crates is that mine is a declarative macro rather than...
I'm a big fan of the bit field crate called modular-bitfield, which allows you to sort of have a struct and annotate various fields in the struct with attributes, and you generate this very nice, packed-within-one-word bit field thing. And I think that it presents kind of the nicest interface for doing this.
And mine is just kind of worse in every possible way, except that it doesn't use a procedural macro, because I thought that it would be fun to see if I could get it to work without using a procedural macro.
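To make the declarative-macro idea concrete, here's a toy sketch of the technique. This is not any of the crates mentioned, and all the names (`bitfield!`, `Packet`, the field names) are hypothetical: because `macro_rules!` cannot synthesize new identifiers (that's exactly what a proc macro or the paste crate would buy you), the caller supplies both getter and setter names, and the macro packs fields LSB-first.

```rust
// Toy declarative-macro bitfield (a sketch, not any crate discussed above).
// macro_rules! cannot invent identifiers, so the caller names both the
// getter and the setter for each field; fields are packed LSB-first.
macro_rules! bitfield {
    ($name:ident { $($get:ident / $set:ident : $width:expr),+ $(,)? }) => {
        #[derive(Clone, Copy, Debug)]
        struct $name(u32);
        impl $name {
            bitfield!(@fields 0; $($get / $set : $width),+);
        }
    };
    (@fields $off:expr; $get:ident / $set:ident : $width:expr $(, $($rest:tt)+)?) => {
        fn $get(self) -> u32 {
            (self.0 >> $off) & ((1u32 << $width) - 1)
        }
        fn $set(self, v: u32) -> Self {
            let mask = ((1u32 << $width) - 1) << $off;
            Self((self.0 & !mask) | ((v << $off) & mask))
        }
        // Recurse with the offset advanced past this field.
        $( bitfield!(@fields $off + $width; $($rest)+); )?
    };
}

bitfield!(Packet {
    version / set_version : 3, // bits 0..3
    kind    / set_kind    : 5, // bits 3..8
    len     / set_len     : 8, // bits 8..16
});

fn main() {
    let p = Packet(0).set_version(5).set_kind(17).set_len(200);
    assert_eq!((p.version(), p.kind(), p.len()), (5, 17, 200));
    println!("{:#06x}", p.0); // all three fields packed into one u32
}
```

The ergonomic gap versus the attribute-on-struct style is visible right away: the caller has to spell out `set_version` by hand, which is the kind of thing a procedural macro generates for you.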
Okay, so, you know, I've not used modular-bitfield, and bit fields are a really good example of something where it can be hard to find what you want. I mean, there are a bunch of different crates. They're all named bitfield, or have bitfield or bitfields in the name somewhere, and it can be hard to sort out what's what.
So I had never discovered modular-bitfield, but this is actually much closer to the interface that I had kind of wanted to find in other bit field crates. This feels like a much more natural interface.
Yeah, I think it's really nice. It's by far the nicest interface to this sort of thing that I've seen. And I just pasted in chat the comparison-with-other-crates section for my own bit field crate that says, basically, don't use this, use modular-bitfield. But I've used my own thing in all of my projects because I wanted to make my own thing. I thought it would be fun.
Yeah, this is actually a great guideline in terms of comparisons with other crates. I also think it's something that I like about the Rust ecosystem, too, is that it doesn't feel like it's a popularity contest. People are just trying to find the right tool for the job.
And it's like, hey, if my crate is not the right fit for you, let me actually steer you to the other crates that may be a better fit for you.
I mean, to the contrary of being a popularity contest, I mean, how many crates have you gone to? that have a really thoughtful, like see these other crates section. And I really appreciate it because not that they're apologizing for existing, but explaining like, no, this is not NIH.
Like I built this because I looked at these other things and I evaluated them and they didn't meet my needs, but they probably do meet your needs. So like, don't use my thing just because I built it. Like go use the thing that meets your needs. It's incredibly thoughtful.
It's also very nice as a maintainer to not have to respond to the issues of people who are just constantly showing up to say, can you make this other crate? Well, no, I can't, because I didn't set out to do that. And, you know, not to sound too much like a member of a cult, but Bryan, this is why it's important to have upfront values for one's technical projects. Yes.
In terms of being very explicit, I do think it's very helpful to be explicit about the things I care about and the things I don't care about. Because I think it's okay that the things that matter to a crate's author... I mean, it's just what you're saying, Eliza, about being upfront: maybe performance is a primary consideration, maybe it's a secondary consideration, but being explicit about that is really, really helpful.
Yeah. Yeah. And the big lesson from the channel thing that I learned from doing a bunch of benchmarking of channels is that there isn't really just a box that says performance on it that you can click, right? Because, for instance, in the readme to my channel crate, which I posted in Discord chat, I discuss, you know, there's this question of like,
An MPSC channel, an async channel, has to store the messages in the queue someplace.
And there are channel implementations that will allocate and deallocate chunks of buffer as you are sending and receiving messages so that the memory usage of the channel is proportional only to the number of messages currently in the queue versus channel crates that will allocate the whole buffer once when you make the channel. Yeah.
There isn't necessarily one of those is not good and the other is bad. It's a question of is this channel like something that is structurally integral to the program and it lives for the entire time that the program exists and all of the messages go in that channel. And is the bound on that channel like extremely large and the message very big?
And that means that if you keep it fully allocated all the time, that's like a very large amount of memory. Or is it something where you are making these channels and you're having a bound of like eight or 10 messages and now there's just extra overhead of doing this? Like I'm going to allocate another chunk of buffer as I need it.
And it really depends pretty substantially on the usage patterns, and there is no sort of the right move. So it's useful to document those sort of performance trade-offs and the ways in which this might be suitable for one type of use but not for another.
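As a tiny concrete illustration of one end of that spectrum, here's the standard library's bounded channel, where the capacity is fixed when the channel is created and a full queue pushes back on the sender rather than growing. (This is just a sketch of the "allocate the whole bound up front" behavior Eliza describes as one design point; the async crates being discussed make different choices.)

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // The bound is chosen once, at creation time.
    let (tx, rx) = sync_channel::<u64>(2);
    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();
    // The queue is full: try_send hands the message back rather than
    // allocating more buffer space.
    assert!(matches!(tx.try_send(3), Err(TrySendError::Full(3))));
    // Receiving frees a slot, so sending succeeds again.
    assert_eq!(rx.recv().unwrap(), 1);
    tx.try_send(3).unwrap();
    println!("backpressure, not growth");
}
```

Whether that fixed bound is a feature (predictable memory, backpressure) or a liability (wasted space for huge bounds) is exactly the usage-pattern question that can't be reduced to a single "performance" checkbox.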
Yeah, totally. And I think that the Rust ecosystem tends to be pretty good about talking explicitly about what those trade-offs are. But it's certainly excellent advice to crate authors to be very explicit about those trade-offs.
I think a place where that kind of organically arose is with CLI parsing crates, because there's a whole bunch of CLI parsing crates. So something like clap: if you've written a Rust CLI tool, you've almost certainly come across clap. But there's a whole bunch of other points in this design space that people have hit with various trade-offs.
And I was really appreciative that people put together benchmarks considering things like how long a build takes and how many bytes get added to the final binary, versus error handling and so on. And I think, you know, different projects can reasonably make different trade-offs here. And when I saw this table, and when I saw the amount of work put into it, it was just very, very impressive to me.
And so you've dropped the argparse-rosetta-rs repo into the chat, which is benchmarking all of the argument parsing libraries, which I think is really interesting. Although actually, now looking at the times, it's kind of interesting to look at the build time versus the overhead. Because I have used clap for most things.
And especially since clap merged with structopt, I've found it to be really pretty terrific. But I also think it's great that there are other approaches out there. It's not the only one.
Yeah, and in particular, like, I mean, clap has a couple different ways to use it. You can use it with or without the proc macro, but then there's a bunch of others. So actually another one that I really like that is much lower level than clap is lexopt. So the goal of lexopt is like all it gives you is an iterator over the options, right?
So you're getting an iterator, and in the iterator, you get a little bit of structure. So you get whether it's a single dash or a double dash. So you get very, very basic things like that. And some, if you really want that low level of control, then lexopt is great.
But the trade-off there is that you need to write your help yourself, and you need to remember that each time you add a thing, you also need to add the help for that. And maybe the error messages aren't as good and so on. So these are the kinds of things that... that you have to consider. So, you know, I recommend clap as the thing to go to, right, if you want to start.
But these are all things that are, you know, worth considering for things like embedded binaries and so on.
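To make the lexopt-style design point concrete, here's a hand-rolled sketch of the idea: don't parse a CLI, just lex the arguments into a stream of options and values and leave everything else to the caller. Note this is an illustration of the approach, not lexopt's actual API, and all the names here are made up.

```rust
// A hand-rolled sketch of the "just lex the options" idea.
#[derive(Debug, PartialEq)]
enum Arg {
    Short(char),  // -v
    Long(String), // --verbose
    Value(String),
}

fn lex_args(args: impl IntoIterator<Item = String>) -> Vec<Arg> {
    let mut out = Vec::new();
    let mut iter = args.into_iter();
    while let Some(a) = iter.next() {
        if a == "--" {
            // Everything after a bare `--` is a plain value.
            out.extend(iter.by_ref().map(Arg::Value));
        } else if let Some(long) = a.strip_prefix("--") {
            out.push(Arg::Long(long.to_string()));
        } else if a.len() > 1 && a.starts_with('-') {
            // A bundle of short flags: -vf lexes as -v -f.
            out.extend(a.chars().skip(1).map(Arg::Short));
        } else {
            out.push(Arg::Value(a));
        }
    }
    out
}

fn main() {
    let args = ["-vf", "--out", "file.txt", "--", "-raw"].map(String::from);
    let lexed = lex_args(args);
    assert_eq!(lexed[0], Arg::Short('v'));
    assert_eq!(lexed[1], Arg::Short('f'));
    assert_eq!(lexed[2], Arg::Long("out".into()));
    assert_eq!(lexed[3], Arg::Value("file.txt".into()));
    assert_eq!(lexed[4], Arg::Value("-raw".into()));
    println!("{lexed:?}");
}
```

Everything a full framework gives you, from help text to "is `--out` even a valid flag?", is deliberately left to the caller, which is exactly the trade-off being described.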
I want to point out that lexopt has in its readme a very nice why-and-why-not section, in which it says basically everything that Rain has told us.
Yeah, interesting.
Just looking at lexopt's readme now. And it wants to be small, correct, pedantic, imperative, minimalist, unhelpful. I feel like this is a description of many of us. That's right. Look in the mirror. You may be looking at lexopt. You may be looking at lexopt. Exactly. And it also means it's making fewer decisions for you.
I got to say, the thing that is annoying about clap, and maybe this is something that has been fixed, I should go see, and I should probably get an actual issue open on this. But there is, and Steve, I can't remember if we actually opened an issue on this or not with clap, that there is basically no way to have a -h option.
Like, -h is going to mean help no matter what. If you're like, no, no, I don't want that to be help, it's like, too bad. Clap says, nope, that's help.
Yeah. I remember that being something, I'm not sure if we did open an issue or not, but yeah.
But honestly, clap is so useful and helpful in so many other regards, I'm like, okay, you know what, I actually really appreciate it. But it's a good example where, I think it's fair to say, it is not small and not minimalist, and it is different from lexopt.
So, Eliza, what you're saying about being very upfront about who you are as a crate, and what the rubric is going to be for the way a crate decides to integrate additional work or not, I think is extremely helpful.
Yeah, lexopt's nice. I like that.
Yeah, it's really cool. I've actually used it in combination with clap. So there were places where I had clap do the first level, and then I wanted a second level of parsing for something more detailed, and then I used lexopt for that. So ultimately, it's a thing that takes in strings and produces output.
So it's a primitive that is generally useful, I think.
Yeah, and I also love the why-not section under lexopt, too.
So that's pretty great.
What else is on everyone's list?
Well, I do really feel like I will be sad if I don't get the opportunity to plug what I feel is the crate that has had the biggest and most profound impact on my life personally. And that crate is Loom, which is pretty different from everything we've discussed so far. This is a crate that Carl Lerche wrote while he was working on the Tokio scheduler.
And what Loom is, is a model checker for concurrent Rust programs. And the way that it works is it gives you sort of a set of all of the primitives in std::thread, std::sync::atomic, and std::sync::Mutex, and so on, and a sort of simulated UnsafeCell. And the way these things work is that they have basically the same API as the standard library versions,
But rather than actually being like you're spawning a real thread or you're just creating a real single word that you're doing atomic compare and swap operations on, instead what they do is they deterministically simulate all of the potential interleavings of concurrent operations that are permitted by Rust's memory model or the C++ memory model which Rust inherits.
And this is sort of based on a paper, I believe, that describes a sort of model checker like this for C++. And so what you can do is you can have, like, using some conditional compilation, you can say, normally I want to actually spawn threads or use real atomics or what have you, but when I'm running my tests...
I want to be able to write these deterministic models that will exhaustively explore all of the permitted interleavings: everything the Rust compiler is allowed to emit, or the operating system scheduler is allowed to produce.
Then, if you use the Loom UnsafeCell, it will check: okay, if I have an immutable access from one of the simulated threads, and then this thread yields, and now I'm executing some other thread, and now there's a mutable access to that same UnsafeCell, it will then generate a reasonably nice panic.
And when you do this, you sort of have to sit and run this test for tens of thousands of iterations, because this is sort of a combinatorial explosion of potential paths that the model permits through this test that you've written. But the reward for that is that if you've written complex concurrent code, like a lock-free data structure, you get to learn all of the ways in which you've done it wrong.
Which is, I would say, deeply and profoundly humbling. You learn the ways that, perhaps, you were executing this code in real life on an x86 machine, and you've never seen any of these possible data races because you're running on an x86 machine. But someday your code might be cross-compiled for ARM, and it just so happens that you've used sufficiently relaxed atomic orderings that,
when compiling for ARM, you will actually see loads and stores reordered in ways that will result in this data race that you've never seen in real life.
And so you've used Loom, it sounds like, to actually debug wait- and lock-free data structures?
I have used it not to debug wait- and lock-free data structures so much as to learn that my wait- and lock-free data structures are wrong.
Okay, so I was going to ask. So Loom has found an interleaving, which now has incorrect behavior. Yeah. What happens now? In terms of getting from that interleaving to understanding, were you able to relatively easily get from Loom's discovery of an interleaving to be able to wrap your brain around what had actually happened?
So Loom will log...
It will log, you know, I'm doing this operation at this time, and it will try to tell you... Its logging is somewhat useful. It will try to use #[track_caller] a lot, so that it captures things like: where was this mutex constructed in the program, at what line; where was this atomic constructed; at what line was it accessed; at what line was this UnsafeCell accessed;
and which thread did that, or which simulated thread in this test that you've written. And it will try to sort of give you some helpful information about that.
But honestly, it also is just sort of very useful as a trial and error mechanism that sometimes you just sort of end up going, oh, I think I understand what the problem is, and I'm going to kind of permute the program a little bit, and I'm going to run it through Loom again, and maybe now this model will actually, you know, after running through tens of thousands of iterations, I've actually not found anything that causes a data race.
Or a deadlock. It also does deadlock detection, and it has a leak detection facility similarly: if you also use Loom's wrappers around Box or other ways of allocating and deallocating, it will tell you it leaked a Box or an Arc. And again, the thing about this is that it sounds at the surface level similar to tools like
TSan or ASan or Valgrind, but it's actually quite different, because it's a model checker rather than a sanitizer that you run your program under and then get back: oh, while it was executing, it did a bad thing. But, you know, it's possible that you'll just never see the bad thing happen during that execution. Whereas with this sort of deterministic model checking...
Of course, there might be bugs in the model checker, or you might have set bounds on how much it can explore the state space. And you might have missed a bug. But if you set aside those things, you know that you've actually deterministically explored everything that the compiler is allowed to generate. So anything that is outside of that is not permitted by the model.
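A drastically simplified sketch of the core idea, in plain std Rust: enumerate every interleaving of two simulated threads, each doing a non-atomic load-then-store increment, and observe that some schedules lose an update. Loom does vastly more than this (real memory-ordering simulation, UnsafeCell access tracking, state-space reduction), so this is only meant to show what "deterministically explore all interleavings" means; every name here is made up.

```rust
use std::collections::BTreeSet;

// Each simulated thread runs: reg = mem; mem = reg + 1.
#[derive(Clone, Copy)]
enum Op { Load, Store }

#[derive(Clone, Copy, Default)]
struct Thread { pc: usize, reg: u64 }

// Recursively explore every interleaving, recording the final value
// of `mem` once both threads have finished.
fn explore(mem: u64, threads: [Thread; 2], ops: &[Op], out: &mut BTreeSet<u64>) {
    let mut ran = false;
    for i in 0..2 {
        let t = threads[i];
        if t.pc >= ops.len() { continue; }
        ran = true;
        let (mut mem2, mut t2) = (mem, t);
        match ops[t.pc] {
            Op::Load => t2.reg = mem,        // read shared memory into a register
            Op::Store => mem2 = t.reg + 1,   // write back the (possibly stale) value
        }
        t2.pc += 1;
        let mut next = threads;
        next[i] = t2;
        explore(mem2, next, ops, out);
    }
    if !ran {
        out.insert(mem); // both threads done: record this schedule's outcome
    }
}

fn main() {
    let mut finals = BTreeSet::new();
    explore(0, [Thread::default(); 2], &[Op::Load, Op::Store], &mut finals);
    // Two concurrent increments can end at 2 (correct) or 1 (a lost update).
    assert_eq!(finals.into_iter().collect::<Vec<_>>(), vec![1, 2]);
    println!("a lost update is reachable");
}
```

The point is that no amount of running the racy program on one machine is guaranteed to surface the `1` outcome, but exhaustive enumeration of the schedules finds it deterministically, which is the property that distinguishes a model checker from a sanitizer.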
This is a life-changing crate for you, in part because it highlighted the challenges in your own wait- and lock-free data structures.
Describe a little bit how this comes back to you. Yeah, this stuff is incredibly hard to reason about. And every time you think that you're actually good at it, that's very dangerous, right? Because it is incredibly difficult for us to deterministically explore all of the interleavings permitted by the model in our heads.
And so it's just sort of like it has really kneecapped me every time I've used it. And it just sort of taught me about my own insignificance and how small my mind is relative to what is permitted by this extremely complex memory model.
And really, the way that it has impacted me is that I will never write lock-free, wait-free, or even concurrent code that uses locks that is of sufficient complexity without using Loom. And I try very hard to avoid anyone else's code that has not either been tested using Loom or tested using another similar model checking tool. Because I don't think that human beings can do this unassisted.
I think that it's sort of like C versus Rust, right? There are plenty of C programs that have run in production, and thus far we have not seen the lurking memory errors in them. That's great. But this is a way of exhaustively proving the correctness of our programs.
And it has taught me that... this is not me saying, y'all don't know what you're doing, because I don't trust myself to do this unassisted either. I think that it is just fundamental: you will regret not using these tools, and you will regret using any library that implements a complex concurrent data structure that is not tested using a tool like this.
I'm not saying it has to be Loom in particular, but something of this nature is just kind of a necessary tool to write this kind of software. I certainly was there for the gradual push to cover all of Tokio's internals with Loom. Carl developed this while he was rewriting the scheduler.
And over time, we sort of pushed to get it into more and more of the various synchronization primitives and other Tokio internals. And we found just a kind of devastating number of bugs by requiring that any new or changed code have tests. And many of those bugs had not been discovered directly.
But they probably fixed a lot of the weird, inexplicable behaviors where there were GitHub issues that nobody really knew the answer to.
Yeah, that is wild. And I mean, it's kind of chilling when you start seeing all of these, also when you have these issues where you realize, like, God, the symptoms of this problem would be really far removed from the root cause. It'd be really difficult to debug, presumably.
If seen in the wild, you would just die on some state inconsistency, presumably, and then try to reason about how the hell you could possibly end up in that state. Yeah, that seems great. And I love the fact... So was Loom done by Carl as part of the work on Tokio?
I mean, was Loom born out of the need to be able to better understand or validate the Tokio changes?
Yeah. It came out of, I believe... Some of you might be old enough to remember Tokio 0.1, where Tokio was split into tokio-core and tokio-io and various other crates. And in the sort of...
process of writing Tokio 0.2, Carl rewrote the entire multi-threaded runtime more or less on his own. And in the process of doing that, he realized that this was just extremely difficult, and somewhere along the line found the paper — I believe the paper is called CDSChecker — that describes a very similar thing for C++.
And Carl basically said to himself, I can't keep continuing the scheduler rewrite without this. I have to stop what I'm doing and go and implement it. And I'm sure Carl can recount this story much better than I can. He sort of stopped everything he was doing and went and materialized the thing.
Since then, it has been kind of improved substantially, in particular with regards to actually being able to tell you what went wrong in your program instead of just sort of, well, you did a data race. Good luck. And also its performance has been kind of optimized substantially because we might not generally think like, oh, it's a testing tool. Performance is like very, very important.
But it's a testing tool that will execute a test potentially hundreds of thousands of times. Sometimes you're really sitting there for like an hour waiting for the thing to run one test. So a great deal of perf work was sort of done more recently to try and make it not just mind-numbingly slow. But yeah, that's really, that's its heritage.
That's great. The performance is terrific. It's also just extremely satisfying when you've got the computer just working so hard. I love it when the computers are working. We get to come back in an hour and see what the computer has found in terms of these subtle issues. It's very satisfying. That's great, Eliza.
At a meta level, one last note on just how long it takes for this thing to run. At a meta level, I would add that the length of time that the loom model of a concurrent data structure takes to run is sort of a good warning metric too. If it takes an hour to test this thing, maybe this thing is actually too complicated and you could make it simpler.
Yeah, interesting, and driving towards something that is simpler. And then someone in the chat asks about postcard. Ah yes, postcard. In Humility... we have, uh, Humility and Hubris use postcard as a serialization format. Yeah. And one that is pretty tight and pretty straightforward.
So yeah, I'm a postcard fan for sure. Yeah.
Postcard, if memory serves, is very similar to Hubris's sort of indigenous serialization format, but with a couple of key differences. I think that Cliff skipped the varint.
Everything in Postcard, every integer in Postcard, I believe, is varint.
I don't know about the specific differences, but Cliff definitely looked at Postcard whenever we were doing the Hubris serialization stuff. It definitely took a lot of inspiration from it, but did ultimately decide to design his own thing.
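For a flavor of what "every integer is a varint" means, here's a sketch of LEB128-style variable-length encoding of the kind the postcard wire format document describes: seven payload bits per byte, high bit set on every byte except the last, so small integers cost one byte on the wire. This is an illustration of the encoding scheme, not postcard's actual code.

```rust
// LEB128-style varint: 7 payload bits per byte, high bit = "more bytes
// follow". Small values stay small; a u64 takes at most 10 bytes.
fn encode_varint(mut x: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (x & 0x7f) as u8;
        x >>= 7;
        if x == 0 {
            out.push(byte); // last byte: high bit clear
            return;
        }
        out.push(byte | 0x80); // more bytes follow
    }
}

// Returns the decoded value and how many bytes it consumed.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut x = 0u64;
    for (i, &b) in buf.iter().enumerate() {
        if i >= 10 {
            return None; // over-long encoding for a u64
        }
        x |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return Some((x, i + 1));
        }
    }
    None // ran out of bytes mid-varint
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(3, &mut buf);   // fits in one byte
    encode_varint(300, &mut buf); // needs two bytes
    assert_eq!(buf, vec![0x03, 0xAC, 0x02]);

    let (v, n) = decode_varint(&buf[1..]).unwrap();
    assert_eq!((v, n), (300, 2));
    println!("{buf:02x?}");
}
```

The appeal for a wire format like this is exactly the "pretty tight" property mentioned above: lengths, discriminants, and small counters, which dominate most messages, usually fit in a single byte.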
I had a list.
I had postcard on my list — that's James Munns's thing — and I had another one of his projects on my list of crates, which is BBQueue — a queue, like the data structure. And BBQueue is a multi-consumer, multi-producer byte queue that allocates exclusively in contiguous regions in memory.
And the idea is that this is a queue that you can grab sort of a chunk of bytes of a given size off the front of, and then you can do a DMA directly into that lease and release it to the queue, and then you can wake up the other end. And he's got a bunch of
The interface for it is kind of hairy, but it allows you to say, I want this static region that I've declared as the backing storage for the queue, or I want to be able to dynamically allocate a byte buffer that is the backing storage for the queue so that you can use it really in both embedded projects where you don't have any capacity to do dynamic allocation. You can make them on the stack.
You can make them on the heap. and they're really nice, and they're DMA safe, so you can just have your NIC or whatever write directly into the region and queue that will then be consumed by somebody else. It's quite nice.
It's also based on a paper, I believe, called bip-buffers, and I think that it's kind of an underappreciated crate that I have really enjoyed using.
Yeah, and as someone in the chat points out, it has a 90-minute guided tour. Go watch the guided tour of BBQueue. But yeah, that looks good.
I have a great idea. So, is petgraph too mainstream to talk about here?
Or... no, I don't think so. I'm not sure. Yeah, what is it? Sorry, I'm embarrassed if this is too mainstream. Like... no, I don't think so. Ever heard of YouTube, Bryan? Like that.
So PetGraph is a crate I've had the good fortune to use a few times in my career. And it is a crate that lets you represent graphs, right? So it is a crate that essentially has a bunch of graph data structures, and you can represent your things in there.
One of the things, you know, and I was thinking about why I like PetGraph so much and like, you know, there's some other places where I will like handwrite my own representations rather than using some framework someone has provided. And like, you know, for this, in this case, PetGraph is like, it is a whole framework, right? You kind of model your data, you put it into their data structures.
And for me, I think the distinguishing thing is that petgraph gives you a lot of value for that. There is a wealth of graph algorithms included in petgraph. So, you know, there are two different SCC algorithms; there's a bunch of the max-flow min-cut stuff. There's a lot of really careful handling.
And so, you know, at this point, okay: if I have a graph, the simplest way you could do a graph is, you can imagine a node with an Arc of child nodes, or something like that. And then you end up having to write your own algorithms on top of that. But petgraph — you have to do a little bit of work to fit into it, but it just gives you all of these algorithms. And there have been times where I have thought that all I want is a DFS, and, you know, you could probably write a DFS by yourself.
But then I realized, oh, you know, in some cases the graphs can have cycles. So I need an algorithm to kind of convert the graph into like what is called a condensation graph, which is the same graph but without cycles. And then, you know, so it kind of gives you all of these things. And I don't know, there's something very satisfying about PetGraph in a way that I really like it.
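To make the condensation point concrete, here's a hand-rolled Kosaraju strongly-connected-components pass in plain std Rust: collapse each SCC into one node and you're left with the cycle-free condensation graph you can safely DFS. petgraph ships this and much more out of the box; this sketch (all names invented) just shows the idea you'd otherwise have to write yourself.

```rust
// Kosaraju SCC: two DFS passes, the second over the reversed graph
// in reverse finish order. Each tree of the second pass is one SCC.
fn sccs(n: usize, edges: &[(usize, usize)]) -> Vec<Vec<usize>> {
    let mut adj = vec![Vec::new(); n];
    let mut radj = vec![Vec::new(); n];
    for &(u, v) in edges {
        adj[u].push(v);
        radj[v].push(u);
    }
    // Pass 1: DFS the graph, recording nodes in order of completion.
    fn dfs(u: usize, adj: &[Vec<usize>], seen: &mut [bool], order: &mut Vec<usize>) {
        seen[u] = true;
        for &v in &adj[u] {
            if !seen[v] {
                dfs(v, adj, seen, order);
            }
        }
        order.push(u);
    }
    let (mut order, mut seen) = (Vec::new(), vec![false; n]);
    for u in 0..n {
        if !seen[u] {
            dfs(u, &adj, &mut seen, &mut order);
        }
    }
    // Pass 2: DFS the *reversed* graph in reverse finish order.
    let mut comps = Vec::new();
    let mut seen = vec![false; n];
    for &u in order.iter().rev() {
        if seen[u] {
            continue;
        }
        let (mut stack, mut comp) = (vec![u], Vec::new());
        seen[u] = true;
        while let Some(x) = stack.pop() {
            comp.push(x);
            for &y in &radj[x] {
                if !seen[y] {
                    seen[y] = true;
                    stack.push(y);
                }
            }
        }
        comps.push(comp);
    }
    comps
}

fn main() {
    // 0 -> 1 -> 2 -> 0 is a cycle; 2 -> 3 leaves it.
    let mut comps = sccs(4, &[(0, 1), (1, 2), (2, 0), (2, 3)]);
    for c in &mut comps {
        c.sort();
    }
    assert_eq!(comps, vec![vec![0, 1, 2], vec![3]]);
    println!("{comps:?}");
}
```

Getting this right, plus all the neighboring algorithms, is precisely the "wealth of carefully handled algorithms" that makes fitting your data into petgraph's structures worth the effort.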
This seems cool.
Yeah, this is great. I've known about this for a while, unlike some folks. But I've sort of resisted using it just because it felt heavyweight, if you know what I mean — I think exactly as you're saying, Rain, about this kind of dichotomy between big framework versus lean and mean. But this is a great endorsement and a good reminder to at least go kick the tires
next time I come across a problem that feels like it might be up petgraph's alley.
Yeah, you've heard of it. I mean, this explains Rain's concern that it was too mainstream for this. So where did you, Mr. I've-Already-Heard-Of-This-Thing, hear about petgraph? Yeah.
So where, where did I hear about it?
I don't know.
I don't know. I mean, I guess just other podcasts. I don't know, but I, I, you know, I think I was looking for it. I didn't realize that we did. Do we have an open relationship like that?
I didn't realize that.
You're the one who appears on all these other podcasts.
Okay. Okay. Oh, here we are.
Now we're here. So in the typify crate that I wrote to do JSON Schema-to-Rust code generation, there's a bunch of graph-like problems in it. In particular, you've got to find cycles, and if you find a cycle, you want to break it with a Box as you generate this kind of containment cycle. As you look at derives, you kind of maybe want to look at strongly connected components.
So I started looking for Rust crates that implemented these SCC algorithms, and that's where I came across PetGraph.
PetGraph, interesting.
petgraph is pretty well known on the forums, because when people say, oh, Rust can only handle tree-shaped data structures, a very common response is, well, did you try petgraph?
Not that old means it's right, but it's been around for a long time and therefore is well known, largely because it was kind of the first: you want a graph-like data structure? Okay, here's a good, easy-to-use thing.
Yeah, interesting. And it's actually doing this by actually properly managing adjacency lists as opposed to actually having references to nodes, right? I mean, presumably.
So PetGraph is really interesting because it actually presents four or five different representations of a graph. So there's the adjacency list graph, which is the default graph, right? If you want a graph, then you probably want to reach for an adjacency list graph, right? There is also an adjacency matrix graph,
which, you know, in some cases you want to use the matrix representation of things, and you can do fancy things with eigenvectors and so on. There's also this other one that lets you kind of... So the first representations only let you use integer keys, if I remember correctly.
But then there's also one that lets you use your own keys — anything that implements Copy and, I think, Hash and Eq. But then it also provides essentially an abstract interface: it provides a bunch of traits, and if you have your own graph, you can bring your own graph and implement those traits for it.
And if you do that, then you get access to the full set of algorithms, to the extent that your graph supports them. That's awesome.
That's really great. And I also love the fact that you can easily output it as graphviz, so you can actually go... That is your love language. You know, gnuplot is my love language, first of all, not graphviz. My bad, sorry. Please, gnuplot can hear us. Yeah, exactly. But yeah, this is neat. This is neat.
And just, of course, laying eyes on the references to Dijkstra flashes me back to Dijkstra's tweetstorm, actually, Adam, and your masterful work there.
Oh, yeah, that was from way back. Yeah, I'll put that in the notes.
The way back, exactly, from the Twitter Spaces era. Yeah. Okay, so petgraph: apparently too mainstream for Adam, and for Steve, perhaps, but not for me. You can just take me as a complete neophyte with respect to some of these crates, but that looks great.
I'm going to name another mainstream crate that everyone knows about, and it is a dtolnay crate, but I am going to give a particular shout-out to it, which is syn, S-Y-N, the syntax parsing crate. Now, the shout-out I'm going to give it is that syn has thought of more things than you think of. So, for example, I've had a path, and I'm like, oh, okay.
I want: if the path is exactly of length one, and if it matches this string, then do a thing. There's a built-in for that. If you ever find yourself dealing with a function or a structure that has a bunch of generic parameters, there's a function that splits it up in exactly the way that you want for writing a derive macro. So this is only to say, spending a...
quiet time in the tub or whatever, reading the docs for syn, is time well spent. There's lots of stuff built in there that anticipates the things you think you might need to build yourself.
Yeah. There was another dtolnay crate that syn reminded me of: paste, Adam. And I was looking for the equivalent of the... do we say the octothorpe character?
Do we say hash, or pound? I think I say pound. Yeah.
I think I did say pound, and I'm worried I now say hash. In any case, in cpp, the C preprocessor,
there is the pound-pound operator, which does not Google well. Very hard — like, I didn't even know what that thing was called. All I knew is that I had used it in cpp and I wanted an equivalent in Rust, and I could not... I mean, I didn't even know what to search for. I just felt so helpless. And I think you bailed me out of that one at some point.
I was describing my agony — I couldn't even search for the thing that I was trying to replace, which I now know is called the token concatenation operator, pound-pound. So I couldn't even Google the thing I wanted to replace, let alone a way to replace it in Rust. And I think you put me on to paste.
But I noticed that paste is now read-only. I'm not sure if that's because it's done or if I should be using something else. I do love the fact that — oh my God — pound-pound has been added as a GitHub topic on the paste crate. Has it been done for my benefit? Amazing. That was quick. Yeah, exactly. But another great dtolnay crate.
Another couple of crates that I wanted to get in there, Adam: Goblin for ELF and Gimli for DWARF. ELF is much simpler than DWARF. And I think libelf is actually a pretty good library in C. You know what's not a pretty good library? libdwarf in C. libdwarf is not a pretty good library. That's exactly right.
libelf is a good library, and libdwarf is really not a good library at all. That's exactly right. But Goblin makes it super easy to rip apart ELF binaries. And Gimli makes it as easy to go through DWARF as DWARF allows. Gimli's basically like, look, DWARF's problems are not my problems — it does as good a job as it can do. I really like
Gimli quite a bit. And those are relatively easier to find, because if you're looking for a DWARF crate or an ELF crate, you kind of know what you're searching for — you'll find them, and they're both very good.
I have a shout-out for a crate that's sitting in a sea of undifferentiated crates, more so. That is to say, if you're searching for "I want the DWARF parser," you're going to find it. I really like httpmock. There are a bunch of HTTP mocking crates out there — and in fact, I think in our omicron repo we use all of them by accident — but httpmock is the one that I enjoy the most.
And in particular, it gives you a little closure with a structure called When, and then another structure called Then. You do manipulation on the When to define the predicates for when you want the response returned, and the Then is the actions taken as a result of the HTTP query. I really like it. I really like the way
It, you know, I think that there are some crates that kind of like vomit their guts out. And this is one where it really presents a nice user experience, a nice user interface. And there's a bunch of complexity underpinning it that allows for that nice interface. And I really enjoy that one. It's my favorite HTTP mocking crate, if that doesn't make me the world's biggest dork.
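The when/then shape being described can be sketched in plain Rust. This is a hypothetical miniature, not httpmock's real API — the real crate's builders have many more matchers and it runs an actual HTTP server — but it shows why the closure-with-two-builders design reads so nicely: the matching rules and the canned response sit side by side.

```rust
// Hypothetical predicate builder ("when") and response builder ("then").
struct When { method: Option<String>, path: Option<String> }
struct Then { status: u16, body: String }

impl When {
    fn method(&mut self, m: &str) -> &mut Self { self.method = Some(m.into()); self }
    fn path(&mut self, p: &str) -> &mut Self { self.path = Some(p.into()); self }
}
impl Then {
    fn status(&mut self, s: u16) -> &mut Self { self.status = s; self }
    fn body(&mut self, b: &str) -> &mut Self { self.body = b.into(); self }
}

struct MockServer { when: When, then: Then }

impl MockServer {
    // The closure receives both builders, httpmock-style.
    fn mock(f: impl FnOnce(&mut When, &mut Then)) -> Self {
        let mut when = When { method: None, path: None };
        let mut then = Then { status: 200, body: String::new() };
        f(&mut when, &mut then);
        MockServer { when, then }
    }
    // Return the canned response only if the request matches the predicates.
    fn handle(&self, method: &str, path: &str) -> Option<(u16, &str)> {
        let m_ok = self.when.method.as_deref().map_or(true, |m| m == method);
        let p_ok = self.when.path.as_deref().map_or(true, |p| p == path);
        (m_ok && p_ok).then(|| (self.then.status, self.then.body.as_str()))
    }
}

fn main() {
    let server = MockServer::mock(|when, then| {
        when.method("GET").path("/instances");
        then.status(200).body("[]");
    });
    assert_eq!(server.handle("GET", "/instances"), Some((200, "[]")));
    assert_eq!(server.handle("POST", "/instances"), None);
    println!("ok");
}
```

The constraint — predicates and response defined together, up front — is what enables the type-safety discussed below.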
And that is httpmock, right? httpmock, exactly. Yeah, okay, yeah, yeah.
Yeah, interesting. So what have you used this for? So we use it in the... So I wrote the Progenitor CLI generator, and I wanted to have end-to-end validation of running CLI commands. The CLI is built in clap as well. So I wanted to do that, but not against a real, you know, Oxide server. So we actually auto-generate additional traits for httpmock.
So then you can make type-checked mocks against our API. So the CLI is banging against this mock server to validate all the different CLI subcommands that we emit, or that we create.
Yeah. Wow. That's really cool.
Yeah, it's just a nice interface. I just really appreciate the way that it operates. There's some limitations. I think there are other mocking crates where you have maybe more flexibility or you just get a generic function where you can respond with whatever you want. I think the constraints associated with this allow you to build something that's a little more type-safe.
Yeah, that's neat.
I'm just looking in the chat. There is RHDL, a Rust-based HDL for FPGA development. That's very spicy. I have to go look at that one.
Yeah, that sounds really cool.
That one is totally new. At least to me. But I think we've already established that lots of things that are apparently very mainstream are new to me. Yeah. Rain, Eliza, would you give some other shout-outs, other crates that... Yeah, go ahead.
So I got one. This is a crate that... so, I maintain this crate, but I didn't write it. I just happen to be the one that manages its crates.io releases. This is a crate called Camino. It was originally written by boats, who is a Rust project alumna — they were the one who drove async/await in Rust, for example. They've done a ton of work.
And so one of the things that they noticed was: if you've done anything around paths in Rust — file paths and file names and stuff — it's always been bothersome, because in very typical fashion, Rust will be pedantically correct about everything as far as possible. So that mindset gets reflected in the way the path libraries are designed.
So they will handle weird things like unpaired surrogates on Windows, or non-UTF-8 paths on Unixes. And so, if you want to write a tool that is as correct as possible and handles as many paths as possible, then you probably need to take care of all that. But in reality, most of the time you don't, right? Most of the time — like, imagine you're at Oxide, right?
And you're writing a simple server or whatever. The files you're going to get and the paths you're going to use are going to be, like, well-structured, right, in some way? Right, right. So boats wrote a library called Camino, which essentially replaces OsString as the base with String as the base for things.
So these are paths that behave like strings. They don't handle every possible path, but they handle basically every realistic path that most programs are ever going to see. So this is a crate that I use for pretty much everything that I end up writing, and I think most people should use it.
Now, this sounds like there's a trade-off in some cases, right? Like you're losing some functionality or whatever. But one of the things I've realized from my time working on this stuff is that that trade-off was actually always false.
And so as an example: if you have a PathBuf, and that PathBuf holds a path that isn't a valid string, then that path does not get serialized as JSON properly, right? Or, going the other way, it won't get deserialized properly. So if you are ever deserializing paths, you are already imposing a restriction that those paths must be valid strings, right?
So you are not adding anything new here, and I think Camino is a real improvement for anyone who does that. I know that at Oxide we use it a bunch; I've used it a bunch. But yeah — if you want to handle paths and you don't already know that you need to handle every possible path, then you should consider using Camino.
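The point about fallibility can be shown with plain std: `Path::to_str()` returns an `Option` because a path may not be valid UTF-8, so every use site has to handle the failure. A Camino-style string-backed path pays that cost once, at construction. This `Utf8PathBuf` is a hypothetical minimal wrapper, not camino's real type, which offers the full path API.

```rust
use std::path::Path;

// Hypothetical sketch: a path that is guaranteed to be valid UTF-8.
struct Utf8PathBuf(String);

impl Utf8PathBuf {
    fn from_path(p: &Path) -> Option<Utf8PathBuf> {
        // The only fallible step happens here, at construction:
        // std's Path may not be UTF-8, so to_str() returns Option.
        p.to_str().map(|s| Utf8PathBuf(s.to_owned()))
    }
    fn as_str(&self) -> &str {
        // Infallible from here on -- no unwrap() scattered at every use site,
        // and serialization to JSON (a string) can never fail.
        &self.0
    }
}

fn main() {
    let p = Utf8PathBuf::from_path(Path::new("/var/tmp/demo.json")).unwrap();
    assert_eq!(p.as_str(), "/var/tmp/demo.json");
    println!("ok");
}
```

This is the "trade-off was always false" argument in miniature: if your paths must survive a round-trip through JSON anyway, the UTF-8 restriction was already in force, and the string-backed type just makes it explicit.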
Camino does a good job, in a great readme, of explaining why it exists, when you should use it, and when you shouldn't. Just to your and Eliza's earlier point, I think they do an excellent job describing the problem that it's solving.
I definitely need to be using this in three different places. Thank you so much, Rain.
Yeah, Camino rocks.
I do have one last one — I have a hard stop at 6:30, so I just really wanted to get this in. This is a crate that I really like because of its implementation and its sort of cleverness and beauty. And it is also an example of a thing where there is no single
right design for this category of data structure; instead you really have to pick the correct one for your use case — which this may or may not be — which is concurrent hash maps. And my personal favorite concurrent hash map is Jon Gjengset's evmap, which is an eventually consistent hash map. And the way it works is it's just sort of got two hash maps. And one of them you read from.
And that allows you to read from it without acquiring any kind of lock. And then there's one that you write to. And periodically, you swap them. And this is quite nice, because there's actually nothing scary going on in having two maps and a read-write lock. And if you choose to have them be only eventually consistent, you don't refresh the read replica on every write.
And even if you do, you still have something nicer than naively sticking one hash map inside of a read-write lock, because sometimes doing the write operation to the map will do a bunch more: you might have to allocate something inside the map, you might fill up a bucket and have to move things around. And all of that happens under a write lock that's only contended by writers, right?
And then the lock that also contends with reads just swaps two pointers, right? So the amount of time that a reader contends with that lock is substantially reduced relative to just putting one hash map inside of a read-write lock. But you're still contending with the reader, because you have said, I want to do this refresh operation on every write.
But you can also tune the consistency of the map and say, I actually don't want to do that — I want to do it periodically. And now you have a situation where you've reduced the contention with readers substantially, because only every 5 or 10 or 25 writes do you refresh the replica that's read from.
And this is just kind of neat, because I find it very beautiful in its sort of conceptual elegance, and depending on the particular need you have for a concurrent HashMap, it could be the right one, or it could be wildly incorrect for your use case. I just think it's fun.
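The scheme described above can be sketched in plain std Rust. This is a deliberately simplified, hypothetical version (the type name `EcMap` and its methods are made up): real evmap swaps two maps and replays an operation log rather than cloning on publish, and it is far more efficient. But the sketch shows the core idea — readers share an immutable snapshot, writers mutate privately, and the writer decides when a new snapshot becomes visible.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

// Hypothetical "eventually consistent map" sketch, not evmap's real design.
struct EcMap {
    // Readers briefly take the read lock, clone the Arc, and drop the lock:
    // the contended critical section is just a pointer copy.
    read: RwLock<Arc<HashMap<String, i32>>>,
    // Writers contend only with each other here, never with readers.
    write: Mutex<HashMap<String, i32>>,
}

impl EcMap {
    fn new() -> Self {
        EcMap { read: RwLock::new(Arc::new(HashMap::new())), write: Mutex::default() }
    }
    fn insert(&self, k: &str, v: i32) {
        self.write.lock().unwrap().insert(k.to_owned(), v);
        // Note: no publish here -- readers keep seeing the old snapshot.
    }
    // Make all pending writes visible. Call per-write, every N writes,
    // or on a timer: that choice is the consistency/contention knob.
    fn publish(&self) {
        let snapshot = Arc::new(self.write.lock().unwrap().clone());
        *self.read.write().unwrap() = snapshot; // brief lock: one pointer swap
    }
    fn get(&self, k: &str) -> Option<i32> {
        let snapshot = Arc::clone(&self.read.read().unwrap());
        snapshot.get(k).copied()
    }
}

fn main() {
    let m = EcMap::new();
    m.insert("x", 1);
    assert_eq!(m.get("x"), None); // written but not yet published
    m.publish();
    assert_eq!(m.get("x"), Some(1)); // now visible to readers
    println!("ok");
}
```

The write path can rehash, allocate, and shuffle buckets at leisure, because none of that happens under the lock readers touch — exactly the property described above.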
Yeah, and in particular, this is going to be an especially good fit if you've got many, many, many readers. Right. I agree. And performance is important. And it's a structure you want to update, but you're willing to have some control over when those updates are seen by the readers. Right.
You don't need them to be always... the eventual... because the thing I also like — and I'll let you tell me if I'm wrong, this is just from reading the description — it sounds like you've got some control over it. Eventually consistent is not just like, well, it may be a day or two. You've got some control over when that actually happens.
Yeah, the thing that I neglected to mention is I believe there's a way to explicitly say, right now I want to synchronize the two replicas, as well as you can set an interval or a number of writes after which you will refresh. I haven't used this in quite some time. I don't remember the API for it. But the idea of it has stuck with me as long as I've known about it.
Yeah, no, I like it. I like it. And although nowhere near as sophisticated as this, I also do love indexmap and multimap, two very, very simple crates that are very useful. Indexmap being where you can actually iterate over entries in the order in which they were inserted into the hash map, which I think is very helpful.
And then multimap allows you to have multiple values for a particular key, which is also very helpful. And again — very simple crates, but very, very useful.
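The multimap idea is small enough to sketch with std alone: a `HashMap` whose values are `Vec`s, so one key can hold several values. The multimap crate essentially wraps this pattern in a more convenient API; sled names below are just example data.

```rust
use std::collections::HashMap;

fn main() {
    let mut mm: HashMap<&str, Vec<&str>> = HashMap::new();

    // entry().or_default() appends without clobbering earlier values --
    // this is the core multimap operation.
    mm.entry("sled").or_default().push("gimlet");
    mm.entry("sled").or_default().push("cosmo");
    mm.entry("switch").or_default().push("sidecar");

    assert_eq!(mm["sled"], vec!["gimlet", "cosmo"]);
    assert_eq!(mm["sled"].len(), 2);
    println!("ok");
}
```

The crate's value is ergonomic: insert, iterate-per-key, and remove operations that hide the `Vec` bookkeeping.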
This looks neat.
Well, Eliza, especially since you've got to run at 6:30 — did you get all your crates out there? Do you have any last crates you need to get in?
That's most of my list. The rest was... Oh, I wanted to mention the bytes crate, which is a terrible name for a wonderful library that many of you have probably already encountered, perhaps unknowingly, because if you use Hyper, you are secretly using this. Bytes is something from the Tokio project, and what it is... Oh, Sean's here in the chat. Sean can talk lots about bytes.
Bytes is essentially a reference-counted byte buffer. So it's like an Arc<Vec<u8>>, except that you can take slices of it, and the slices are also owned objects that participate in the reference count of the whole buffer. This is very nice if you want to read data from the network and then parse it into something, and you want to take slices out of it.
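The refcounted-slice idea can be sketched in plain std. The names here are hypothetical, not the real bytes API (the real `Bytes` avoids this double indirection and does much more), but it shows the key property: a sub-slice is an owned value that keeps the whole allocation alive, so parsed fields can outlive the original handle without copying.

```rust
use std::ops::Range;
use std::sync::Arc;

// Hypothetical sketch of a bytes-like type.
#[derive(Clone)]
struct Bytes {
    buf: Arc<[u8]>,      // the one shared allocation
    range: Range<usize>, // this view's window into it
}

impl Bytes {
    fn from_vec(v: Vec<u8>) -> Bytes {
        let len = v.len();
        Bytes { buf: v.into(), range: 0..len }
    }
    // An owned sub-slice: bumps the refcount instead of copying bytes.
    fn slice(&self, r: Range<usize>) -> Bytes {
        let start = self.range.start + r.start;
        let end = self.range.start + r.end;
        assert!(end <= self.range.end);
        Bytes { buf: Arc::clone(&self.buf), range: start..end }
    }
    fn as_slice(&self) -> &[u8] {
        &self.buf[self.range.clone()]
    }
}

fn main() {
    let request = Bytes::from_vec(b"GET /instances".to_vec());
    let method = request.slice(0..3);
    let path = request.slice(4..14);
    drop(request); // buffer stays alive: method and path still hold refs
    assert_eq!(method.as_slice(), b"GET");
    assert_eq!(path.as_slice(), b"/instances");
    println!("ok");
}
```

This is exactly the HTTP-parsing use case mentioned next: the request path and each header can be owned sub-slices of the single buffer the network read filled in.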
for, like, here's the HTTP request's path, and its headers can all be sub-slices of the one buffer that all of the bytes landed in. And I think bytes is just a really lovely library, really nicely done. It's also the foundational building block under a crate that Rain and I collaborated on in the past, which is buf-list, which is just some code that...
I think Rain asked me how to do something, and I referenced some code that had been written, probably by Sean McArthur, within an application, and Rain just went and turned it into a library that we used at Oxide. And I'm going to let Rain talk about that.
Sure, yeah. So buf-list was just something that... actually, it was your code — I'm pretty sure git blame says it's you, Eliza. So, I love bytes because it presents this unified interface. Bytes comes with a type called Bytes, which — it uses dynamic dispatch under the hood, but it is a type that represents a contiguous sequence of bytes.
Bytes also comes with a trait called Buf, and the Buf trait does not require the sequence of bytes to be contiguous. So you can imagine a different implementation, which is actually a segmented list, or a segmented queue, which ends up being the right data structure for these sorts of byte sequences. And buf-list is exactly that segmented queue. And I might have talked about it...
I think I talked about it in the episode where we talked about proptest and verification, but that was where I ended up writing a cursor type over it — one that can essentially navigate this queue — and used proptest on it, and ended up finding six different bugs, because, like Eliza, I find it very, very hard to reason about these things by myself.
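The segmented queue plus cursor being described can be sketched in plain std. This is illustrative only — buf-list's real types implement the bytes `Buf` trait and its cursor is considerably more involved (hence the proptest-found bugs) — but it shows the shape: a queue of chunks, and a cursor that reads across chunk boundaries as if the bytes were contiguous.

```rust
use std::collections::VecDeque;

// Hypothetical sketch of a buf-list-like segmented queue of byte chunks.
struct BufList {
    chunks: VecDeque<Vec<u8>>,
}

// A cursor that walks the chunks as one logical byte stream.
struct Cursor<'a> {
    list: &'a BufList,
    chunk: usize,  // which chunk we're in
    offset: usize, // position within that chunk
}

impl<'a> Cursor<'a> {
    fn next_byte(&mut self) -> Option<u8> {
        while self.chunk < self.list.chunks.len() {
            let c = &self.list.chunks[self.chunk];
            if self.offset < c.len() {
                let b = c[self.offset];
                self.offset += 1;
                return Some(b);
            }
            // Chunk exhausted (possibly empty): advance to the next one.
            self.chunk += 1;
            self.offset = 0;
        }
        None
    }
}

fn main() {
    // Empty chunks are the kind of edge case proptest is good at finding.
    let list = BufList {
        chunks: VecDeque::from(vec![b"he".to_vec(), vec![], b"llo".to_vec()]),
    };
    let mut cur = Cursor { list: &list, chunk: 0, offset: 0 };
    let bytes: Vec<u8> = std::iter::from_fn(|| cur.next_byte()).collect();
    assert_eq!(bytes, b"hello"); // chunk boundaries are invisible to the caller
    println!("ok");
}
```

Even in this toy version, the empty-chunk case is easy to get wrong — which is a small taste of why property testing found six bugs in the real cursor.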
Wow, that's cool.
And that's the buf-list crate, right? Yes.
Yeah, that is great, Eliza. I ended up writing the very-incorrect-at-first, but now fully correct, cursor implementation. That was my contribution to it.
That's very cool. Rain, are there any other crates that you've got on your list here?
Yeah, the last one I actually wanted to mention — because I think it deserves a real shout-out — is Winnow. Maybe this came up in the chat earlier, but... so, I got a degree in computer science, and one of the requirements is a compilers class.
And I hated writing compilers, and I hated writing parsers — that was my least favorite class of the whole thing. Since then, I've had to implement parsers a few times, and each and every time it's been miserable. And nom — so I ended up using nom for something. And nom, I think, is a great library.
There's a whole bunch of trade-offs across all the different libraries. But I ended up using nom for something, and I thought nom was okay. Winnow feels like the first time writing a parser was a joyful experience, which is not something I ever thought I would say about a parser library. So I did want to give a special shout-out to Winnow. Ed Page has done a lot of work on this stuff.
And Winnow is absolutely... I think if you want to write something parser-shaped, you should either use Winnow, or, if you want to do your own thing, you should look very closely at Winnow, see what it does, and use that as inspiration.
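To give a flavor of the parser-combinator style that winnow (and nom before it) make pleasant, here is a hand-rolled miniature in plain std Rust. This is not winnow's actual API — its real parsers mutate the input and have rich error handling — but the shape is the same: a parser is a function from input to (remaining input, value), and combinators glue small parsers into bigger ones.

```rust
// A parser returns the unconsumed rest of the input plus the parsed value,
// or None on failure. (Hypothetical types; winnow's real ones differ.)
type PResult<'a, T> = Option<(&'a str, T)>;

// Match a literal prefix.
fn tag<'a>(t: &'a str) -> impl Fn(&'a str) -> PResult<'a, &'a str> {
    move |input| input.strip_prefix(t).map(|rest| (rest, t))
}

// Take as many ASCII digits as possible (at least one) and parse them.
fn digits(input: &str) -> PResult<'_, u32> {
    let end = input.find(|c: char| !c.is_ascii_digit()).unwrap_or(input.len());
    if end == 0 {
        return None;
    }
    Some((&input[end..], input[..end].parse().ok()?))
}

// Combinator: run two parsers in sequence, keeping both results.
fn pair<'a, A, B>(
    p1: impl Fn(&'a str) -> PResult<'a, A>,
    p2: impl Fn(&'a str) -> PResult<'a, B>,
) -> impl Fn(&'a str) -> PResult<'a, (A, B)> {
    move |input| {
        let (rest, a) = p1(input)?;
        let (rest, b) = p2(rest)?;
        Some((rest, (a, b)))
    }
}

fn main() {
    // Parse something like "v12" into the number 12.
    let version = pair(tag("v"), digits);
    let (rest, (_, n)) = version("v12-beta").unwrap();
    assert_eq!(n, 12);
    assert_eq!(rest, "-beta");
    println!("ok");
}
```

The joy being described comes from composing dozens of such tiny parsers rather than hand-writing one big state machine; winnow adds streaming input, error recovery, and the debugging facilities mentioned below.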
Well, just looking at it — I mean, there's a really complete tutorial on it. This is a very full, complete crate.
Yeah, it's one of those things, right, where it says 0.6 or whatever, but it is too high quality to just be treated that way. I think it is a very, very mature crate. I've used it. I know a bunch of other folks at Oxide have used it.
I'm pretty sure I pointed Rai to it, and he was really excited, and he ended up using it, and he was pretty happy with it. So, yeah, Winnow is my shout-out.
Yeah, that's a great one. That's a great one, and maybe a good one to end on. I love that Winnow has a chapter on debugging — that's very cool. I mean, obviously I'm a sucker for anyone talking about debugging their crate or their parser. Well, Rain, thank you very much. Eliza, thank you in absentia — and thanks for running the chat.
Adam, this is great. We ended up with a lot of great ones here.
This is great. I kind of can't believe we haven't done this before, and we're almost certainly going to do it again. I feel like it's a good pairing with our books-in-the-box
annual tradition, but this is a good one to come back to. I think you're right — this is one we've got to come back to. And next time, I will have heard of petgraph, so I get to be with the cool kids, which is very nice. And we can do some out-loud readings from Inside UFO 54-40, perhaps. Well, Rain, thanks again.
Steve, thank you as well, of course. And yeah, Adam, in two weeks it's going to be Ratatui. I'm not sure if we're going to do an episode next week or not — it's a holiday here in the US, so we'll figure that out. But stay tuned. All right. Thanks, everyone.