
Pablo Galindo

👤 Person
238 total appearances


Podcast Appearances

Like, for instance, if you're making a JIT for a dynamic language, one of the biggest challenges is not only adding the infrastructure itself for a particular CPU architecture; it's that you need to do it for all of them, and for every single OS, because at the end of the day you're writing machine code, and machine code depends on your CPU

architecture and your platform, so you need one per combination. So the JIT is not "the JIT," or as we like to call it, "legit." You need one for Windows, one for macOS, one for Linux, but also one for AMD64, one for AArch64, one for ARMv7. So you can imagine how hard this actually is, because you're basically implementing a compiler, right?
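To make the combinatorics concrete, here is a quick sketch; the OS and architecture names are just the ones mentioned above:

```python
import itertools

# In a traditional hand-written JIT, each OS/architecture pair
# would need its own backend, written and maintained by hand:
systems = ["Windows", "macOS", "Linux"]
arches = ["AMD64", "AArch64", "ARMv7"]

backends = list(itertools.product(systems, arches))
print(len(backends))  # 9 combinations to implement and maintain
```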

But this approach that Brandt is taking not only has the advantage that it basically leverages an existing compiler behind the scenes, in this particular case LLVM, but also that it does the work up front. So it doesn't happen at runtime: all that assembly code has been generated before, and the only thing we need to do at runtime is stitch those things together. So we have these templates.
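Here is a toy Python sketch of that "stitch the templates together" idea. It is purely illustrative: the template bytes below are invented placeholders, whereas the real JIT works on genuine machine-code stencils emitted by Clang at build time.

```python
import struct

# Pretend these byte strings were produced at build time by running a C
# compiler over small "stencil" functions, one per bytecode operation.
# 0xDEADBEEF marks a 4-byte "hole" where a runtime value gets patched in.
HOLE = b"\xef\xbe\xad\xde"  # little-endian 0xDEADBEEF

TEMPLATES = {
    "LOAD_CONST": b"\x48\xb8" + HOLE + b"\x50",  # made-up bytes, not real code
    "JUMP": b"\xe9" + HOLE,
}

def copy_and_patch(op, value):
    """Copy the pre-built template for `op` and patch its hole with `value`."""
    code = bytearray(TEMPLATES[op])
    offset = code.find(HOLE)
    code[offset:offset + 4] = struct.pack("<I", value & 0xFFFFFFFF)
    return bytes(code)

def stitch(ops):
    """Concatenate patched templates into one buffer, as the JIT stitches stencils."""
    return b"".join(copy_and_patch(op, val) for op, val in ops)

buf = stitch([("LOAD_CONST", 42), ("JUMP", 0x1234)])
print(buf.hex())
```

In the real implementation the patched buffer is actual executable machine code; here everything stays inert bytes, but the copy, patch, and stitch steps are the same shape.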

So Clang, or rather LLVM, is not a runtime dependency. It's just a build-time dependency: you build all this assembly code using the native compiler for your platform. So if you're on some forsaken platform, like, I don't know, AIX or something like that, it will work as long as you're able to run Clang there. So that's great, but also,

and this is really cool, and something most JITs need to implement themselves: we are able to leverage an optimizing compiler for this particular workload. So not only does that assembly code work for every architecture, because someone else actually implemented the backend, but we can also leverage all the optimizations that everybody using LLVM gets.

They're the same optimizations that Rust is using; Rust still uses LLVM these days. And if you write the IR, the intermediate representation, correctly, then you're able to leverage that well, which is, you know, easier said than done.

But the idea is that now you can use this common body of many years of compiler research just to make your code faster, and you can leverage it immediately, as opposed to having to reimplement all those things ourselves, like SSA and all that stuff. Now you just run Clang and you get some
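As a tiny taste of what "free" compiler optimizations look like, even CPython's own bytecode compiler folds constant expressions before code ever runs; LLVM applies far deeper rewrites of the same flavor (SSA-based folding, dead-code elimination, instruction selection) to the JIT's templates. A minimal illustration:

```python
import dis

# CPython's bytecode compiler folds constant arithmetic at compile
# time: the expression below is reduced to 42 before execution.
code = compile("x = 2 * 3 * 7", "<demo>", "exec")
print(42 in code.co_consts)  # True: the folded result is stored as a constant
dis.dis(code)                # the disassembly loads 42 directly
```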

super cool assembly out. We just need to stitch it together and fill in the symbols and whatnot. But you can get results much, much faster than if we had to implement a full-blown JIT for Python ourselves. So, very excited. It's like a JIT factory, yeah? Right, you guys use factories in Python: a JIT template factory, yeah.

Well, this guy implemented that idea. So what do you think he's going to say?

Well, speaking as the release manager of an earlier version now, since I released 3.10 and 3.11: this thing has worked really well, I think. And just to be clear, so that Łukasz doesn't kill me after we finish the podcast: I think it's a positive change, right? So in general, very good. I think it has brought some predictability.