
Ryan Worrell

Appearances

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1012.043

And the time that it takes just to copy the data around from machine to machine when you're scaling up or scaling down the cluster can be hours or days, depending on how dense you're running the machines. Some of that is alleviated with the new tiered storage stuff where the older data is moved to object storage, but that part doesn't alleviate the inter-AZ networking costs.
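
A quick back-of-the-envelope sketch of why that inter-AZ cost matters; the throughput, replication factor, and per-GB rate below are illustrative assumptions, not figures quoted in the episode.

```go
// Rough inter-AZ transfer cost for a self-hosted Kafka cluster.
// All of the numbers here are assumptions for illustration.
package main

import "fmt"

func main() {
	const (
		writeGBPerSec     = 0.1  // assume 100 MB/s of producer traffic
		replicationFactor = 3    // typical Kafka replication factor
		azTransferPerGB   = 0.02 // assume roughly $0.01/GB out + $0.01/GB in across AZs
		secondsPerMonth   = 30 * 24 * 3600
	)

	// Each byte produced crosses an AZ boundary roughly (replicationFactor - 1)
	// times for replication, ignoring producer and consumer traffic for simplicity.
	crossAZGB := writeGBPerSec * float64(replicationFactor-1) * secondsPerMonth
	fmt.Printf("~%.0f GB/month across AZs, ~$%.0f/month in transfer fees\n",
		crossAZGB, crossAZGB*azTransferPerGB)
}
```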

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1036.984

And there's another post on our blog about tiered storage and Kafka if people are interested in learning more about that topic.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1047.071

Apache Kafka?

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1048.572

Yeah. The project is managed by the Apache Foundation and has a variety of contributors across a ton of companies. And I would say it's a fairly healthy example of an open source product in terms of like having a big community.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1091.084

So there are a lot of practical challenges with improving a large open source project with a lot of users and a lot of dependent parties, I should say. Not even necessarily just users, but stakeholders of all kinds. Making large, sweeping changes is essentially impossible. The amount of code churn required to...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1115.987

take open source Kafka and get it to something resembling the architecture of WarpStream is just not going to happen in any reasonable amount of time. That's the first part. Even if you asked purely abstractly, with no financial interests involved, how would you do this? It would be very hard, practically.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1135.131

The second reason is that WarpStream makes a pretty different set of trade-offs than the open source project does in terms of the environment that we expect users to run in. Now, I think those trade-offs are correct for the world that exists today, but in the abstract, it is different than the open source project. So WarpStream stores data only in object storage. That's step one.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1159.066

You need an environment that has object storage. And then step two is that we run a control plane for the cluster, which in the open source world, the comparison would be kind of like somebody running ZooKeeper or KRaft, which is the replacement for ZooKeeper inside of the open source project.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1176.812

It's kind of as if we're running that for you remotely, and then you're running the agents, as we call them, which are the replacement for the Kafka broker, inside your cloud account. So there's a very specific topology that we're prescribing to our customers as well. That's different.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1192.099

That probably wouldn't fly in an open source environment, or at least it would potentially make it even more challenging to run. I think those are probably the two biggest reasons why we couldn't just improve Kafka: it would be too hard, practically, to make improvements.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1204.973

And then also, we're making trade-offs around how we see the world existing today and how we think it's going to continue to exist in the future, and a lot of the stakeholders in the open source product may not agree with our assessment there, basically.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1281.158

Yeah, the way that I like to explain that, the networking cost side, is that when you're renting space in a colo or you have your own data center, you're implicitly paying for what is kind of a fixed capacity resource. It has a very high fixed capacity, but you are essentially paying for a resource that has a fixed capacity without doing a bunch of capital improvements to your data center.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1304.857

Whereas if you're in the public cloud, you can show up and put a credit card down and start moving gigabytes a second across the network without asking anybody for permission, nothing. So you're paying kind of a tax for that flexibility of being able to show up without asking anybody, all of a sudden start moving a ton of data.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1325.671

And especially in terms of how spiky you can do it: you can write 100 gigabytes a second for one minute and never pay Amazon any money again. They have to do some capacity planning on their end, just like they do for every other service, and that's why they charge higher on-demand rates for EC2 instances than if you go and buy a random server off the internet and put it in your house.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1351.785

The cost looks very different. Now, whether that cost is right, whether that reflects real economic realities, I don't think anybody can say without being inside of Amazon, but I think there's a pretty logical rationale for why it exists that way, because there are people that will consume bandwidth in a very different way.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1368.217

You have to think about the worst case scenario users, basically, of your service, the people that you might even call it abusers of your service in terms of your cost profile. So I think that's why, as you're saying, you're correct that LinkedIn can just decide to use Kafka in a different way internally to match their ability to provision infrastructure.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1390.806

And Amazon can't really force you to do that in any way other than just charging you more money for it. So that's what they do.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1404.81

Yeah. So Richie and I met a little over five years ago now at a conference. We met at Percona Live, I think it was 2019, in Austin. And he was working at Uber at the time. And yeah, so we did eventually both end up joining Datadog, but that was a little later.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1437.048

Yeah, so my co-founder, Richie, and I, after he left Uber, we started working on a prototype of a system. The idea was basically a Snowflake for observability data. That was the elevator pitch. And we were going around pitching that to investors at the time, and that's how we got to know some of our investors in WarpStream today; we met them back in those days. And...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1465.783

That eventually caught Datadog's attention, and we ended up joining Datadog together to build that system, Husky. Some of our current colleagues at WarpStream were also there at Datadog building that system with us. Basically, the idea there was to replace the legacy system inside of Datadog for a lot of the kind of

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1491.127

basically anything that you can think of that's not pre-aggregated time series metrics. The idea was to think of it as timestamp plus JSON. That was the data model, basically. And we wanted to move all that data to object storage for a ton of different reasons, similar to the reasons why WarpStream is useful.
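
A minimal sketch of that "timestamp plus JSON" data model in Go; the type and field names are made up for illustration and are not Husky's actual schema.

```go
// Event is the "timestamp plus JSON" shape described above: a timestamp plus
// an arbitrary bag of attributes.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Event struct {
	Timestamp time.Time              `json:"timestamp"`
	Attrs     map[string]interface{} `json:"attrs"` // arbitrary JSON payload
}

func main() {
	e := Event{
		Timestamp: time.Now(),
		Attrs: map[string]interface{}{
			"service": "checkout",
			"level":   "error",
			"msg":     "payment timeout",
		},
	}
	b, _ := json.Marshal(e)
	fmt.Println(string(b))
}
```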

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1516.383

Yeah, over the three and a half years that my co-founder and I were there, we migrated all of the products that were using the legacy system over to Husky.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1549.147

Yeah, we started from scratch and writing it in Go.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1581.701

Yeah, there's definitely a lot of high-level conceptual overlap. The systems are extremely different, because one looks more like an OLAP database, and the other is, I mean, Kafka is more like a log. So there's some... very high-level conceptual similarity. And I think the thing that we really got the most experience with there was learning about object storage.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1606.316

So that's about where the similarities stop. But the deep experience of understanding how object storage works at scale in all of the major public clouds was a hugely valuable learning experience for us, so that when we left and we were doing the back-of-the-envelope math on whether we could make this thing work, we could lean on that experience.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1630.711

The experience with object storage that we learned there was pretty helpful. Now, people talk a lot about object storage nowadays, so understanding the characteristics of working with it is not an unknown thing anymore. But I'd say in 2019, that was a fairly different story.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1647.845

I think the only people that would know a lot about building high-performance systems on top of object storage, they were probably all either inside the public cloud providers themselves, or they were working at Snowflake or a similar company. The knowledge was not super well distributed at that time. Most people, when they think of object storage, they think of something that's super slow.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1668.02

They're thinking about it in terms of seconds of latency to do anything. And they just think you have to rework your... The numbers around it are very different than what people might think of off the top of their head. And that opens up a lot of design possibilities that you don't think of immediately.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1920.74

Yeah, it's not really one secret trick. I think it's just a conceptual framing: you have to think of it as if you had access to a very large, oversubscribed array of spinning disks. If you think about it like that, then how you design a system around it will make a lot more sense. So there's a couple different pieces of that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1950.012

Really large, like way bigger than your individual application. So like you have the world's biggest RAID 0 of all the disks ever. It's actually unlimited. So think about it that way. But also oversubscribed. The latency characteristics of it are highly variable. One request might take 10 milliseconds, and the other takes 50. And there's no discernible reason to you why that is the case.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

1976.852

It's just that is how it works. So you have to design around that a little bit in terms of retrying requests speculatively and that type of thing. But if you start from that framing of very large, cheap storage with variable latency characteristics, and you rework your application to make it work on top of that, then you've got the right framing.
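
Here is a small Go sketch of that design-around: hedging a slow object storage read by firing a second identical request after a delay and taking whichever finishes first. The 200 millisecond hedge delay and the fetch signature are assumptions, not WarpStream's actual implementation.

```go
// Hedged reads against storage with highly variable latency.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

type fetchFunc func(ctx context.Context, key string) ([]byte, error)

// hedgedGet fires a second identical request if the first hasn't returned
// within hedgeAfter, and returns whichever result comes back first.
func hedgedGet(ctx context.Context, fetch fetchFunc, key string, hedgeAfter time.Duration) ([]byte, error) {
	type result struct {
		data []byte
		err  error
	}
	results := make(chan result, 2) // buffered so the losing attempt never blocks

	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // tell the losing attempt to stop once we have a winner

	start := func() {
		data, err := fetch(ctx, key)
		results <- result{data, err}
	}

	go start()
	select {
	case r := <-results:
		return r.data, r.err
	case <-time.After(hedgeAfter):
		go start() // first attempt is slow; hedge with a second one
	}

	r := <-results
	return r.data, r.err
}

func main() {
	// Simulate an object store where some requests are mysteriously slow.
	slowFetch := func(ctx context.Context, key string) ([]byte, error) {
		select {
		case <-time.After(time.Duration(rand.Intn(500)) * time.Millisecond):
			return []byte("segment bytes for " + key), nil
		case <-ctx.Done():
			return nil, ctx.Err()
		}
	}
	data, err := hedgedGet(context.Background(), slowFetch, "segments/0001", 200*time.Millisecond)
	fmt.Println(len(data), err)
}
```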

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2000.475

The reason why it's so challenging for people today is that they spend all their time thinking about the fastest storage that's available today. They spend a lot of time thinking about persistent memory or NVMe SSDs, stuff like that. They think about that first when they're designing their application. How do I get the lowest possible latency?

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2023.323

Making your application work on that first and then trying to add object storage on top is a very popular thing that people try to do. They always call it tiered storage. Basically, every system that has that calls it tiered storage. And it's very hard to match the characteristics of those two things together going top down.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2042.316

Whereas going bottom up the other direction, starting with object storage and then layering stuff on top, it seems like it should be the same, but it's not. You don't end up making the same design decisions along the way. And that has a big influence on the overall characteristics of the system. And I can explain specifically what that means for Kafka in terms of tiered storage.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2064.064

So they were thinking about disks first, like local NVMe SSDs. That's usually what people are running on these days in the cloud. The way that that influences the design is that the way that they implement tiered storage is they just take those log files on disk that have all the records in them, and they copied them over to object storage. That solves a cost problem.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2087.865

If you never want to read that data again, you're good. That's cool. It's much cheaper now. But when you want to come back and read it, let's say that you wanted to read all of it, all of the data you've ever tiered off into storage, the way that works in the open source project is that all of that data has to be pulled back through one of the brokers.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2113.247

There's no way for you to parallelize that processing, because they just view it as this bunch of log files that I put into object storage. And with WarpStream, we've kind of decoupled the idea of the local storage being owned by one machine; now there's a metadata layer that says, these are all the files that exist.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2136.562

And then we have all these stateless agent things that can actually pull the data out of object storage for you. So you can scale up and down as quickly as you need to to read all that data out of object storage. Say you wanted to pull it all out: you can scale up temporarily for the hour that you want to run some big batch job and then scale back down at the end.
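
A small Go sketch of that scale-out-to-backfill idea: because the segments live in object storage and the readers are stateless, a historical read can be fanned out across however many workers you want and then torn down. The segment keys and reader function are placeholders, not a real WarpStream API.

```go
// Fan a backfill of object storage segments out across a worker pool.
package main

import (
	"fmt"
	"sync"
)

// readSegment stands in for "GET the object and decode its records".
func readSegment(key string) int {
	return len(key) // pretend this is the number of records read
}

func main() {
	segments := []string{"seg/0001", "seg/0002", "seg/0003", "seg/0004", "seg/0005"}
	const workers = 3 // scale this up for a big batch job, back down afterwards

	jobs := make(chan string)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for key := range jobs {
				fmt.Printf("read %d records from %s\n", readSegment(key), key)
			}
		}()
	}
	for _, s := range segments {
		jobs <- s
	}
	close(jobs)
	wg.Wait()
}
```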

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2155.275

With the open source tiered storage in Kafka, that's a lot harder, because they started with the local disk part, which makes sense because that's what existed before. It just means that when you're adding stuff on afterwards, the tiered, lower layer of storage is a secondary concern. It doesn't get as much love and attention

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2174.386

as the primary storage gets, and you end up with a very different system at the end.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2201.488

Yeah. So Kafka has, let's start with topics. Topics are basically just a name for mapping consumers and producers together. They agree on the name of a topic for where they're going to send the data to and where they're going to consume the data from. And within a topic, there are partitions. And a partition is basically just a shard to make that topic scalable.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2224.521

There are a lot of different ways to decide which shard you're going to write the data to. But let's just say, for now, you do it by hashing the key of the message and then routing it to the shard based on the hash of that key. So records with the same key will end up going to the same broker, the one that owns that partition, every time. So that's how it works in the open source product.
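
A minimal Go sketch of that key-based routing. Kafka's Java client actually uses murmur2 in its default partitioner; FNV is used below purely for illustration.

```go
// Map a record key onto one of a topic's partitions.
package main

import (
	"fmt"
	"hash/fnv"
)

func partitionFor(key string, numPartitions int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(numPartitions))
}

func main() {
	// The same key always maps to the same partition, so records for one key
	// stay in order on a single shard of the topic.
	for _, key := range []string{"user-42", "user-42", "user-7"} {
		fmt.Println(key, "->", partitionFor(key, 16))
	}
}
```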

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2247.467

The brokers own some set of partitions from a leadership perspective. And then there's also replicas of that that are just copying the data. And it's just other brokers that are the replicas for those partitions. So the broker will write that data that it receives from a producer client down to the local disk and replicate it out to the followers. And then

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2272.159

a consumer can come along and read the data that producer wrote, either from a replica or from the leader. But they're all coordinating around the fact that one of those brokers owns the specific partition that I'm interested in reading. So that's how it works in the open source product. And in WarpStream, we've decoupled the idea of ownership of a partition from the broker itself.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2301.499

We have a metadata store that runs inside our control plane that has a mapping of, here are all the files in object storage, and within those files, the data for this partition at this offset is here; it's in some section of a file in object storage.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2320.427

So any of our agents, which are like stateless brokers that speak the Kafka protocol to your clients, any one of those agents can consult the metadata store and ask: I want to read this topic partition at offset X, where do I have to go in object storage, potentially multiple places in object storage, to read that data?
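
A rough Go sketch of that kind of metadata lookup: the control plane tracks which object storage files hold which offset ranges for each topic partition, and a stateless agent asks it where to read. The types and field names are invented for illustration, not WarpStream's actual schema.

```go
// Metadata mapping a topic partition and offset to byte ranges in object storage.
package main

import (
	"fmt"
	"sort"
)

type TopicPartition struct {
	Topic     string
	Partition int
}

// FileRange says: offsets [StartOffset, EndOffset] of this partition live in
// this byte range of this object.
type FileRange struct {
	ObjectKey   string
	ByteOffset  int64
	ByteLength  int64
	StartOffset int64
	EndOffset   int64
}

type MetadataStore struct {
	ranges map[TopicPartition][]FileRange // kept sorted by StartOffset
}

// Locate returns every object storage range an agent must read to serve
// offsets >= fromOffset for the given partition.
func (m *MetadataStore) Locate(tp TopicPartition, fromOffset int64) []FileRange {
	all := m.ranges[tp]
	i := sort.Search(len(all), func(i int) bool { return all[i].EndOffset >= fromOffset })
	return all[i:]
}

func main() {
	store := &MetadataStore{ranges: map[TopicPartition][]FileRange{
		{"logs", 0}: {
			{ObjectKey: "files/0001", ByteOffset: 0, ByteLength: 1 << 20, StartOffset: 0, EndOffset: 999},
			{ObjectKey: "files/0002", ByteOffset: 0, ByteLength: 1 << 20, StartOffset: 1000, EndOffset: 1999},
		},
	}}
	for _, r := range store.Locate(TopicPartition{"logs", 0}, 1500) {
		fmt.Printf("read %s bytes %d..%d\n", r.ObjectKey, r.ByteOffset, r.ByteOffset+r.ByteLength)
	}
}
```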

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2342.685

But because the metadata store inside the control plane is handling the ordering aspect of it, essentially, you get the same guarantees as Kafka in terms of I have this message with this key that's routed to this topic partition, and I want them to stay in the same order because I'm writing them in a specific order. That ordering part is enforced by the metadata store inside the control plane.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2364.234

But the data plane part of actually moving all of those messages around is only inside the agents and object storage. So it lets you do that thing that I was saying before, where if you want to scale up and down, it's very easy to do that because you don't have to rebalance those partitions, which take up space on the local disk amongst the brokers in order to facilitate that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2391.066

In terms of being faster, it's faster at the fact that there is no rebalancing that happens. Because the data is always just in object storage somewhere. You don't have to do any rebalancing for it. That part of it is faster. There's obviously a trade-off when you do this in that the latency of writing to object storage is higher than writing to the local disk.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2409.735

So if you want your data to be durable, you have to wait for the data to be written to object storage first. So that's the primary trade-off somebody that's using Warpstream would be making is that they're comfortable with around 500 milliseconds at the P99 of latency to write data to the system.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2428.992

And then the end-to-end latency of like a producer sends data and then it's consumed by a consumer is somewhere between one to one and a half seconds again at the P99.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2448.515

So it's interesting that you use that word real-time, because we've talked to a ton of different Kafka users. And when you ask them, what is the end-to-end latency of your system today? A lot of them don't know the answer. They think that they know the answer: well, it's real-time. Yeah, they're either not measuring it, or they're measuring it in a weird and incorrect way.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2476.762

There's a lot of different ways that that can happen. But typically, the way that we've experienced is that if you ask an executive at the company that uses Kafka heavily, ask them, is your application latency sensitive? They'll say, of course. We're an extremely high performance organization. We love high performance systems.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2494.307

Obviously, the end-to-end latency couldn't be anything more than 50 milliseconds. That would be crazy if it were anything more than that. And then you make it a little bit further down the chain in the organization. You ask the application developer or the SRE who's actually on call for the thing or wrote the code. You ask them and they're like, I don't know.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2514.426

I hope that it's fast, but I'm not really sure. Or you ask them and you get an explicit answer that's very different than the answer that the executive gave you. Yeah. Realistically, there are a few applications that we come across that do need that low latency.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2531.129

And the primary example of that, I mean, there's a lot of this kind of application out there in different domains, but the good example that demonstrates it is credit card fraud detection. There are people out in the real world using credit cards, and you want to make a determination about whether a charge is fraudulent at the point in time that they're swiping the card.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2554.652

So that is necessarily a real-time thing. There's a user who's waiting out in the real world. And if Kafka is in the critical path, especially multiple hops through Kafka in the critical path, then a system that has higher latency, like WarpStream, would be harder to adopt. And there are other applications that meet this criteria.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2575.881

But basically, if the user is in the critical path of the request, then WarpStream is harder to adopt in the abstract. Obviously, some specific applications might be OK with higher latency than others, but that's the one that we see from time to time. When you strip all those out, though, the things that you have left are the more analytical type applications.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2599.442

Like the example I was talking about before, moving application logs around. Developers are pretty used to some delay between the log print statement running inside their application and being searchable inside wherever they're consuming their logs from. So the additional one second of latency there is typically a non-issue.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2622.916

And the reason why that's useful for us as a company at WarpStream is that those workloads are typically really high volume and they cost the user a lot of money. So our solution being more cost effective really resonates with them, because usually there's also a curve where the more data you're generating, the less valuable that data is per byte.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2647.172

So there's budget pressure to increase the efficiency of processing that data, and Kafka sticks out like a sore thumb in terms of that processing cost.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2660.084

So we can come in and say, hey, because the cloud providers don't charge you for bandwidth between VMs and object storage, and we store all the data in object storage, you're going to save this many hundreds of thousands of dollars a year on sending the dumb application logs that you're generating into the eventual downstream storage. That makes a lot of sense to them.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2685.286

So while we understand that we can't hit every possible application in the market with the shape that WarpStream is today, we're pretty happy with the set of use cases and workloads that we can target, because there are just so many of them out there and they happen to align with the budget-sensitive ones.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2715.654

So the writes are around 500 milliseconds at the P99. That's tunable. By default, we have the agents buffer

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2725.356

the records that your clients are sending in memory for 250 milliseconds before writing them to object storage, so that you just write fewer files to object storage, which is the primary determinant of the cost of the object storage component of the system, if you're not retaining the data for very long.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2739.941

But you can shrink that down all the way to 50 milliseconds, in which case the end-to-end latency, or sorry, the produce latency at that point would probably be ballpark 300 milliseconds at the P99.
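
A toy Go sketch of that buffering trade-off: records are held in memory for a flush interval and then written out as one object, so fewer PUTs are issued. The 250 millisecond figure matches what is described above; everything else, including the flush callback, is an assumption for illustration, and a real agent would only acknowledge the producer once the flush succeeds.

```go
// Time-based batching of records into a single object storage write.
package main

import (
	"fmt"
	"sync"
	"time"
)

type Batcher struct {
	mu      sync.Mutex
	pending [][]byte
	flush   func(records [][]byte) // e.g. serialize the batch and PUT one object
}

func NewBatcher(interval time.Duration, flush func([][]byte)) *Batcher {
	b := &Batcher{flush: flush}
	go func() {
		for range time.Tick(interval) {
			b.mu.Lock()
			batch := b.pending
			b.pending = nil
			b.mu.Unlock()
			if len(batch) > 0 {
				b.flush(batch)
			}
		}
	}()
	return b
}

func (b *Batcher) Append(record []byte) {
	b.mu.Lock()
	b.pending = append(b.pending, record)
	b.mu.Unlock()
}

func main() {
	b := NewBatcher(250*time.Millisecond, func(records [][]byte) {
		fmt.Printf("PUT one object containing %d records\n", len(records))
	})
	for i := 0; i < 1000; i++ {
		b.Append([]byte(fmt.Sprintf("record-%d", i))) // a steady trickle of producer traffic
		time.Sleep(time.Millisecond)
	}
	time.Sleep(300 * time.Millisecond) // let the final batch flush before exiting
}
```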

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2751.175

I said end-to-end instead of read because that's typically what people talk about in Kafka terms: they want to know, when a producer sends a message, how long does it take until a consumer can consume that message successfully? So that's what I mean by end-to-end, and that is one to one and a half seconds at the P99 for most of our users.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2779.952

So there really aren't that many downsides other than the latency. The latency is what actually enables all of the benefits of WarpStream, basically. The object storage is what enables a lot of the benefits. We have a couple of interesting features that are based on the fact that all of the data is in object storage. One of them we call agent groups.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2803.462

And agent groups let you take one logical cluster and split it up physically amongst a bunch of different domains. They could be different VPCs within the same cloud account, different cloud accounts, or the same cloud account but across regions, all by just sharing the IAM role for the object storage bucket between those different accounts.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2829.628

The alternative to this with open source Kafka is like setting up something crazy like VPC peering, which is extremely hard to do. And your security team will probably not be super happy if you try to ask them to peer a bunch of VPCs together because it introduces more security risks.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2846.747

So we have customers in production using this feature today. The example that we usually give is a games company that splits their production games account, where all the game servers run, from the analytics account, where they run a bunch of Flink jobs to process the data generated from the production account.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2867.651

And they run agents that just do produce, so just writes. They run that in the production account. And they run agents that just do fetch inside their analytics account. So they've kind of flexed the cluster across those two different environments. And all they had to do to set that up was share the IAM role on the object storage bucket instead of peering the VPCs together.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2888.669

So the fact that everything is in object storage opens up a ton of new possibilities, actually. Basically, the only downside of WarpStream is the fact that the latency is higher. Now, obviously, we're a new company. The product does not have the 13-year maturity of the open-source Kafka project. But just to speak of the operational...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2913.055

stuff and the cost stuff, WarpStream is a huge win on both of those.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2931.34

Yeah, so there are a number of projects and products out there that you can buy to give you an object storage interface in essentially any environment. Like there's the open source project, MinIO, and then basically every storage vendor on the market will sell you something with an S3 compatible interface if you're running in a data center environment.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2951.756

And because we work with S3, GCS, and Azure Blob Storage, we can essentially, you know, I shouldn't say connect to anything. If you had an NFS server, we could even make it work on that too. We don't have anyone doing that in production, and I wouldn't recommend it; I would recommend using the object storage interfaces. But we're pretty flexible in terms of the deployment topology.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

2986.616

So I think it would depend on where you're running the compute. If you were storing the data in R2 but you were running compute in AWS, you would get charged a lot for internet transfer as part of that. If you're running your compute in one of the providers that has free peering with R2, then yeah, you would get a nice savings there, and, you know,

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3010.731

you'd be able to move data reliably across, let's say, multiple regions of whatever providers have peered for free with R2, using WarpStream.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3091.443

Yeah, I think the demo was Richie's idea. It basically just starts up a producer and a consumer so that you can see something happening in the console. And yeah, it provides you a link. If you had run that locally on your laptop, it would have opened the link automatically in your browser for you.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3111.724

Yeah, so we even designed the little niceties like that. But the idea behind the demo is basically just to show people that it does something. Kafka is not an exciting technology to demo, so we're kind of limited there. It's even more boring than doing a demo for a relational database or something. But there is another mode that you can run that's called Playground.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3135.932

And Playground will let you start a cluster that doesn't have a fake producer and consumer running on it as a demo. It just starts a cluster for you temporarily and makes an account that expires in 24 hours. And you can take that Playground link and you can start...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3153.977

multiple nodes, like say one on my laptop and one on yours, and point it at R2, and we can have a cluster that spans our two laptops together. Like my co-founder and I did that before and posted a video of it on Twitter or something like that. But because the data is all in object storage and the compute part is stateless, it's actually, it's not that complicated to do.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3180.286

It's basically the thing we were talking about a second ago with R2, just connecting two laptops instead of two different regions or something like that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3193.516

Yeah, so there are three different commands, primarily, that people would run. There's warpstream demo, there's warpstream playground, and then there's warpstream agent. The agent one is what you would run in production to start an agent. And the playground one is how you start a playground.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3209.499

I think the playground even gives you, like it spits out in the output, the command that you would copy and send to somebody else to start it in another terminal. It's been a long time since I've played with it, so I may be remembering wrong. The reason why people like the demo

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3228.826

or I should say the Playground, is that it makes it easy, if you're a developer, to just start a cluster and use it for local development. If you use WarpStream in production, you want to use the same thing in your development environment just to ensure consistency.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3247.441

You can use Playground mode to create a cluster, and it will just go away when you stop using it, and there's no cost.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3288.726

Yeah, totally. A lot of people have found a lot of joy in the playground and the demo, because they're just cool.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3298.312

We also have a serverless version of the product that basically just gives you a URL that you can connect to over the internet, to fulfill a similar purpose, basically, for people who want to try it out without actually doing anything locally on their machine. I think we give new accounts something like $400 of credit when they sign up.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3316.595

So you can do a lot with that if you just want to play around without actually starting any of the infrastructure.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

336.452

Thanks, it's great to be here.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3395.522

We got there by just talking to people, basically. The number of developers out there who are using Kafka, it's really high. And we talked to a lot of them. And when we asked them, basically, what do you not like about Kafka? They would give us a bunch of different answers. But when we would ask them, if we could fix those problems for you, would you want to do that?

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3422.273

And it would involve essentially rewriting large parts of your application. It's a non-starter for people. And there are a bunch of other things out there in the world that integrate with Kafka, like Spark and Flink. And there's a bazillion open source tools out there that integrate with Kafka. You know, we have no influence on any of those things either, really.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3445.516

So it was kind of a choice that was forced upon us. There's really no way around it: Kafka has so much momentum behind it that it's pretty much impossible to get broad adoption of something that would be a replacement for it without having the exact same wire protocol, so you can use the exact same clients and stuff like that. It's a lot of work to maintain that compatibility.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3472.152

Thankfully, a lot of that work is front-loaded. It's just you do it once, and Kafka is not a particularly fast-moving open-source project, so they're not changing the protocol every day. Backwards compatibility is very good with Kafka, so thankfully it was mostly a one-time cost, but it's opened up a lot of opportunities because we are compatible

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3490.246

to even just doing basic stuff for the company, like being able to do co-marketing with other vendors of products that are compatible with Kafka. If we weren't compatible with Kafka, you know, we wouldn't be able to do that. And a lot of the open source tools that we would want to integrate with, like the

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3510.774

OpenTelemetry Collector or Vector, these kinds of observability agent tools, they can all write data to Kafka, and we inherit that benefit right out of the box. So it's been super important for us, basically, to have that compatibility.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3549.48

Yeah, so we have a number of large use cases in production today. I can't talk about very many of them, unfortunately, but there are WarpStream clusters out in the world processing multiple gigabytes a second of traffic, and not just one of them; there's a decent number of them at this point. And where we're having success in the market is basically

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3577.463

The large open source users who are, you know, they feel like the open source product is a bit too challenging for them to run. And there's budget pressure all over the industry today, especially in the, you know, in the corners that we're interested in, like in the observability and security areas. On the analytics side, there's a lot of budget pressure.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3598.237

So we're a pretty natural fit for those folks who are both tired of running the open source project and getting budget pressure to decrease their costs. We're having a lot of success there.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3655.461

Yeah, so I think that for Greenfield projects, there's two different branches of those. There's Greenfield products that are only Greenfield in the sense that they're trying to adopt Kafka for some goal. They're not Greenfield like the application didn't exist before. There's that aspect of it where they're just new users of Kafka.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3676.586

And then there are truly Greenfield projects where the project itself is new and also the choice to choose Kafka is new. And usually those products don't have a super high volume of data. It's the existing initiatives or applications within a company that process a lot of data but are not using Kafka for cost reasons where we are having more success.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3702.135

There's a product that I would love to talk about that won't quite be public by the time this episode is posted, but they're in that first category, where it's a large existing workload, but they were not using Kafka, for a bunch of different reasons, cost being one of them.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3723.869

And they're now a big WarpStream customer, because they saw that there are benefits to using Kafka for their application, but they just couldn't use the open source project for cost reasons. And now essentially they can. There's a lot of cool stuff that they can do now, that they couldn't do before, that Kafka enabled them to do.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3746.247

And WarpStream is their Kafka-compatible product of choice for those cost reasons. And they're starting to get some benefits from it now.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3799.804

Yeah. So we had a lot of back and forth initially when we were thinking about this specific issue. The conclusion that we came to is that in order to be successful commercially, we cannot release our product as open source. And we did not want to pull the kind of bait-and-switch, intellectually dishonest move that a lot of commercial open source projects have pulled in the last decade or

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3832.216

five years, in terms of either relicensing or changing the focus of the project drastically to benefit the primary commercial backer. And we just didn't think that it was... We're providing a lot of value by providing a solution that is dramatically lower cost and also compatible with the existing ecosystem.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3862.526

And the way that that works in practice means that you can switch away from WarpStream because you're not locked into it from an application perspective or a protocol perspective. So we're not locking you into something proprietary from an interface perspective.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3877.675

So it's actually relatively easy to switch away from WarpStream if you decided to in the future because you didn't like something that we did. But we're hopeful that the fact that we provide something that's dramatically lower cost and easier to use means that you won't switch away.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3892.603

And you'll continue to have the best of both worlds, so to speak, where there is an open source thing out there that obviously is going to continue to exist because it has a ton of users. But if you want to use our product to save money and have something easier to use, you can as well. And we will be able to continue to invest in making that product better and better over time because we are not

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3914.818

stuck in these kinds of middle-of-the-road outcome issues that a lot of commercial open source companies have, where they're forced a few years down the line to cash in all of their brand goodwill on a relicense in order to gain the commercial success that they wanted.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3933.615

By sticking to this model, we're hopeful that we'll be able to be a good citizen of the Kafka ecosystem in terms of making a product that's not incompatible and proprietary and steering everybody away. And we do put a lot of effort into testing clients. We find bugs in Kafka clients that are typically open source and make improvements there.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

395.212

Yeah, Kafka is both a very interesting and a very boring system. The easiest way to think about it is it lets you create topics and you can have producers that write messages into these topics and consumers that consume messages out of the topics. It's kind of like a publish and subscribe type deal.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

3957.962

But the core part of the product is not going to be open source.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4029.617

Runway well into the next decade... Yeah, so why not bootstrap?

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4040.565

That's a really good question. And I think that the... Take a step back from that question for a second, talking about the commercial open source stuff. This is obviously a little bit inside baseball, but as a part of going through that decision process, we talked to the founders of a lot of commercial open source companies. And we asked them,

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4060.609

let's say you were starting our company today, what would you do? And without hesitation, the answer we got was, I would not start it as a commercial open source company today. And there are a lot of different reasons that they gave for that. And I can't really give some of those reasons without potentially identifying who those people are. And I don't want to do that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4080.916

But the challenges of a commercial open source company today... it's not even just the hyperscaler cloud providers anymore taking your stuff and running it. That's obviously a concern, but you can get around that; the AGPL does a decent job of preventing some flavors of that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4099.594

The other issue is just like the competition within the category that they're building their product in is extremely high. And having your source code out there in the wild and letting everybody know your secrets essentially about how you made your product better.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4117.586

You lose a lot of the juice behind why you have these huge staffs of developers working on interesting things. That's not to say you can't protect that otherwise, like with software patents and stuff like that, but people don't have the appetite for software patents. It would cost a lot of brand reputation, I think, if

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4140.53

these commercial open source companies created a bunch of software patents and started enforcing them against each other, for example. It's a very challenging situation today. A lot of the companies that you might view as successful commercial open source projects, they might be successful in the iteration that they exist in today

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4159.401

or yesterday, in the case of the ones that changed licenses, where they have good adoption in the developer community, and they might have good success in the VC-funded startup segment of the world. But there is an inevitable push to go upmarket and to go after larger and larger customers, because it's effectively the only way to support growth.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

418.177

But the thing that makes it interesting is the fact that once you consume those messages, they're not deleted. So they're still stored inside the system and another consumer can go and read them again for a different purpose. Like if you have two different applications that are consuming the same data set, they can both equally consume those messages.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4183.065

The growth of what you can achieve within the small... If your customers are all small startups, even medium-sized startups, and developers playing around in their personal capacity or... Stuff like that. The revenue opportunity is just really small, unfortunately, for a lot of these businesses.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4208.518

It's much easier to sell a million-dollar-a-year contract to an enterprise than it is to get a million dollars of revenue out of a bunch of small and medium-sized businesses. So the temptation when the growth starts to slow down is, I need to go do that now. That's the first thing your investors are going to tell you: you need to go upmarket and get enterprise customers.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4230.585

If the product that you're selling them is support or a couple of features on top of an open source project, your ability to exert pricing pressure on that enterprise buyer, to get them to pay a higher price or to get them to pay at all, is pretty limited.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4248.601

In the case of a lot of these open source projects where they spent so much time making it good that the enterprise can just hire one person to maintain it internally and just move on with their life and run the open source forever and maybe pay you a peanuts support contract, essentially, not actually enough to support the business. It's just really hard.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4269.357

I completely understand where you're coming from and that it might have felt as if these companies were successful from the outside. And some of them definitely were. But just there is that inevitable pressure to keep the growth rate up. And the only way to do that is to go up market. And when you're going up market, you need to provide something that looks valuable.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4291.686

And if your project is open source and the alternative is hiring a developer or two to maintain it internally, you kind of have a cap on how much you can charge. And it's the same thing if you're offering a cloud version of an open source project, for example.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4306.431

The premium someone will pay for your cloud version, it may be lower than you expect if they can self-host, because they're always looking at that. They're looking at both sides of the coin. How much will it cost me to self-host this versus how much does it cost to use your cloud hosted version? And that calculus does not always come out in your favor as a vendor.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4325.499

And you may want to charge, or may have to charge, significantly more to make the numbers work on your side than what they think they can run it for internally. It's really challenging stuff, and we wanted to provide the best product possible with the best product experience possible.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4343.65

And we didn't feel like the shape of an open source, commercial open source company was the right way to do it without having a lot of these distractions about the things that I'm talking about right now come up along the way. And we didn't feel like it would be right to do that, the bait and switch thing that people are doing these days. We wanted to be honest, basically, from day one.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

439.47

Let's say that you have one application that does machine learning training and another that does alerting based on the same messages: you want to process the same data, but in two different applications. Kafka is a useful tool for that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4415.698

It's totally possible. Yeah. And you're exactly right. If one of our competitors came up with a better implementation tomorrow and it was...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4466.567

Yeah. I would have no, I would not harbor any ill will towards someone who decided to do that.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4506.125

And the reason why it doesn't bother me so much, basically, is the portion of the Kafka market that has been commercialized, because we have commercial competitors, obviously. Let's say somebody is paying a licensing fee or some other fee to use the product, not just hiring somebody to run it for them.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4530.409

The portion of that market that's been commercialized is very small. So there is so much greenfield market out there for us to commercialize, along with this constant, ever-increasing trend of things becoming more real-time. And these other tailwinds of more observability and security data being generated in the world, there's just...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

455.816

It also provides ordering for those messages so that if you need to implement an application where you send messages in a certain order and you want that order to be retained on the other side, Kafka also does that for you. Each message is assigned a unique offset within a partition of that topic, which is kind of like a shard.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4556.803

This market is just going to be so big in the future that I think it's unlikely to have a winner-takes-all dynamic, similar to the way that there are multiple large public cloud hyperscalers that all exist and are very profitable. And there's just so much of this market out there that we're not super concerned about any particular market

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4582.25

competitor, even if one were open source. There are a lot of other dimensions that we would hopefully be better at competing on, which you don't get just from the fact that the product is open source. Combined with the fact that the market is so huge, we're pretty happy with our position as it is today.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

476.924

And within that shard, if you process the messages in that partition again, you'll get them back in the same order every time. So you can implement something like state machine replication, or that type of thing where the ordering matters.
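
To make the replay point concrete, here is a small editorial sketch (same hypothetical "orders" topic and broker as above): assigning a single partition and rewinding to its beginning yields the records in the same order on every run, which is the property a deterministic state machine needs.

    from kafka import KafkaConsumer, TopicPartition

    def apply_to_state_machine(record):
        # Hypothetical deterministic handler; shown here as a print.
        print(record.offset, record.value)

    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    partition = TopicPartition("orders", 0)
    consumer.assign([partition])            # read this partition directly, no consumer group
    consumer.seek_to_beginning(partition)   # rewind to the earliest retained offset
    for record in consumer:
        apply_to_state_machine(record)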

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4943.692

So the reason why people raise money, and let me only speak for myself here: the right reason to raise money is that you want to go faster. That's basically why someone should raise venture capital, is that they have something that's working and they want it to go faster.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4962.044

My co-founder and I had so much conviction in what we were doing in terms of it being commercially successful that we knew on day one we would be able to go much faster if we raised money. So that's why we did it. There was never a period of time where we were guessing like, oh, do people need this? It was like very obvious to us from day one that we wanted to go as fast as possible.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

4990.733

And raising money is the way to do that, because we were able to hire a lot of people, many more relative to just the two of us, and pay them very well and make them happy. Hiring people that are good at distributed systems stuff is very expensive, and those types of people also really appreciate job security.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5016.826

So being able to have a bunch of cash in the bank, even if we're not spending it, is very important to those folks. For our internal stakeholders, you know, employees and founders, it makes things very comfortable to have that cushion, and it allows us to hire people that will make things go faster. And then on the complete other side of the coin, if you want to sell

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5040.009

products to enterprise buyers as two people without having raised any money, it's going to raise a lot of eyebrows if they want to put that in production as the backbone of their multi-billion dollar business.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5055.275

It's really hard. Whereas if we can walk into a meeting and say, hey, we've raised roughly $20 million from Greylock and Amplify Partners, who are our Series A and seed investors, respectively, that sidesteps a lot of really awkward conversations about, like, what's gonna happen to you founders if you get hit by a bus tomorrow or something?

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5077.437

Obviously that would be very bad for the company, but there is at least somebody else who cares and would like to see their investment continue to succeed. So the dilution stuff, obviously it's a good point, but you just have to think: are the odds of success higher, and will the eventual outcome be bigger, if I raise VC? If that is true, then I think it's worth doing.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5106.536

But if you're in a position where you don't know if your product is gonna be commercially successful, it closes a lot of doors to raise VC. Every further round that you raise makes it harder and harder to explore different kinds of exit opportunities that you might personally view as a success, but your venture investors may not.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

511.161

Yeah, the reason why it's useful is there just isn't a lot out there that fulfills those two main things. It's a publish and subscribe mechanism that's scalable, right? And then also, it lets you have different consumers process the same set of messages without one of the consumers deleting them.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5128.429

So it's definitely a balancing act, but you have to understand the game you're playing, basically, and walk into it with your eyes open.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5139.71

Yes. Very briefly, a long time ago, unsuccessfully, I did. Yeah. And in between that and starting Warpstream, my co-founder and I were considering raising money for the thing that we were doing before we joined Datadog. And that's how we got to know our seed investors at Amplify Partners. And we didn't have that conviction at the time to say, let's go raise money. This is going to be huge.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5170.416

In hindsight, we probably would have done very well had we chosen to raise VC and remain an independent thing and all that, instead of joining Datadog. But because we didn't have that conviction, we took the quote-unquote exit opportunities that were available to us at that moment, because we hadn't yet raised money. We were very flexible.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5192.101

So we were able to join Datadog and it worked out super well. We got to meet a bunch of interesting people and the project we were on was successful and super fun and all that stuff. But because we did have that conviction this time around and we wanted to go as fast as possible, that's why we chose to raise money this time around.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5252.067

It's been a long time since I've heard any Bobby Brown, but I do indeed a little bit.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5329.91

And that's only because we spent a lot of time thinking about it and a lot of time talking to folks who are day-to-day building commercial open source businesses that really brought our perspective to where it is today. And it's not to say that there are no possible opportunities to start a commercial open source company that would be successful today. There obviously are.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

534.343

There are a lot of queuing systems where, once you consume a message, it's just gone forever at that point. The purpose is to consume the message and then have it go away, not to reprocess it again in the future. There are a lot of use cases for Kafka, though. I'd say that the most broadly popular one is moving data from point A to point B, kind of like a dumb pipe.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5352.714

It's just that for our particular market and the strategy we were pursuing, it just wasn't going to work. I think I can put it a little bit more crisply: the segment of the market that we're going after is already price and cost sensitive. If we offered them the opportunity to run our product for free, the odds that we would be able to charge them almost any money would be pretty low.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5381.363

There are other markets out there that have completely different dynamics than this, especially if you're not trying to provide the low-cost solution. So I didn't mean to denigrate commercial open source companies. I was just saying that when we explained our strategy, basically, to these other commercial open source founders, they said, that's going to be hard. It's going to be very hard for you.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5406.579

So you should think about it before you choose to go down that path. And we chose this path because we think it's the one most likely to be successful for us. Also, I would personally be very upset if I had to do one of those license-change rug pulls. It would make me very sad, because I know it causes a lot of consternation and heartburn for people when those things happen.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5433.539

So we just wanted to be straight up with people from day one.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5447.836

Yeah. I mean, it's a general-purpose infrastructure building block, and Amazon has AWS, and Amazon has MSK as a competing product with WarpStream. So they very directly could just, you know, offer a new SKU of MSK that is the WarpStream one, if it were open source. That would be very challenging for us.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5479.924

Yeah, I mean, there are a number of companies out there that have talked about how they're doing this. I think the most notable of them would probably be Confluent's announcement of their Freight product. That's, you know, probably the splashiest announcement of any of them, where they're taking a similar direct-to-S3 approach as WarpStream does.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5511.563

And the product isn't available today for anybody to just go sign up for and do a comparison. But they've made an announcement, and I'm sure that's going to progress more in the future. I'm sure essentially every one of our competitors, if they haven't already started working on a similar storage engine, will. So I have no doubts that the cat is out of the bag, so to speak, on the idea.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

557.553

It's used a lot in observability and security-related workloads, where you have a lot of application servers that are generating logs, and you want to temporarily put those logs somewhere before you put them in something else, like you say you want to put them in Elasticsearch or something like that. Elasticsearch can be a little finicky.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5608.809

Yeah, I mean, there's a little slider that lets you turn on the breakdown mode of the comparison to open source Kafka running in three AZs or one AZ or comparing to AWS MSK. And we didn't even put a particularly big workload as the default on the pricing calculator. I think it's a pretty standard workload.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5633.278

And people are used to looking at big numbers when it comes to running Kafka for these kinds of observability and telemetry workloads. They just cost a lot. If you look a little bit further down the pipeline there, if they're sending the data to Elasticsearch or Snowflake or Clickhouse, they're probably paying significantly more for those things.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5658.972

So Kafka looks cheap in comparison, and then WarpStream looks cheap compared to Kafka. So we're very open about the fact that our product is designed to be more cost-effective.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5670.899

But we do offer additional, we call them account tiers, basically, where the things that enterprises want from you, the reason why they wanna pay you $10,000 a month is they want to be able to file a support ticket and have somebody reply to their support ticket extremely quickly. That's the thing that they're basically paying you for.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5693.045

That's the stuff that doesn't scale, basically, as you get bigger or your product gets better. Obviously, you might have fewer support tickets, but you still need humans to be able to respond quickly when somebody does file those support tickets.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5706.973

So our account tiers for Pro and Enterprise give customers a support response time SLA that they can count on, and today it's backed by the engineering team. If you're an enterprise customer and you file a priority-zero support ticket, which is just "my production cluster is down, I need help right away,"

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5730.182

that pages the engineering on-call rotation and gets you help as quickly as somebody can respond to the page. That's the type of stuff that people are basically paying for on top, and that's how we make enterprises trust us. Another reason to raise venture capital: you can hire enough people to run a 24/7 follow-the-sun on-call rotation to back those support response time SLAs.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

576.309

So you want to have Kafka, which is a much simpler system, in place as a temporary buffer to hold those log messages that you want to write to Elasticsearch, in case the Elasticsearch cluster is down or you're doing an upgrade or something like that. There are a lot of different reasons for it, but Kafka is pretty much the de facto standard for those kinds of workloads.
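
A rough editorial sketch of that buffering pattern, assuming the kafka-python and elasticsearch Python clients; the topic, index, batch size, and addresses are hypothetical. Offsets are committed only after a successful bulk write, so messages stay in Kafka if Elasticsearch is down.

    from kafka import KafkaConsumer
    from elasticsearch import Elasticsearch, helpers

    consumer = KafkaConsumer(
        "app-logs",
        group_id="log-indexer",
        bootstrap_servers="localhost:9092",
        enable_auto_commit=False,           # commit manually after indexing succeeds
    )
    es = Elasticsearch("http://localhost:9200")

    batch = []
    for record in consumer:
        batch.append({"_index": "logs", "_source": {"message": record.value.decode()}})
        if len(batch) >= 500:
            helpers.bulk(es, batch)         # error handling / retries omitted for brevity
            consumer.commit()               # advance offsets only after the write lands
            batch = []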

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5779.798

Sorry, I didn't hear the first part, your throughput number that you...

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5785.66

Five gigabits. Yeah. I mean, obviously, as you get up into these larger and larger scales... Well, first of all, 14 days is pretty long retention for most people for Kafka. Usually, because it's transitory data, I'd say three to seven days is a pretty typical retention.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5806.136

And if you're at these kinds of scales, you're probably not paying your cloud provider retail price for cross-AZ networking anymore. If Kafka was a big part of your bill, that would be probably one of the items that you would want to negotiate with your cloud provider. So the comparison doesn't get nearly as rosy if you've negotiated some discounts.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5828.703

But the way that you can kind of estimate what those discounts would look like is to switch the calculator from Kafka 3AZ to Kafka 1AZ, which reduces the inter-zone networking dramatically, and to turn on the single-zone consumers flag. Then the comparison doesn't look quite as good anymore.
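
For context (an editorial note, not from the episode): in open source Kafka, the mechanism I understand the "single-zone consumers" toggle to model is fetch-from-follower (KIP-392), where a consumer reads from a replica in its own zone instead of crossing AZs to the leader. A sketch of the settings involved, with the zone names as placeholders:

    # broker configuration (one value per AZ)
    broker.rack=us-east-1a
    replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

    # consumer configuration (Java client)
    client.rack=us-east-1a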

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5850.621

Then you turn it to one-day retention, and it goes to 86% savings versus 60% savings. So it's still big, but we understand that there are a lot of big Kafka workloads out there, and the savings don't always come out at 90% like that example does.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5873.153

But if we can deliver 75 to 80% savings, it's a compelling enough reason for someone to... There's a little bit of activation energy it takes to get people to do anything, and we're confident that being 75 to 80% cheaper is enough of that activation energy to get people to at least give us a shot.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5962.758

Yeah, we all at WarpStream know that that's, for us, a very important part of what we do. But it's always easier to walk into a sales conversation with the hard-facts numbers and not the... A lot of vendors use those exact attributes to attribute a lot of savings to their product, which is probably true.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

5987.508

But they feel a little bit more wishy-washy compared to the hard facts numbers. So that's why we lead with those in our pricing calculator. And obviously those are still things that we highlight when we're talking to potential customers to help them understand the value of the product. But we like to think of that as more like the icing on the cake stuff.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

600.45

And then when you get outside of observability and security, there are a lot of people building custom applications on top of Kafka, like an inventory management system for a warehouse, where you want to keep track of the real-time status of everything going on in the warehouse. Every time something changes, you might want to send

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

6013.656

And the cost savings is what we're promising them, basically. Everything else is just icing on the cake.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

6072.377

Yeah, this has been very fun. I was not expecting to talk about raising money at all during this conversation, but that was something that we spent a lot of time on. When you're building a company, you have to spend a lot of time thinking about strategic stuff that's not just writing code. And that one involved a lot of back and forth between my co-founder and me about how we were going to do things.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

6091.71

And we're very happy with our direction now, but it took the input of a lot of people to arrive at this conclusion. And we're very thankful for those people that made themselves available to us for learning more about commercial open source stuff, because we had never really even considered it before, and it was super important to learn along the way.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

616.616

messages to say, oh, this new batch of inventory has been added onto the shelves of the warehouse, or I'm taking things out. And then you're computing some type of live application based on that inventory data, to say, for example, that you need to replenish the stock when it goes below a certain amount. But you want to do that in real time so that you can react faster than just doing it once a day.
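
A toy editorial version of that inventory application: consume stock-change events and flag any SKU that drops below a reorder threshold. The topic name, message shape, and threshold are all hypothetical.

    import json
    from collections import defaultdict
    from kafka import KafkaConsumer

    REORDER_THRESHOLD = 10
    stock = defaultdict(int)

    consumer = KafkaConsumer(
        "inventory-events",
        group_id="replenishment",
        bootstrap_servers="localhost:9092",
    )
    for record in consumer:
        event = json.loads(record.value)              # e.g. {"sku": "widget-7", "delta": -3}
        stock[event["sku"]] += event["delta"]
        if stock[event["sku"]] < REORDER_THRESHOLD:
            print(f"replenish {event['sku']}: only {stock[event['sku']]} left")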

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

682.279

So I think there are probably two main criticisms that people have of Kafka. The first is that it's hard to run: as the operator, you have to have a lot of knowledge about how to run and use the open source project appropriately. And the second major issue is the cost.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

704.485

I'm sure we'll get into this, but the cost of running open source Kafka in the cloud is pretty high compared to what people expect it to be. If you think of it as a dumb pipe, you would expect to pay dumb-pipe-type rates for it.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

721.193

But given the fact that it requires triply replicating the data onto local disks, and most of the cloud providers charge you money for inter-zone replication, you end up paying a lot more than you'd expect, even if you're just storing the data temporarily.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

739.507

If you're using open source Kafka in AWS, for example, the minimum cost for a highly available 3AZ setup for the cluster is 5.3 cents per compressed gigabyte written into the cluster. That's just to do the replication part. The storage part is a whole other story; it depends on how long you want to store the data for. But if you're just starting out, that's your baseline cost.
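
One way to reconstruct that 5.3 cents figure (an editorial back-of-the-envelope, assuming the common retail rate of $0.01/GB in each direction for cross-AZ traffic and clients spread evenly across three AZs):

    cross_az_per_gb = 0.02                    # $0.01 out + $0.01 in per cross-zone copy
    produce_leg = (2 / 3) * cross_az_per_gb   # producer's partition leader is in another AZ ~2/3 of the time
    replication_leg = 2 * cross_az_per_gb     # leader replicates to followers in the other two AZs
    print(f"${produce_leg + replication_leg:.3f} per compressed GB written")   # ~$0.053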

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

763.246

It can get pretty expensive pretty quickly.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

787.612

That's definitely a thing that happens. I know of companies that do that, but just as the migration to public cloud over the last 10 years has only increased in velocity, that is becoming less and less popular, essentially, because it is indeed hard.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

807.257

And it's even harder when it's in your own data center, as opposed to the cloud, where you can just ask for more disks, and you get them right away. The cost situation is a little different there, too, because typically, the way that you're provisioning network in your own data center would not end up with a per-gigabyte cost.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

824.068

You amortize everything over how much data you're transferring inside your data center, but you're buying it in terms of hardware, so if your traffic goes up, your per-gigabyte rate doesn't scale linearly the way it does with Amazon. It's definitely still a thing people do, but it's less and less popular every day.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

846.206

Some people have strong opinions about the actual developer programming model of Kafka, and that it's a little hard to use sometimes. I think that's less of a big deal these days, as more tools have integrated with Kafka. That makes it easier to use Kafka than some other systems that might have a theoretically easier-to-use programming model, because everything speaks Kafka now.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

872.294

So those concerns are mostly trumped by the fact that it's the de facto standard. I think really what most people are concerned about, like if you don't use Kafka today and you're thinking about bringing it into your company, the two things that you're going to be concerned about are how hard is it to run and how much is it going to cost? Those are typically

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

890.734

people's two big blockers. It doesn't have anything to do with the fact that conceptually they have an issue with Kafka. It's those more practical things.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

912.699

It's a number of different things. I think the first one is yes, being responsible for anything that stores data on local disks, if you want to achieve high availability and high durability of your data, is challenging. It requires experienced SREs to, like... handle those types of failures when they do occur.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

935.345

But that, I think, can be dealt with because people do that with other systems all the time. But I think that most people's problems with Kafka come when they want to scale up and scale down the cluster in response to load. The open source project doesn't really give you much tooling when it comes to helping you manage that process.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

955.753

Like, for example, in the open source project, there's no automated tool to rebalance the data among the machines when you add or remove machines. That's kind of a table-stakes feature in a lot of other systems. If you're thinking about a distributed relational database, it would seem kind of silly if you had to run a script to move data between the nodes of the database.

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

976.421

But that is true of open source Kafka. There are now other tools that you can use alongside it that can take some of this work off of you, but they're not always the easiest to use either. It's not a self-balancing, self-managing thing like a lot of the distributed relational databases are; it's something that takes a little bit more hands-on work.
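
The "run a script" workflow being described is, as far as I know, the kafka-reassign-partitions.sh tool that ships with Apache Kafka. A sketch of how that manual rebalance typically goes; file names, topic, and broker ids are placeholders:

    # 1. List the topics to move in topics.json:
    #    {"version": 1, "topics": [{"topic": "events"}]}

    # 2. Generate a candidate assignment across the new broker set:
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate

    # 3. Save the proposed plan as reassignment.json, then execute and verify:
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassignment.json --execute
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassignment.json --verify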

The Changelog: Software Development, Open Source

Reinventing Kafka on object storage (Interview)

998.336

And another thing that goes along with that is storing data for a long period of time: they didn't add a tiered storage feature to the open source project until very recently.
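
For reference, the tiered storage feature mentioned here came to Apache Kafka via KIP-405 (early access as of 3.6). To my understanding, the relevant knobs look roughly like this, with a RemoteStorageManager plugin for your object store configured separately; the retention values below are illustrative:

    # broker configuration
    remote.log.storage.system.enable=true

    # per-topic configuration
    remote.storage.enable=true
    retention.ms=1209600000       # 14 days of total retention
    local.retention.ms=86400000   # keep only 1 day on local disks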