This interview is part of the Simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.
In this installment of the podcast, we talked to Steven Sklar (his private blog, X/Twitter) from QuestDB, a company producing a time series database for large IoT, metrics, observability, and other time-component data sets. He explains how they implemented their database offering, from building their own Kubernetes operator to how storage is handled.
Chris Engelbert: Hello, everyone. Welcome back to another episode of simplyblock's Cloud Commute Podcast. Today, I'm joined by Steven Sklar from QuestDB. He was recommended by a really good friend and an old coworker who's also at QuestDB. So hello, Steven, and good to have you.
Steven Sklar: Thank you. It's really a pleasure to be here, and I'm looking forward to our chat.
Chris Engelbert: All right, cool. So maybe just start with a quick introduction. I mean, we already know your name, and I hope I pronounced that correctly. But what else is there to say about you?
Steven Sklar: Sure. So I kind of have a nontraditional background. I started with a degree in economics and finance and worked on Wall Street for a little bit. I like to say on the first slide of most of my conference talks that my first programming language was actually Excel VBA, which I do still have a soft spot for. And I found myself on a bond trading desk, kind of reached the boundaries of Excel, and started working in C# and SQL Server, and realized I liked that a lot more than just talking to people on the phone and negotiating over various mortgage bonds and things. So I moved into the IT realm and software development and have been developing software ever since. I moved on from C# into the Python world, moved on from finance into the startup world, and I currently am at QuestDB, as you mentioned earlier.
Chris Engelbert: Right. So maybe you can say a few words about QuestDB. What is it? What does it do? And why do people want to use it?
Steven Sklar: Sure. QuestDB is a time series database with a focus on high performance and, I like to think, ease of use. We can ingest up to millions of rows per second on some benchmarks, which is just completely mind-blowing to me. It's actually written primarily in Java, which doesn't necessarily go hand in hand with high performance, but we've rewritten most of the standard library to avoid memory allocation, so, you know, it actually truly is high performance. We've also been introducing Rust into the code base. You can query the database using plain old SQL. And it really fits into several use cases, like financial tick-by-tick data and sensor data. I have one going on in my house right now, collecting all of my smart home stuff from Home Assistant. And I mean, yes, I've been here for around a year and a half, I want to say. And it's been a great ride.
Chris Engelbert: Right. So you mentioned time series. And I'm aware of what time series are because I've been at a competitor before that. So Jaromir and I went slightly different directions, but we both ended up in the time series world. But for the audience that may not be perfectly aware of what time series are: you already mentioned tick data from the financial background. You also mentioned Home Assistant and IoT data, which is great because I'm doing the same thing. For me, it's mostly energy consumption and stuff. But maybe you have some more examples.
Steven Sklar: Sure. Kind of a canonical one is monitoring and metrics. Any kind of data, I think, has a time component. And I think you need a specialized database for it. A lot of people ask, well, why not just use Postgres or any of the common databases? And you could, but you're probably not going to scale, and you're going to hit a point where your queries are just not performing. And time series databases, in many cases, ours in particular, I can speak to, are columnar databases. So they store data in a different format than you normally would see in a traditional database. And that makes querying, and actually ingesting data from a wide range of sources, much more efficient. And you kind of have to think of it as, I don't want to put myself on the spot and do mental math, but imagine if you have 10,000 devices that are sending information to your database every second. It's not that big of a deal. But maybe, let's say, you scale and you end up with a million devices. All of a sudden, you're dealing with tremendous amounts of data going into your database that you need to manage. And that's a different problem, I think, than your typical relational database.
Chris Engelbert: Right. And I think you brought up a good example. Most of the time when we talk about devices, as I said, I'm coming from a kind of similar background, it's not like a device just sends you a single data point. When we talk about connected cars, they actually send thousands to hundreds of thousands of data points: position information, all kinds of metrics about the car itself, the electronics, and all that kind of stuff. And that comes down to quite a massive amount of data. So yeah, I agree with you. An actual time series database is super important. You mentioned columnar storage. Maybe you can say a few words about how that is different from, I guess, your Excel sheet.
Steven Sklar: Sure. Well, I guess I don't know if I can necessarily compare it to my Excel spreadsheet, since that's its own weird XML format, of course. But columnar data, I guess, is different from, let's say, tabular data in your typical database. Tabular data is generally stored in a table format, where all of your columns and rows are kind of stored together, whereas in a columnar data store, each column is its own separate file. And that kind of makes it more efficient when you're working with a time component, because time is generally your index. You're not really indexing on a lot of things like primary keys. You're really just mostly indexing on time, like what happened at this point in time or over this time period. Because of that, we're able to optimize the storage model to allow faster querying and also ingestion. And just for clarity, I'm not a core developer. I'm more of a cloud guy, so I hope I got those details right.
Chris Engelbert: I think you get the gist of it. But for QuestDB, that means it still looks like a tabular kind of database. So you still have your typical tables, but the individual columns are stored separately. Is that correct?
Steven Sklar: Correct.
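To make that layout difference a bit more concrete, here is a toy sketch of the idea in Go: each column of a table goes into its own file, so a query that only touches one column never has to read the others. This is a simplification for illustration only; the file names and binary encoding are made up, and it is not QuestDB's actual on-disk format.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// writeColumn dumps a slice of fixed-size values into its own column file.
func writeColumn(name string, col any) {
	f, err := os.Create(name)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := binary.Write(f, binary.LittleEndian, col); err != nil {
		panic(err)
	}
}

func main() {
	// One table, two columns; in a columnar layout each column gets its own file.
	timestamps := []int64{1700000000, 1700000001, 1700000002}
	temperatures := []float64{21.5, 21.7, 21.4}
	writeColumn("timestamp.col", timestamps)
	writeColumn("temperature.col", temperatures)

	// A query like "SELECT sum(temperature)" only needs to scan temperature.col;
	// the timestamp file is never touched.
	f, err := os.Open("temperature.col")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	var sum, v float64
	for binary.Read(f, binary.LittleEndian, &v) == nil {
		sum += v
	}
	fmt.Println("sum:", sum)
}
```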
Chris Engelbert: Ok, cool. So you said you're a cloud guy. But as far as I know, you can install QuestDB locally, on-prem. You can install it into your own private cloud. And I think there is QuestDB Cloud, which is the hosted platform. Well, not "I think", I know that it is. So maybe, what is special about that? Does it have special features? Or is it mostly about the convenience of getting a managed database and getting rid of all the work you have to do when you run your own database, which can be complicated?
Steven Sklar: Absolutely. So actually, both. Obviously, you don't have to manage it, and that's great. You can leave it to the experts. That's already worth the price of admission, I think. Additionally, we have QuestDB Enterprise, which has additional features. And all of those features, like role-based authentication, replication (that's coming soon), and compression of your data on disk, are all things that you get automatically through the cloud.
Chris Engelbert: Ok, so that means I have to buy QuestDB Enterprise when I want to have those features on-prem, but I get them in the cloud right away.
Steven Sklar: Correct.
Chris Engelbert: Ok, cool. And correct me if I'm wrong, but I think from a client perspective, it uses the Postgres protocol. So any Postgres client is a QuestDB client, basically.
Steven Sklar: Absolutely, 100%.
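As a quick illustration of that, here is a minimal sketch in Go using the standard database/sql package with the lib/pq Postgres driver. It assumes a local QuestDB instance on the default PGWire port 8812 with the default credentials, and queries a hypothetical sensors table; adjust the connection string and SQL for your own setup.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // a standard Postgres driver; no QuestDB-specific client needed
)

func main() {
	// Default PGWire endpoint and credentials of a local QuestDB instance
	// (port 8812, user admin, password quest, database qdb); adjust as needed.
	db, err := sql.Open("postgres",
		"host=localhost port=8812 user=admin password=quest dbname=qdb sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Plain SQL over the Postgres wire protocol; "sensors" is a made-up table.
	rows, err := db.Query("SELECT timestamp, temperature FROM sensors LIMIT 5")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var ts time.Time
		var temp float64
		if err := rows.Scan(&ts, &temp); err != nil {
			log.Fatal(err)
		}
		fmt.Println(ts, temp)
	}
}
```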
Chris Engelbert: All right, so that means as an application developer, it's super simple. I can basically drop in QuestDB instead of Postgres or anything else. So yeah, let's talk a little bit about the cloud then. Maybe you can elaborate a little bit on the stack you're running on. I'm not sure how much you can actually say, but anything you can share will probably be beneficial to everyone.
Steven Sklar: Oh, yeah, no problem. So we run on AWS. We run on Kubernetes. And one thing that I'm particularly proud of is an operator that I wrote to orchestrate all these databases. Our model, which is not necessarily your bread and butter Kubernetes deployment, is actually a single-tenant model. So we have one database per instance. And when you're running Kubernetes, you kind of think, why do you care about what nodes you're running on? Shouldn't all that be abstracted away? And I would agree. We primarily use Kubernetes for its orchestration. But we want to avoid the noisy neighbor problem. We want to make it easy for users to change instances and instance types quickly. We want users to be able to shut down their database while we still keep the volume. All these things we could orchestrate directly through Kubernetes, but we decided to use single-tenant nodes for that.
Chris Engelbert: Right. So let me see. That means you're using Kubernetes, as you said, mostly for orchestration, which means it's more for cases like the database going down for some reason, or you have to do maintenance, or you want to upgrade. It's more about the convenience of having something manage that instead of doing it manually, right?
Steven Sklar: Exactly. And so I think we really thought, ok, and this is a little bit before my time, but you could always roll your own cluster. But there are so many things that are baked into Kubernetes these days, like monitoring and logs and metrics and networking and DNS, all things that I don't necessarily want to spend all my time on. I want to build a product. And by using Kubernetes and leveraging those components, we were able to build the cloud incredibly quickly, get it up and running, and then expand upon it in the future. And that's why, again, I mentioned the operator earlier. That was not originally part of the cloud. The cloud still has, in a more limited capacity, what we call a provisioner. Basically, if you're interacting with the cloud and you make a new database, you send a message to a queue, and that message will be picked up by a provisioner. And previously, that provisioner would say, ok, you want a database. Let's make a stateful set. Let's make a persistent volume. Let's make these networking policies. Let's do all of these things. If there's an error, we can roll back. And we have retries. So it's fairly sophisticated. But we ended up moving towards this operator model, where instead of the provisioner managing each of these individual components, it just manages one QuestDB resource. And our operator now handles all of those other little things. So I think that's much more flexible for us in terms of, A, simplifying the provisioner code, and also adding new features: instead of having to work in this ever-growing web of Python, it's really just adding a snippet here and there to our reconciliation logic.
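For readers curious what that operator pattern looks like in practice, here is a minimal, hypothetical sketch of a reconcile loop built with Kubernetes controller-runtime. The group/version/kind and the child objects mentioned are placeholders for illustration, not QuestDB's actual CRD or operator code.

```go
package controller

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// questdbGVK identifies the custom resource. The group and version here are
// placeholders, not QuestDB's actual CRD definition.
var questdbGVK = schema.GroupVersionKind{
	Group:   "example.questdb.io",
	Version: "v1",
	Kind:    "QuestDB",
}

// QuestDBReconciler drives all child objects (stateful set, volume, network
// policies, ...) from a single custom resource instead of having a provisioner
// create each of them individually.
type QuestDBReconciler struct {
	client.Client
}

func (r *QuestDBReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Load the QuestDB resource that triggered this reconciliation.
	db := &unstructured.Unstructured{}
	db.SetGroupVersionKind(questdbGVK)
	if err := r.Get(ctx, req.NamespacedName, db); err != nil {
		// The resource is gone; owned children get cleaned up via owner references.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Here a real operator would compare desired vs. actual state for each
	// child object (stateful set, persistent volume, networking policies, ...)
	// and create, update, or delete as needed. Omitted for brevity.

	return ctrl.Result{}, nil
}
```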
Chris Engelbert: Right. You mentioned that the database is mostly written in Java. Most operators are written in Go. So what about your operator? Is it Java?
Steven Sklar: It's Go.
Chris Engelbert: That's fair. To be honest, I think the vast majority is. So you mentioned AWS. But I think you are mostly talking about QuestDB Cloud there, right? From a user's perspective, if I install it myself, do I use a Helm chart or do I also use the operator?
Steven Sklar: Yes. So the operator is actually limited to the cloud, because it's built specifically to manage our own infrastructure with our own assumptions. We do have a Helm chart and an open source image on Docker Hub. I've used those more times than I can count.
Chris Engelbert: Ok, fair enough. So you basically support all cloud environments and on-premises, but when you go for QuestDB Cloud, that is AWS, which I think is a fair decision. It is the biggest environment by far. So from a storage engine perspective, how much can you share? Can you talk about some fancy details? Like, what kind of storage do you use? Do you use local NVMe storage attached to the virtual machine, or EBS volumes?
Steven Sklar: Yeah. So in our cloud, we actually have both NVMe and EBS. Most customers end up using EBS, and honestly, EBS is simpler to provision. But I do want to talk about some cool stuff that we've done with compression, because we actually never implemented our own compression algorithm. We're running on top of ZFS and using its compression. There's a known issue about potential data corruption when using mmap on ZFS, or rather a combination of mmap and traditional syscalls, the pwrites and preads. So what we do is detect when we're running on ZFS and then only use mmap calls to avoid this issue. And I think what we've done on the storage side of orchestrating this whole thing is pretty cool, too. Because ZFS has its own notion of snapshots, its own notion of replication, its own notion of ZPools. And to simplify things, again, because we're running this kind of, I don't necessarily want to say antiquated, but single-tenant model, which might not be in vogue these days, what we actually do is create one ZPool per volume and throw our QuestDB on that ZPool, enabling compression. And we've written our own CSI storage driver that sits in the middle between Kubernetes and the cloud provider, so that we're able to pass calls on to the cloud provider if, let's say, we need to create or delete a volume using the cloud provider API. But when it comes to mounting specific ZFS volumes and running ZFS-related commands, we take control of that and perform it in our own driver. I don't know when this is going to be released, but I'm actually talking about this in Atlanta next week.
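To give a rough idea of the ZFS side of this, here is a hedged Go sketch of what a CSI node plugin step could do when staging a volume: create a dedicated ZPool on the attached block device and enable compression on it. The pool name, device path, and the choice of lz4 are illustrative assumptions, not QuestDB Cloud's actual driver code.

```go
package zfsdriver

import (
	"fmt"
	"os/exec"
)

// PreparePool creates a dedicated ZPool on a freshly attached block device and
// enables transparent compression on it, roughly the kind of step a CSI node
// plugin could run while staging a volume.
func PreparePool(poolName, devicePath string) error {
	// One pool per volume: back the new pool with the single block device.
	if out, err := exec.Command("zpool", "create", poolName, devicePath).CombinedOutput(); err != nil {
		return fmt.Errorf("zpool create failed: %w: %s", err, out)
	}
	// Turn on on-the-fly compression for everything written into the pool.
	if out, err := exec.Command("zfs", "set", "compression=lz4", poolName).CombinedOutput(); err != nil {
		return fmt.Errorf("enabling compression failed: %w: %s", err, out)
	}
	return nil
}
```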
Chris Engelbert: No, next week is a little bit early. Currently, I'm doing a couple of recordings, building a little bit of a pipeline because of conferences; same thing here, I will be in Paris for KubeCon next week. So there is a little bit of a delay. I don't know the exact date, I think it's in three or four weeks, so it's a little bit out. But I guess your talk may be recorded and public by then. If that is the case, I'm happy if you send it over and I'll put it into the show notes; people will love that. So you said when you run on, or when you detect that you run on, ZFS, you use mmap. So you basically map the file into memory, you change the memory positions directly, and then you fsync it? Or how does it work? How do I have to think about that?
Steven Sklar: Oh, boy. Ok. This is getting a little out of my-- So you always use mmap regardless. But the issue is when you combine mmap with traditional syscalls on ZFS. And so what we do is basically turn off those other syscalls and only use mmap when we're writing to our files. In terms of the specifics of when we sync and things like that, I wish I could answer it right off the bat.
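For context on the mmap pattern Chris is asking about, here is a generic Go sketch (using golang.org/x/sys/unix) that maps a file, writes to it through plain memory stores, and then flushes the dirty pages with msync. It only illustrates the general technique on a Unix system; it is not QuestDB's actual I/O code, and the details of when QuestDB syncs are not covered here.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	f, err := os.OpenFile("column.dat", os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	const size = 4096
	if err := f.Truncate(size); err != nil { // file must be large enough to map
		panic(err)
	}

	// Map the file read/write and shared, so memory stores land in the page cache.
	data, err := unix.Mmap(int(f.Fd()), 0, size, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(data)

	copy(data, []byte("hello, mmap")) // plain memory write, no write() syscall

	// Force the dirty pages to stable storage; roughly the durability step.
	if err := unix.Msync(data, unix.MS_SYNC); err != nil {
		panic(err)
	}
	fmt.Println("flushed")
}
```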
Chris Engelbert: That's totally fine. So just to sneak in a little shameless plug here, we should totally look into getting QuestDB running on simplyblock. I think that could be a really interesting thing, because you mentioned ZFS, and simplyblock is basically ZFS on steroids. ZFS, from my perspective, I mean, I'm running a ZFS file server in the basement, and it saved me a couple of times with a broken hard disk, is just an incredible piece of technology. I agree with that. And it's interesting, because I've seen a lot of people running databases on ZFS, and ZFS is all about reliability; it's not necessarily about the highest performance. So it's interesting that you chose ZFS and you say it's perfect and works great for you. Because we're almost running out of time, as I said earlier, 20 minutes is super short: when you look at cloud and databases and the world as a whole, whatever you want to talk about, what do you think is the next big trend, or the current big trend? What is coming? What do you think would be really cool?
Steven Sklar: Yeah. So I guess I'm not going to talk about the existential crisis I'm having with Devin and the AI bots, because it's just a little depressing for me right now. But one thing that I've been seeing over the past few years that I find very interesting is this move away from cloud and back into your own data center. I think having control over your data is something that's incredibly important to basically everyone now. And I think, as a DevOps engineer, it's about finding a happy medium between all the wonderful cloud APIs that you can use and going into the server room and kind of hooking things up. There's probably a happy medium there somewhere, and I think that's an area that is going to start growing in the future. You see a lot of on-prem Kubernetes type things, Kubernetes on edge maybe. And for me, it presents a lot of interesting challenges, because I spent most of my career in startups working on the cloud and understanding the fundamentals of not just the cloud APIs but also operating systems and hardware a little bit. So kind of figuring out where to draw that line, in terms of what knowledge is transferable to this new paradigm, will be interesting. And I think that's a trend that I've been focused on, at least over the past couple of months.
Chris Engelbert: That is interesting that you mention that, because it is kind of that. When the cloud became big, everyone wanted to move to the cloud because it was "cheaper", in air quotes. And I think, well, the next step was serverless, because it is yet even cheaper, which we all know is not necessarily true. And I see kind of the same thing now: people realize that not every workload actually works perfectly or is a great fit for the cloud, and people slowly start moving back, or at least going back to not necessarily cloud instances but co-located servers or virtual machines, like plain virtual machines, and just taking those for the workloads that do not need to be super scalable or super elastic.
Well, thank you very much. That was very delightful. It was a pleasure having you.
Steven Sklar: Thank you.
Chris Engelbert: Thank you for being here, and to the audience: I hope to, well, not see you, but hear you next time, next week. Thank you very much.
Steven Sklar: Thank you. Take care.