

Confidential Computing with Moritz Eckert from Edgeless Systems

This interview is part of the Simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.


In this installment, we're talking to Moritz Eckert from Edgeless Systems, a company that builds a Kubernetes distribution for secure and encrypted in-memory processing. We discuss confidential computing and why always encrypted should become the new norm, so that data is no longer encrypted on the wire and at rest but plainly readable while it is being processed.


Chris Engelbert: Hello everyone, welcome back to another episode. Today I have Moritz with me. Moritz is from Edgeless Systems, a really cool company. So welcome Moritz.


Moritz Eckert: Thank you Chris, great to be here.


Chris Engelbert: We're happy to have you. So maybe just start very quickly. Who are you? What is your background? And specifically, what is this cool technology from Edgeless Systems?


Moritz Eckert: Yeah, who am I? I'm a German who studied computer science once in his life and got really deep into security. I played a lot of capture-the-flag competitions and was excited about research in that area. I actually started off doing a PhD, doing research in security, more specifically binary system security. Then I wanted to do a little pivot, do something outside of the research world, and stumbled upon my colleagues of these days, Thomas [Tendyck] and Felix [Schuster], who were about to found a company called Edgeless Systems on the topic of confidential computing. I had some touch points already; I actually did a bachelor thesis in this area, where I was looking more from the offensive side, the attacking side. But the idea of a new technology, a deep-tech startup in Germany, building some cool stuff, really got me hooked. And I decided to join those two guys as their first employee. That's where, at least for me, the story of Edgeless Systems started.


I guess the pressing question is: what is confidential computing? What is Edgeless doing? So confidential computing is a hardware-based technology, or rather a term for generalizing this hardware-based technology, where chips, specifically CPUs, have the ability to keep memory encrypted at runtime. I think that's the most prominent feature. They also have a feature for doing a form of remote attestation, basically giving you a way to verify that this is exactly the CPU you expect to be there. Let's say, for example, this is an Intel CPU with this and that firmware, and it is currently running that application inside such an encrypted memory environment. These are the two main new features that the processors, or the hardware vendors, introduced. And confidential computing summarizes all of the tech that builds on these features, essentially.


Chris Engelbert: Interesting. So attestation, in the sense that you can actually make sure nobody swapped the CPU for something you would not expect?


Moritz Eckert: Exactly right. The CPU has a burned-in secret and basically uses that for signing reports about itself and also about what's currently running in that environment.


Chris Engelbert: Got it. So it's something along the lines of secure boot where you also have the attestation of the different stages of the boot process. Interesting.


Moritz Eckert: In a sense, yes.
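
To make the attestation idea a bit more concrete, here is a minimal, purely illustrative Python sketch of the verification pattern described above. The report fields, the helper, and all names are hypothetical and do not correspond to any vendor's actual API; a real verifier would chain the signature check up to the hardware vendor's certificates and take the expected measurement from the software vendor.

# Conceptual sketch only: hardware attestation verification in the abstract.
# The CPU signs a report about itself and about what is running inside the
# encrypted environment; the verifier checks that signature and compares the
# reported measurement with the value it expects. Names are illustrative.

from dataclasses import dataclass

@dataclass
class AttestationReport:
    measurement: bytes   # hash of the firmware/VM image actually running
    report_data: bytes   # caller-supplied nonce, binds the report to this request
    signature: bytes     # signature produced with the CPU's burned-in key material

def verify_report(report: AttestationReport,
                  expected_measurement: bytes,
                  nonce: bytes,
                  signature_is_valid) -> bool:
    """Accept the environment only if the report is signed by genuine hardware,
    is fresh (matches our nonce), and describes exactly what we expect to run."""
    if not signature_is_valid(report):        # chains up to the vendor's root certificate
        return False
    if report.report_data != nonce:           # freshness: report was produced for this request
        return False
    return report.measurement == expected_measurement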


Chris Engelbert: Alright, so from my understanding, Edgeless Systems' Constellation is a Kubernetes distribution that enables you to have always encrypted containers. What does always encrypted mean, and what does that give you specifically?


Moritz Eckert: So this is basically our take on confidential computing. First of all, we see this fundamentally as a cloud technology, because there's a need for establishing trust when you run on shared infrastructure in a remote place: it might be necessary to isolate yourself from that infrastructure layer, and you want to establish some form of trust before processing any kind of sensitive data, as one example. And because it's fundamentally a cloud technology for us, our focus area and our products are in this infrastructure layer, where you want to enable cloud-native applications to consume this technology. That's basically why, from the start, our focus was very much on this Kubernetes, cloud-native application layer, with the mindset of: let's build these infrastructure tools, right?


Let's build the shovels for the gold rush, so that you can build these cool applications that consume the technology. And as you said, our main product these days is called Constellation, which is a Kubernetes distribution specifically for confidential computing, or rather one that makes confidential computing available to your application. We also call it an always encrypted Kubernetes, because for people who are not that familiar with the term confidential computing, always encrypted maybe resonates a bit more. And always encrypted means: when you use Kubernetes these days, you might use some CNI network interface that does encryption on the wire, and you have storage that implements encryption in one place or another. What we add is the encryption in between, the encryption during processing, the encryption in use, thereby closing this logical gap. So when you have a Kubernetes cluster, or a containerized application that runs in the cloud, the data that flows through this application is encrypted the entire time. When it comes in over the network, while it's being processed, and while it's being stored on disk, at all times the data is encrypted. That's what always encrypted means.


Chris Engelbert: Right, right. So from an application developer's point of view, do I need to be aware of something? Do I need to build my applications slightly differently? Is there some overhead? I think you said it's implemented in hardware, so I guess the overhead wouldn't be too big, but is it something I have to be careful about?


Moritz Eckert: Right. This is an excellent question, because my fundamental belief is that we are very deep down in the stack, somewhat like a foundation, but we should be almost invisible. If we build this right, it should be invisible. Because as an application developer, I don't want to worry about it. Similar to how, hopefully, I don't need to care about whether my storage is encrypted or there's network encryption; I just want to deploy my application and consume this.


The first iteration of confidential computing technology was not quite there yet. It gets very technical, but essentially the first iteration, called Intel SGX, was very much process based. That means to consume it, you would need to adjust your application; there were effects on the application layer. With the later generations, the focus is more on the virtualization layer, the hypervisor layer: they don't isolate a process, they essentially isolate an entire VM. And this can now be applied in different ways. With Constellation, we apply this, let's say, on the Kubernetes layer. That's why we have a Kubernetes distribution, where we isolate every Kubernetes node inside its own confidential VM. So when you deploy a container, it runs inside that confidential VM, and the memory of that container is automatically encrypted during processing.


Of course, we do some more tricks and treats in different layers, so that not only is the memory encrypted, but we can also make use of this attestation feature. In the end, when you create a Constellation cluster, you can do some meaningful verification that this is indeed a benign Constellation cluster that has integrity, so that when I deploy my application, I know I have a runtime environment that is isolated from the cloud and, fundamentally, from the cloud provider, which is probably the most important feature. So as an application developer, I don't really need to take care of anything. For me, it's just like any other Kubernetes. In fact, it's CNCF certified in the sense that it fulfills the CNCF Kubernetes conformance tests, which is not surprising, because even though we are a Kubernetes distribution, we don't modify Kubernetes itself. Inside our confidential VMs, inside the isolated environment, the vanilla Kubernetes components are running; we run the actual release artifacts from the Kubernetes project itself. So it's not surprising that we fulfill that.
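
To illustrate the "it's just like any other Kubernetes" point, here is a small sketch using the official Kubernetes Python client. It assumes a kubeconfig that already points at a Constellation (or any other conformant) cluster; the deployment name and image are placeholders. Nothing in it is Constellation-specific, which is exactly the point: the confidential-VM isolation happens below this layer.

# Deploying to a Constellation cluster looks like deploying to any conformant
# Kubernetes cluster; the application-facing API is unchanged.
from kubernetes import client, config

config.load_kube_config()        # kubeconfig produced when the cluster was set up
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),                  # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo-app",
                                               image="nginx:1.25")]  # placeholder image
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)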


Chris Engelbert: So that means you are a CNCF certified Kubernetes distribution, which I think is important for many people that actually need to run their own Kubernetes clusters. And that makes total sense. When I install that, do I install it in a public cloud, in a private cloud, on-prem? Anywhere?


Moritz Eckert: I mean, the goal here is, of course, anywhere. There are strong points for multiple Kubernetes distributions; for us, of course, it's the confidential computing aspect. So the primary focus is the public cloud. Currently, you can go to the three big hyperscalers and create Constellation clusters. What needs to be there is the hardware layer: you need the hardware features, and you need to have them exposed so that we can consume them to create our confidential computing environments. And we have that in the hyperscalers. You can, of course, also do that on-prem; there might be different reasons to do so. It's not the typical cloud case then, I guess, but you can do that as well. And this is, I guess, where the most touch points are with Constellation, the actual handling of the Kubernetes distribution. You can use Constellation standalone, but I think that's probably one of the more interesting points in terms of any compromises you have to make. We try to make your life as easy as possible, but we don't get around having our own distribution, due to the fact that this should be an isolated environment; there is no way to offer this in a managed fashion. So one thing, of course, is integrations like Terraform, infrastructure as code, so you can plug and play that into your code base. And then other directions are integrations with what I would call meta-orchestrators, let's say a SUSE Rancher, or there might be others out there. Integrations into these kinds of toolings are also something we strive towards, but we're not quite there yet.


Chris Engelbert: Right, right. So you said it runs on the cloud providers; I guess you can deploy it from their marketplaces as the easiest solution?


Moritz Eckert: Yes, marketplaces are the easiest way. And it also allows us to provide this in a dynamically billed way, right? So you are automatically billed for only as much as you consume.


Chris Engelbert: Right, right. And you also mentioned that you need to make sure that the hardware, or rather the hardware capability, is actually exposed. I think when you use the marketplace installation, that's probably easy, because you only offer the options that are available. But are there older instance types or older systems where this capability is not available? Or is that something that is slowly fading away and won't be an issue in the next months or so?


Moritz Eckert: No, currently not all machine types have that feature available. It's still relatively new. But I'd say that for AMD it has existed, I think, for the last two generations. With Intel, it's being rolled out with the latest generation. So it's something that, with the upcoming releases, will probably be available in almost all instance types. But as of now there are specific instance types, and you need to select those instance types for creating the cluster.


Sure, as you say, the marketplace makes this straightforward. Depending on the client-side tooling, it's also fairly simple, and of course well documented, also by the cloud providers.


Chris Engelbert: Just in case somebody is wondering, and I don't know how broadly this is used right now: you mentioned AMD and Intel, but is ARM supported?


Moritz Eckert: That's a great, great question. And ARM is something people, of course, ask a lot.


So there is an ARM specification for confidential computing called CCA, the Confidential Computing Architecture. So far, it has not been released as silicon; the specification exists, but nobody has licensed and built a chip based on it yet. But very interestingly, we are organizing an online conference next week for confidential computing, called the OC3 [https://www.oc3.dev]. There will be a big talk about ARM and ARM CCA from, mostly, ARM folks, and there will be talks from some of the cloud providers. They will present the current status and when things are getting started with silicon. So if that's an interesting topic, some listeners might want to listen to this talk at the OC3, which is free to sign up for.


Chris Engelbert: Perfect. Well, we'll put it in the notes. People will find it. It's always easier to just give somebody a link. Anyways, that is actually interesting. I think I have to sign up myself. That sounds really, really interesting.


I've done a little bit of secure computing for embedded devices in the past. That's how I know about the attestation for secure boot and similar systems. So that is certainly something along my lines as well.


All right. Let me see. What do you think is the most important trend right now, when you look at something like Kubernetes as a whole, or specifically at the computing space, or the secure computing space you're in?


Moritz Eckert: Yeah, very good question. I think this space has so much velocity that so many things are happening. One thing I definitely see is that all of this AI, generative AI, large language model wave is not passing us by; I think it's hitting us full-on in all kinds of capacities. And of course, we also get asked: okay, now what about confidential computing in terms of AI, in terms of GPUs? Because there's a very interesting use case, right? All of these people want to consume things like ChatGPT. But do you provide all of your data to ChatGPT? Maybe in your personal life, but can you do that in an enterprise context? What about the public sector? There are lots of questions, and that's where we see a lot of things moving.


Nvidia has in fact released the H100, which is their, I think, still latest chip. They released more or less the same features as with the CPUs: for the GPU you also have attestation, and you have the runtime isolation and encryption as well. So that's something where we are very busy figuring out how we can make that available too. Let's say you could build a confidential ChatGPT, to put it in very broad terms. That's definitely something. And I believe, and this is just my view, that the whole AI space is also super interesting for the…


Chris Engelbert: I think that makes a lot of sense. Especially because, as you said, if you need to analyze that data, why would you want it encrypted everywhere except for when you actually process it? But on the other hand, it's interesting that you said AMD has had it in their CPUs for a while now, but it seems nobody thought about the graphics cards yet.


Moritz Eckert: Right. It's interesting, for sure.


Chris Engelbert: So one last question, because we're already running out of time. What do you think is the most overlooked workload, or type of workload, when you move to the cloud? Or what do you think is mostly overlooked in workloads, let's put it that way. And don't say encryption, because that's obvious.


Moritz Eckert: No, I mean, this is a very difficult question. There are a lot of things you could name here, and I could give so many philosophical answers. But one thing I see right now, when we talk about cloud migration, is that we are still very much at this infrastructure layer, let's say the original layer of the cloud, whereas if we look at the cloud providers, we're already talking about PaaS services, SaaS services, everything that can be consumed in a managed way. And yet, I believe, there are very interesting discussions in that area. Where do I want to go in that range, right? Do I want to use plain infrastructure services? Do I want to use as much managed stuff as possible, because it reduces what I need on my side in terms of expertise, in terms of building things, in terms of cost? But then I also lose a bit of control, I lose a bit of in-house knowledge. That's definitely an interesting triangle, where moving in one direction or the other has certain implications on the other side.


Chris Engelbert: All right. That was fun. As I said, we're unfortunately out of time; 20 minutes is so short. Anything else you want to add? Anything you feel you have to give away?


Moritz Eckert: No, I hope maybe some listeners found this insightful. I can just repeat the OC3: if you're interested in the topic, I think that's a good place to start. You get a broad overview from all of the different players, like cloud providers, open source vendors such as us, and hardware vendors. Lots of stuff to explore.


Chris Engelbert: Awesome. Well, we'll put your contact details in the show notes. I'm not sure if this episode will be out before the OC3, though.


Moritz Eckert: If not, everything will be recorded on YouTube. Still probably a good place to start.


Chris Engelbert: And if somebody wants to meet you and talk to you about it, you'll probably be at different conferences. And as I said, we'll put your contact details in the show notes, so people can just write you an email or ask any question.


All right. Thank you very much. It was lovely having you. I still have a lot of questions. You may have to come back at some point.


Moritz Eckert: No, thank you. Thank you very much for having me. It was great chatting.


Chris Engelbert: All right. Thank you very much, everyone. I'm looking forward to seeing you next week.


