Chris Engelbert

Simplyblock Replaces Your VMware and Database Architecture

Mar 25, 2026  |  16 min read

Last edited: Mar 31, 2026

The acquisition of VMware by Broadcom sparked a massive shift away from VMware-centric infrastructures, fueled by fears of rapidly rising costs. While cost is a good initial motivator, the chance for a modernization overhaul of the data infrastructure shouldn’t be overlooked.

For almost two decades, running databases inside VMware virtual machines has been the default choice in enterprise infrastructure. It delivered predictability, isolation, and a well-understood operational model. If you were running PostgreSQL, MySQL, or Oracle, chances are it lived inside a VM.

That model worked. But modern requirements have changed. Platform engineering, Kubernetes adoption, and the rise of API-driven infrastructure have exposed a fundamental limitation: the traditional VMware + database stack was never designed for how databases are consumed today.

This is no longer just about cost, licensing, or vendor strategy. It is about architecture. And more specifically, whether your database layer is still bound to infrastructure decisions that no longer fit modern workloads.

The Legacy Stack: VMware and Database as a Coupled System

The traditional enterprise stack follows a familiar pattern. VMware front-to-back. A hypervisor layer, such as vSphere or ESXi, provides compute abstraction. Storage management is handled by vSAN or external SAN systems, while networking is managed by NSX. On top of that stack, a virtual machine with its own operating system, and finally, the database. In the worst case scenario, the installation was manual. In the best case, it was automated through Terraform or Ansible.

This architecture is tightly coupled. The database is bound to the lifecycle of the virtual machine, which in turn is bound to the underlying infrastructure.

Scaling commonly requires resizing the VM, which introduces downtime and operational complexity. Performance, in turn, is constrained by how storage and compute are provisioned within that VM boundary.

The operational model reinforces this tight coupling. A typical workflow involves provisioning a VM, configuring storage, installing an OS, installing the database, and then manually scaling as demand grows. These environments are designed to be long-lived, rarely changed, and stateful. Once deployed, they are almost never recreated from scratch because doing so is costly and risky.

As a result, the database architecture becomes dependent on hypervisor constraints. The infrastructure decision dictates what is possible at the data layer. This is an inversion of control, and it is a harmful one.

Instead of the VMware infrastructure layer serving the database, the database is constrained by the infrastructure decisions.

The consequences are clear: limited portability, high operational friction, and increasing vendor lock-in.

VMware Stack Limitations for Modern Database Workloads

Let’s put it simply. At its core, VMware is not a database platform. It just provides infrastructure primitives: compute, storage, and networking. That’s it.

Database management, provisioning, scaling, and lifecycle operations remain the responsibility of platform teams, with varying degrees of automation and hands-on process.

Even with VMware’s Tanzu, this doesn’t fundamentally change. While it uses containers for workloads, they still run inside VMs. The underlying dependency on virtual machines remains. The database is still tied to infrastructure constructs that were not designed for modern data workflows.

Operationally, this leads to inefficiencies. Provisioning a database can take minutes to hours, depending on internal processes. Compute and storage are often massively overprovisioned to handle peak demand, and workflows are commonly ticket-driven with manual intervention from operations teams.

Performance Issues and Lack of Data Services

Performance is another limitation. Traditional VMware storage stacks are not designed as NVMe-first, distributed systems. They don’t natively provide ultra-low latency storage at sub-millisecond levels, nor do they deliver the IOPS scalability that modern data workloads require and that datacenter NVMe devices offer.

That means scaling resources typically involves resizing VMs, which introduces reboots, downtime, or disruption.

The same goes for the data services. It is up to the platform team to select, integrate, and operate backup, replication, and disaster recovery tools. The landscape is fragmented and typically requires third-party tools or integrations. There is no unified, built-in data plane for managing these capabilities. And that is just the bare VM infrastructure.

From a database perspective, the gaps are even more apparent. Environments are long-lived by default. Creating short-lived or ephemeral databases is operationally complex and typically unsupported. Database branching is effectively impossible, given the impracticality of short-lived environments and the cost and time required for data cloning.

Furthermore, there is no native, API-driven database lifecycle. Any automation must be built on top of infrastructure primitives, often with custom tooling, such as Terraform or shell scripts.

Ultimately, VMware remains infrastructure-centric. It does not provide a database-centric model.

Simplyblock: Foundation for a Cloud-Native Data Architecture

Simplyblock, together with Vela, introduces a fundamentally different approach. Instead of building databases on top of infrastructure, they provide a layered, modern, cloud-native data architecture that aligns well with platform engineering standards.

At the top, an API, UI, and CLI expose the system as a programmable platform. Beneath that, Vela acts as the database control plane, orchestrating PostgreSQL-based services and enabling self-service consumption.

Simplyblock provides the storage plane, delivering NVMe-first, distributed block storage with seamless scalability. Kubernetes underpins the entire system, orchestrating deployment across platforms such as OpenShift, Talos, or Rancher.

This architecture shifts the abstraction layer. Instead of interacting with machines, users interact with database services. Instead of tightly coupled components, each layer is decoupled and independently scalable. It combines the convenience of managed cloud services with the power of full self-control.

The result is a full-stack replacement for the traditional VMware + database model. It is not just a different infrastructure. It is a different way to deliver and consume data services. It aligns with a broader shift toward cloud-native, API-driven systems and serves as a modern data foundation for organizations moving away from VMware.

Decoupling Compute, Storage, and Database Lifecycle

In the VMware model, compute, storage, and the database are bundled together inside a virtual machine. This creates rigid dependencies, limits flexibility, and is nearly impossible to break apart.

That’s why simplyblock separates these concerns. Storage becomes a persistent, shared, distributed resource that is independent of compute. Compute, on the other hand, becomes ephemeral and is delivered through lightweight MicroVMs. The database’s lifecycle is managed through APIs or a user interface, allowing environments to be short-lived or long-lived depending on the use case. That covers production databases and staging branches equally well.

Image 1: From VM-bound databases to platform-native data services

This decoupling is enabled by several technologies. Simplyblock’s copy-on-write storage allows for instant cloning without duplicating data. Its distributed block storage over NVMe-oF provides high-performance, scalable access to data from anywhere in the cluster. MicroVMs enable lightweight, isolated compute environments, while Kubernetes orchestrates the entire system.

Because of this separation, compute and storage can scale independently. New database environments can be created instantly without copying any data, and cloning becomes a metadata operation rather than a heavy data transfer.
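
The idea that a clone is a metadata operation rather than a data copy can be sketched in a few lines. This is an illustrative model of copy-on-write semantics, not simplyblock’s actual implementation: a clone only records a reference to its parent, and new blocks are allocated only when the clone is written to.

```python
class Volume:
    """Toy copy-on-write volume: blocks are shared until written."""

    def __init__(self, blocks=None, parent=None):
        self.parent = parent          # clone chain -- metadata only
        self.blocks = blocks or {}    # block index -> data written to THIS volume

    def clone(self):
        # A clone records a parent pointer; no data is copied.
        return Volume(parent=self)

    def write(self, index, data):
        # Writes allocate a private block (copy-on-write).
        self.blocks[index] = data

    def read(self, index):
        # Reads fall back through the parent chain for unwritten blocks.
        if index in self.blocks:
            return self.blocks[index]
        if self.parent is not None:
            return self.parent.read(index)
        return None


prod = Volume({0: "rows-v1"})
branch = prod.clone()         # instant: only metadata was created
branch.write(0, "rows-v2")    # diverges without touching prod
```

Note how the branch diverges while the production volume stays untouched, which is exactly what makes cloning cheap enough to treat as a routine operation.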

This fundamentally changes how databases are provisioned, managed, and operated.

The Kubernetes-Native Infrastructure Foundation

Simplyblock, combined with Vela, provides a ready-to-use data stack built on a Kubernetes-native foundation that runs directly on bare metal. And storage, databases, containers, and virtualized workloads can coexist within the same platform.

This is a key difference from VMware. Tanzu workloads still operate within virtual machines, inheriting their limitations. There is no equivalent bare-metal Kubernetes integration that unifies all workloads under a single control plane.

With simplyblock, Kubernetes becomes the operating system for everything. Storage integrates natively through CSI. Databases and other services share a unified infrastructure layer. Platform teams can define policies and guardrails centrally.
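
To make the CSI integration concrete, here is the shape of a PersistentVolumeClaim that a platform team would apply (expressed as a Python dict for readability; in practice it would be a YAML manifest). The manifest structure is standard Kubernetes; the storage class name is a hypothetical placeholder for a simplyblock-backed class, not a documented value.

```python
# Standard Kubernetes PVC structure; the storageClassName below is a
# hypothetical example of a simplyblock-backed CSI storage class.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "pg-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "simplyblock-csi",  # assumed name for illustration
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
```

Once a claim like this is bound, the database pod consumes distributed NVMe storage through the same declarative mechanism it uses for everything else.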

This aligns the architecture stack with modern platform engineering practices and integrates naturally with enterprise Kubernetes distributions such as OpenShift and Rancher. The result is an enterprise-ready, cloud-native stack designed for both traditional and modern workloads.

NVMe-First Distributed Storage Layer

Simplyblock’s storage engine is the critical differentiator. It provides NVMe-first, distributed block storage designed for high-performance, low-latency, stateful workloads. In practice, that means databases and data services, including AI-centric workloads.

Latency is measured in microseconds, not milliseconds. IOPS scale across distributed volumes without the limitations of node-bound architectures. Enterprise-level features such as thin provisioning, data tiering, and replication are built into the platform.

Snapshotting and cloning are efficient and instant, enabled by its copy-on-write mechanisms. Additionally, backups can be integrated with S3-compatible storage for off-site protection, and storage-level cross-site replication enables near-zero RTO failover for databases and aligned services.

This turns storage into a programmable resource. It is no longer a static allocation tied to a VM. Instead, it becomes an API-driven layer that can be dynamically consumed and managed.

These capabilities enable higher-level database features such as branching and instant, zero-impact snapshots.

In contrast, traditional VMware storage solutions do not offer the same level of performance, flexibility, integration, or usability.

Your Database as a Service

One of the most significant changes is how databases are consumed. Instead of provisioning a virtual machine and installing PostgreSQL, users request a database through an API or UI.

The provisioning itself is a lightweight and fully automated operation. Databases can be created, scaled, or deleted on demand. Workflows become API-first, enabling automation and integration into CI/CD pipelines.
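
From the consumer’s side, that lifecycle can be pictured as a handful of calls. The class and method names below are hypothetical, modeled on the create/scale/delete flow described above rather than on Vela’s actual API.

```python
import uuid


class DatabasePlatform:
    """Toy control plane: databases are created and destroyed via calls, not tickets."""

    def __init__(self):
        self.databases = {}

    def create_database(self, name, cpu=2, storage_gb=50):
        # Provisioning is a lightweight, fully automated operation.
        db_id = str(uuid.uuid4())
        self.databases[db_id] = {"name": name, "cpu": cpu, "storage_gb": storage_gb}
        return db_id

    def scale(self, db_id, cpu):
        # Scaling is a call, not a VM resize with a reboot.
        self.databases[db_id]["cpu"] = cpu

    def delete(self, db_id):
        # Resources are reclaimed immediately on deletion.
        del self.databases[db_id]


platform = DatabasePlatform()
db = platform.create_database("ci-run-417", cpu=1)
platform.scale(db, cpu=4)
platform.delete(db)
```

Because every step is a plain call, the same flow drops directly into a CI/CD pipeline: a test job creates a database, runs against it, and deletes it on teardown.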

This model is also a perfect fit for agent-driven operations. By default, databases are isolated, ephemeral, and safely separated without risking production systems. That means short-lived environments can be created for testing, analytics, or experimentation and removed with one click (or API call) when no longer needed.

At the same time, Vela supports a range of workloads, including transactional, analytical, vector- or AI-based, graph, and time-series use cases. This flexibility is delivered as a self-service layer, not as infrastructure that must be manually configured.

VMware does not provide an equivalent capability. Even with additional tools, the database remains an application running inside a VM. No native database platform abstraction is possible.

Ephemeral Compute with MicroVMs

As explained above, traditional virtual machines are designed for long-lived workloads. They have relatively slow startup times, require initial provisioning, and consume significant resources even when idle.

The modern stack uses MicroVMs to address these limitations. They start in seconds, require no lengthy setup, and can scale resources dynamically without interruption or downtime. Furthermore, they support scale-to-zero behavior, in which compute is allocated dynamically and only when needed.
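
Scale-to-zero is easiest to see as a toy allocator: compute is attached on the first request and released after a sustained idle period. This is a conceptual sketch of the behavior described above, not MicroVM internals.

```python
class ScaleToZeroDatabase:
    """Toy model: compute exists only while the database is being used."""

    IDLE_LIMIT = 3  # ticks of inactivity before compute is released

    def __init__(self):
        self.compute_running = False
        self.idle_ticks = 0

    def query(self, sql):
        if not self.compute_running:
            # MicroVM-style cold start: fast enough to do on demand.
            self.compute_running = True
        self.idle_ticks = 0
        return f"executed: {sql}"

    def tick(self):
        # Called periodically; releases compute after sustained idleness.
        if self.compute_running:
            self.idle_ticks += 1
            if self.idle_ticks >= self.IDLE_LIMIT:
                self.compute_running = False


db = ScaleToZeroDatabase()
db.query("SELECT 1")                      # compute starts on demand
for _ in range(ScaleToZeroDatabase.IDLE_LIMIT):
    db.tick()                             # idle long enough -> compute released
```

The key property is that an idle database consumes no compute at all, which is what drives the density gains discussed below.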

This enables rapid scaling and resizing of database environments. It also increases workload density, enabling more efficient use of hardware while maintaining strong isolation and significantly reducing overhead.

This makes it a perfect fit for ephemeral and persistent database workloads. Environments can be created for short-term use and then discarded. This is particularly valuable for high-iteration workflows, including automated testing and agent-driven experimentation.

Operational Model Shift from Tickets to APIs

All of this means one thing: the operational model changes just as dramatically as the architecture. The most immediate change is how databases are requested, provisioned, and managed on a day-to-day basis.

The typical VMware-based environment sees database provisioning as an operational workflow. A developer or team submits a request, typically through a ticketing system. The request is then reviewed, and resources are allocated. A VM is provisioned, the operating system is configured, storage is attached, and the database is installed and hardened. Depending on internal processes, this can take anywhere from hours to days. Follow-up changes, such as resizing storage or adjusting compute, require additional requests and coordination. It is a process that invites friction at every step.

With the ready-to-use, combined simplyblock and Vela stack, this process becomes API-driven and self-service. Instead of submitting a request, users interact directly with the platform. A database can be created through an API call or a UI action in seconds. Configuration, such as compute size, storage characteristics, or replication policies, is defined declaratively at creation time. When the environment is no longer needed, it can be deleted just as easily, with resources immediately reclaimed.
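
The declarative creation described above can be pictured as a single spec handed to the platform, with guardrails enforced centrally. Every field name here is hypothetical, chosen to mirror the configuration dimensions the text mentions (compute size, storage characteristics, replication), and the guardrail check is a minimal sketch of what a platform team might enforce.

```python
# Hypothetical declarative database spec: the desired end state is declared
# once at creation time instead of being assembled through a chain of tickets.
spec = {
    "name": "orders-staging",
    "engine": "postgresql",
    "compute": {"vcpu": 4, "memory_gb": 16, "scale_to_zero": True},
    "storage": {"size_gb": 200, "tier": "nvme", "thin_provisioned": True},
    "replication": {"replicas": 2, "cross_site": False},
}


def validate(spec):
    """Minimal guardrail check a platform team might enforce centrally."""
    required = {"name", "engine", "compute", "storage"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if spec["compute"]["vcpu"] > 32:
        raise ValueError("vcpu exceeds platform guardrail")
    return True
```

The division of labor falls out naturally: users author specs, the platform validates them against centrally defined limits, and nothing in between requires a ticket.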

Image 2: From ticket queues to self-service database platforms

This fundamentally changes the role of platform teams. They shift from executing requests to defining guardrails. These include policies for resource limits, access control, data protection, and compliance. Once defined, these guardrails are enforced automatically by the platform, allowing users to operate independently within a controlled environment.

The impact on delivery speed is significant. Development teams no longer need to wait for infrastructure. They can create isolated environments on demand, test changes against realistic datasets, and iterate without coordination overhead. This is particularly important in modern workflows that rely on continuous integration, automated testing, and rapid experimentation.

This reduces operational overhead, accelerates delivery cycles, increases autonomy for developers and data teams, and frees time for other important platform engineering tasks.

Finally, this shift enables entirely new usage patterns. Automated systems, including AI-driven agents, can provision and interact with databases directly. They can create short-lived environments, run experiments, and tear them down without human involvement. This would be impractical in a ticket-based system.

Business and Engineering Impact

The shift from a VM-centric database model to a decoupled, API-driven architecture has a direct and measurable impact on engineering workflows, platform teams, and infrastructure efficiency.

At the performance layer, the combination of NVMe-first distributed storage and decoupled compute removes many of the bottlenecks inherent in VM-based systems. This means databases are no longer constrained by the I/O limits of a single node or the overhead of hypervisor-managed storage stacks. Instead, they operate on distributed volumes that scale IOPS horizontally while maintaining low latency. This allows organizations to extract more performance from the same underlying hardware, rather than compensating through overprovisioning.
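
The horizontal scaling claim is easiest to see with back-of-envelope numbers. The figures below are illustrative assumptions, not simplyblock benchmarks: a volume striped across several NVMe-backed nodes can aggregate their IOPS, while a node-bound volume is capped by its single host.

```python
# Illustrative, assumed figures -- not vendor benchmarks.
iops_per_nvme_device = 500_000
devices_per_node = 4
nodes_in_cluster = 8

node_bound_ceiling = iops_per_nvme_device * devices_per_node  # single-host cap
distributed_ceiling = node_bound_ceiling * nodes_in_cluster   # striped across the cluster

print(f"node-bound ceiling:  {node_bound_ceiling:,} IOPS")
print(f"distributed ceiling: {distributed_ceiling:,} IOPS")
```

Real-world numbers depend on replication factor, network fabric, and workload mix, but the structural point holds: the ceiling grows with the cluster rather than with a single VM.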

With ephemeral compute and thin-provisioned storage, resources are allocated dynamically. Compute can scale up, scale down, or scale to zero depending on workload demand. Storage is shared through copy-on-write mechanisms, which eliminates duplication across environments. Hardware utilization improves because resources are allocated on demand rather than overprovisioned, and service density rises, allowing more workloads to run on the same infrastructure.

From a development perspective, the impact is even more pronounced. The ability to create database environments instantly changes how developers and engineering teams work. Instead of coordinating access to shared environments or waiting for infrastructure provisioning, developers can spin up isolated databases on demand. These environments can mirror production data through snapshot-based cloning, enabling realistic testing without risk.

Image 3: Branching brings proven development workflows to databases

This introduces a new level of safety. Ephemeral environments reduce the blast radius of failures. Bugs, misconfigurations, or unintended destructive actions are contained within isolated instances. In workflows involving automation or AI-driven systems, this becomes critical. Systems and humans can experiment, iterate, and even fail without fear of affecting production data.

Platform teams see one of the biggest benefits: operational overhead drops significantly. They no longer need to manually provision and manage individual database instances. Instead, they define policies for resource allocation, security, and lifecycle management. These policies are enforced automatically through the platform, eliminating repetitive tasks and reducing the potential for human error. Finally, platform teams have time to focus on what really matters to their company.

Cost predictability improves as licensing complexity is reduced and infrastructure is used more efficiently. Organizations gain better control over their resources and avoid unnecessary overhead.

Finally, this architecture aligns with emerging workload patterns. AI and agent-driven systems require rapid provisioning, isolation, and scalability. They often involve short-lived tasks, parallel experimentation, and dynamic resource allocation. A VM-centric model struggles to support these requirements efficiently. A decoupled, API-driven platform is inherently better suited to these workloads, providing the flexibility and responsiveness they demand.

From Infrastructure Replacement to Architecture Modernization

It is tempting to view this transition as a simple replacement for VMware. Replace the hypervisor, reduce costs, and continue operating as before. But that approach does not address the underlying limitations of the existing model.

The real issue is not the choice of hypervisor. It is the architectural assumption that databases should be tied to virtual machines in the first place.

VMware represents a generation of infrastructure abstraction that focuses on decoupling applications from physical hardware. That was a necessary and transformative step. However, it stopped at the infrastructure layer. Databases remained tightly coupled to the environments in which they were deployed.

Simplyblock and Vela extend the abstraction by one more layer. They decouple the database itself from the underlying infrastructure. Instead of managing machines that run databases, teams interact with databases as consumable services. The infrastructure becomes an implementation detail, not an operational concern.

This distinction is critical because replacing VMware with another hypervisor does not change the constraints. The issues are the same: long-lived environments, manual provisioning, limited portability, and infrastructure-driven design. It changes the vendor, but not the model.

Modernization requires changing the model. VMware abstracts infrastructure; the simplyblock and Vela stack abstracts the data platform itself.

With this modern platform-native architecture, the database is no longer defined by the VM it runs on. It is defined by its API, its lifecycle, and its data. Compute becomes ephemeral and interchangeable. Storage becomes persistent and shared. Orchestration is handled by Kubernetes, providing a consistent control plane across environments.

This shift also redefines the role of platform teams. Instead of acting as gatekeepers who provision infrastructure, they become enablers who define systems. Their responsibility moves from executing tasks to designing platforms that others can use autonomously, at their own pace, whenever they need them.

From a strategic perspective, this reduces long-term risk. It removes dependencies on specific infrastructure vendors and aligns the data layer with open, widely adopted technologies such as Kubernetes and PostgreSQL. It also ensures that the platform is able to evolve as requirements change, rather than being constrained by legacy design decisions.

The question is no longer whether VMware can continue to run databases. It can. The question is whether that model aligns with how modern systems are built, operated, and consumed.

Your databases deserve these three shifts:

  1. Virtual machines as the unit of operation → APIs as the interface of data services
  2. Manually provisioned, long-lived databases → On-demand, ephemeral or persistent databases
  3. Infrastructure-driven constraints → Decoupled, scalable, and programmable infrastructure

This is why this shift should not be viewed as a mere replacement: it is a modernization of the entire data architecture. Make the right call.
