Press Release
Simplyblock Achieves Industry-Leading Throughput in MLPerf® Storage v2.0 Benchmark

Cloud-native platform delivers >128 Gbit/s, 3 GiB/s per GPU, and scalable I/O for AI/ML and analytics
August 4, 2025 – Teltow, Germany: Simplyblock, the cloud-native data platform for high-performance infrastructure, today announced its results in the MLPerf® Storage v2.0 benchmark, demonstrating exceptional throughput, efficient scaling, and cloud-native simplicity for AI workloads and beyond.
The MLPerf Storage benchmark suite measures how well storage systems handle the I/O demands generated by large-scale machine learning. Simplyblock’s submission showcased a minimal three-node setup delivering over 128 Gbit/s (16 GiB/s) of total throughput while sustaining industry-leading per-accelerator throughput, all on standard cloud infrastructure.
“High-throughput, low-latency storage is essential for keeping AI and analytics workloads performant,” said Rob Pankow, CEO of Simplyblock. “These results prove you don’t need expensive hardware to unlock AI-scale performance.”
Strong Results Across AI Workloads
Simplyblock tested all three MLPerf Storage v2.0 workloads in the CLOSED configuration, meaning no benchmark-specific tuning or hardware customization was applied. The platform ran on Google Cloud Compute Engine (c4a-standard-64-lssd) instances, accessed by just one or two client machines, with storage attached over the NVMe-over-TCP protocol.
In ResNet50, which simulates image-classification workloads with high sample throughput, Simplyblock delivered 180 MiB/s per accelerator across setups with 44 to 76 H100 GPUs, reaching over 125,000 samples/s in total.
For CosmoFlow, which models bandwidth-heavy scientific simulations, Simplyblock achieved 530 MiB/s per accelerator across 30 H100 GPUs, with total read throughput exceeding 130 Gbit/s.
In U-Net3D, which reflects the unstructured data access patterns common in medical imaging and generative AI, the system delivered 3 GiB/s per accelerator using just four H100 GPUs, demonstrating both high bandwidth and low latency under load.
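As a rough sanity check of the figures above, the per-accelerator rates can be aggregated into total line rates. The sketch below assumes binary prefixes for MiB/GiB and decimal Gbit, which is the conventional reading of these units:

```python
# Rough sanity check: aggregate per-accelerator storage throughput into Gbit/s.
# Assumes binary prefixes (1 MiB = 1024**2 bytes) and decimal Gbit (1 Gbit = 1e9 bits).

MIB = 1024 ** 2  # bytes per MiB


def total_gbit_per_s(mib_per_accel: float, num_accels: int) -> float:
    """Total read throughput in Gbit/s for a given per-accelerator rate."""
    total_bytes_per_s = mib_per_accel * MIB * num_accels
    return total_bytes_per_s * 8 / 1e9


# CosmoFlow: 530 MiB/s per accelerator across 30 H100 GPUs
print(f"CosmoFlow: {total_gbit_per_s(530, 30):.1f} Gbit/s")  # ~133 Gbit/s, i.e. "exceeding 130 Gbit/s"

# U-Net3D: 3 GiB/s (3 * 1024 MiB/s) per accelerator across 4 H100 GPUs
print(f"U-Net3D:   {total_gbit_per_s(3 * 1024, 4):.1f} Gbit/s")
```

The CosmoFlow aggregate works out to roughly 133 Gbit/s, consistent with the "exceeding 130 Gbit/s" claim above.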
Cloud-Native Storage for AI and Databases
Beyond AI training, Simplyblock’s architecture offers significant advantages for modern data-intensive applications such as analytics platforms and AI-augmented databases. The same low-latency, high-throughput storage layer that supports GPU workloads is equally well suited to accelerating vector search engines, OLAP analytics, and distributed SQL databases, especially those integrated with AI inference or real-time dashboards.
By enabling fast, parallel access to persistent volumes with multi-client coordination, Simplyblock enhances query performance, reduces index load times, and improves consistency across streaming and transactional systems. This makes it a natural fit for both AI research labs and data-driven enterprises building scalable, cloud-native infrastructure.
Purpose-Built for Modern Infrastructure
The Simplyblock Platform is designed from the ground up to be NVMe-over-TCP native. It is fully orchestrated via Kubernetes, enabling teams to integrate storage provisioning seamlessly into containerized workflows. With dynamic volume sharing, thin provisioning, and multi-cloud compatibility, including full support for GCP and AWS, Simplyblock allows users to scale storage in lockstep with compute while retaining full control over performance and cost.
“This benchmark validates our belief that high-performance infrastructure is no longer just for hyperscalers,” added Pankow. “Simplyblock brings AI-grade performance to every cloud team, making it easy to scale, optimize, and innovate.”
Built natively on NVMe-over-TCP, the Simplyblock Platform gives enterprises complete control over data sovereignty and application performance while empowering AI and data teams to scale with confidence, ensuring that no GPU or CPU cycle goes to waste.
About MLCommons
MLCommons is the world’s leader in AI benchmarking. An open engineering consortium supported by over 125 members and affiliates, MLCommons has a proven record of bringing together academia, industry, and civil society to measure and improve AI. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. Since then, MLCommons has continued using collective engineering to build the benchmarks and metrics required for better AI, ultimately helping to evaluate and improve AI technologies’ accuracy, safety, speed, and efficiency.
About Simplyblock
Simplyblock is a cloud-native data platform built for AI, analytics, and high-performance infrastructure. Its software-defined storage engine delivers NVMe-over-TCP Kubernetes storage with minimal latency, seamless orchestration, and dynamic scaling, powering the next generation of GPU and database workloads.
Media Contact: [email protected]