TrueNAS R60 Redefines Enterprise Storage: Can It Replace Traditional Data Centers?

Enterprise storage has long been a trade-off between capacity, speed, and rack space. What if a single 1U chassis could hold 1.4 petabytes of NVMe while delivering 60GB/s throughput—without requiring a forklift upgrade when scaling? That’s the promise of TrueNAS’s latest platform, the R60, which targets AI, high-performance computing (HPC), and analytics workloads where data density and low-latency access are critical.

The R60 isn’t just another storage array. It’s a rethinking of how enterprises balance performance, cost, and control in an era where datasets are exploding and traditional architectures struggle to keep up. But how close does it come to replacing sprawling, high-maintenance data centers?

What People Might Assume

Many organizations assume that high-density storage means sacrificing performance or flexibility. The conventional wisdom holds that petabyte-scale NVMe arrays would:

  • Be too expensive to justify for all but the largest enterprises.
  • Require proprietary formats that lock in customers to a single vendor.
  • Suffer from bottlenecks when scaling beyond a few nodes.
  • Demand specialized expertise to deploy and maintain.

There’s also the assumption that NVMe’s speed advantages only matter in niche use cases—like real-time analytics or AI training—rather than everyday enterprise workloads.

What’s Actually Changing

The R60 flips these assumptions by combining hardware and software optimizations tailored for the NVMe era. Here’s how:

Hardware: Density Without Compromise

The R60 comes in two configurations: the R60-S (targeting mid-range needs) and the R60-P (built for extreme scale). Both share a focus on PCIe Gen5 NVMe and a fifth-generation controller, but they differ in core count, memory, and expandability:

  • R60-S: 16 cores (32 threads), 192 GB DDR5 at 6400 MT/s, max 736 TB all-flash, 7 PB with expansion.
  • R60-P: 32 cores (64 threads), 384 GB DDR5 at 6400 MT/s, max 3.6 PB all-flash, 7 PB with expansion.

What stands out is the capacity-to-space ratio. A single R60-P chassis can hold up to 3.6 petabytes of NVMe—enough for large-scale AI training datasets or high-resolution media archives—while fitting into a single rack unit. Expansion shelves push that to 7 PB, all while maintaining the same architecture. This means no need to rip and replace systems as data grows.

Networking is another highlight. The R60 supports up to two 400GbE ports, ensuring storage won’t become a bottleneck for high-bandwidth workloads such as vector search or real-time inference. For organizations running GPUs or TPUs, that translates to fewer stalled jobs waiting on data.
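
A quick back-of-the-envelope check shows why the network side holds up: two 400GbE ports provide roughly 100 GB/s of aggregate line rate, comfortably above the 60 GB/s storage throughput quoted earlier (protocol overhead ignored for simplicity):

```python
# Sanity check: aggregate 400GbE line rate vs. the platform's quoted
# 60 GB/s storage throughput. Figures are taken from the article.
GBIT_PER_PORT = 400
PORTS = 2

# 8 bits per byte: 400 Gbit/s is about 50 GB/s per port.
gb_per_s_per_port = GBIT_PER_PORT / 8
aggregate_gb_per_s = gb_per_s_per_port * PORTS

platform_throughput = 60  # GB/s, per the article
headroom = aggregate_gb_per_s - platform_throughput
print(f"aggregate network: {aggregate_gb_per_s:.0f} GB/s, "
      f"headroom over storage: {headroom:.0f} GB/s")
```

Even with real-world framing and TCP overhead shaving some of that off, the network fabric is not the limiting factor.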

Software: OpenZFS for the NVMe Age

The R60 runs TrueNAS 25.10 Goldeye, a version of the operating system optimized for dense NVMe environments. Key improvements include:

  • Parallel ARC eviction: A smarter cache management system that reduces contention when multiple threads compete for hot data, leading to 20–25% higher throughput in mixed workloads.
  • NVMe over Fabrics: Support for TCP and RDMA, enabling NVMe storage to extend beyond the chassis with microsecond-level latency—critical for low-latency applications.
  • Automation and integration: JSON-RPC 2.0 and WebSockets for API-driven provisioning, plus Kubernetes CSI driver support for dynamic persistent volume creation. This aligns with modern DevOps and GitOps workflows.
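
To make the automation point concrete, here is a minimal sketch of what an API-driven provisioning request could look like as a JSON-RPC 2.0 message. The method name `pool.dataset.create` and its parameters are illustrative assumptions for the sketch, not taken from the TrueNAS API reference; consult the official docs for the actual schema:

```python
import json

# Illustrative JSON-RPC 2.0 request for creating a dataset.
# Method name and parameters are assumptions for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "pool.dataset.create",
    "params": [{"name": "tank/ai-training", "compression": "LZ4"}],
}

payload = json.dumps(request)
# In practice this payload would be sent over a WebSocket connection
# to the TrueNAS middleware; here we just verify it round-trips.
decoded = json.loads(payload)
print(decoded["method"])
```

Because the protocol is plain JSON-RPC over WebSockets, the same message shape works from any language or CI pipeline, which is what makes the GitOps-style workflows mentioned above practical.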

OpenZFS remains the backbone, offering data integrity through block-level verification and self-healing—features that matter for regulated environments or long-term datasets where corruption could have costly consequences.
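The integrity mechanism is worth unpacking. Conceptually, every block is stored alongside a checksum; on read, the checksum is re-verified, and a mismatch triggers repair from a redundant copy. The following is a toy model of that idea in Python, not OpenZFS code:

```python
import hashlib

def checksum(block: bytes) -> str:
    """Content hash stored alongside each block (toy model)."""
    return hashlib.sha256(block).hexdigest()

def read_with_self_heal(block: bytes, stored_sum: str, mirror: bytes) -> bytes:
    """Return the block, healing from the mirror on checksum mismatch."""
    if checksum(block) == stored_sum:
        return block
    # Corruption detected: fall back to the redundant copy and verify it.
    if checksum(mirror) == stored_sum:
        # Self-heal: a real system would also rewrite the bad copy.
        return mirror
    raise IOError("both copies failed verification")

good = b"dataset block"
stored = checksum(good)
corrupt = b"dataset blOck"  # simulated bit rot
print(read_with_self_heal(corrupt, stored, good) == good)  # True
```

The key property is that corruption is detected on read rather than discovered years later, which is why this matters for regulated environments and long-lived archives.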

Cost and Control: No Lock-In, No Surprises

TrueNAS emphasizes data portability by keeping datasets in an open format. This means no vendor lock-in; organizations can move data between systems without proprietary tools. Inline deduplication and compression (with ratios of 2.5:1 or higher) also reduce effective costs by maximizing NVMe capacity.
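
The economics of that reduction ratio are easy to quantify: at 2.5:1, the R60-P’s 3.6 PB of raw flash behaves like roughly 9 PB of effective capacity. A quick calculation using the figures from the article:

```python
def effective_capacity(raw_pb: float, reduction_ratio: float) -> float:
    """Effective capacity after inline dedup/compression."""
    return raw_pb * reduction_ratio

# Figures quoted in the article: 3.6 PB raw (R60-P), 2.5:1 reduction.
raw = 3.6
print(f"{effective_capacity(raw, 2.5):.1f} PB effective")  # 9.0 PB
```

Actual ratios depend heavily on the data mix (already-compressed media dedupes poorly; VM images and logs do much better), so treat 2.5:1 as a planning figure rather than a guarantee.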

Support is another differentiator. TrueNAS offers direct access to engineers rather than ticket-only systems, which can be a game-changer for teams integrating storage with fabrics, Kubernetes, or AI pipelines. For evaluation, the TrueNAS Community Edition provides a risk-free way to test workflows before committing to hardware.

What It Means Now

The R60 isn’t a silver bullet, but it addresses several pain points in enterprise storage:

  • For AI and HPC: The combination of high-density NVMe and low-latency fabrics reduces data wait times, improving GPU/CPU utilization during training or inference.
  • For analytics: Parallel access patterns and fabric support make it easier to handle large-scale scans without performance degradation.
  • For cost-sensitive deployments: Inline deduplication and predictable pricing models help avoid the hidden costs common in usage-based storage models.
  • For IT teams: Automation and Kubernetes integration streamline provisioning, while OpenZFS ensures data isn’t trapped in a proprietary format.

However, the R60 isn’t a drop-in replacement for every workload. Organizations with legacy SAS/SATA environments may still need a migration path, and the highest-end configurations require careful planning around power (up to 600W for the R60-P) and cooling. But for teams building petabyte-scale NVMe infrastructures, it represents a significant leap forward in efficiency and control.

The bigger question is whether this level of density and performance will drive broader adoption of NVMe in enterprise storage—or if it remains a solution for early adopters. One thing is clear: the R60 forces a reckoning with the old trade-offs between capacity, speed, and cost.