Storage supplier DDN today made several announcements across its product line. Foremost was the introduction of EXAScaler 6, the latest version of its Lustre-based parallel file system, which has been enhanced with AI-enabling and enterprise-friendly features, as well as the launch of Insight 4.0, DDN’s at-scale system monitoring and management tool. Both products are expected to be available in Q3 of this year.
DDN also reported it will now directly sell a greater portion of the IntelliFlash portfolio from Tintri, part of DDN’s ongoing integration of acquisitions, including Tintri, made in recent years. DDN also announced a new process by which certified reseller partners can sell DDN’s A3I with Nvidia DGX SuperPOD as a single SKU. These new offerings allow channel resellers to more easily acquire and deploy solutions for enterprise workloads and at-scale AI infrastructures.
In a pre-briefing with HPCwire, Kurt Kuckein, vice president of marketing at DDN, described the announcements as part of an ongoing “core platform refresh,” so perhaps more announcements will be forthcoming. James Coomer, vice president of product management, said, “With EXAScaler 6 we’ve brought in more usability and manageability. It brings in a new management framework called EMF, the EXAScaler Management Framework, to do that.”
EXAScaler 6 runs on EMF with APIs for configuration and management. DDN says automation simplifies managing and upgrading systems “by 10x over competitor solutions.” New features include full support for the latest Nvidia Magnum IO GPUDirect Storage, online upgrades, enhancements to automatic tiering, and Hot Nodes for client-side persistence. DDN positions EXAScaler 6 as a powerful file system for AI, analytics and HPC, with a foundation for stronger security, enriched data services, and end-to-end data management.
DDN reported EXAScaler 6 also adds new acceleration technology with Hot Nodes, which automatically caches data on the local NVMe drives of Nvidia GPU systems, reducing IO latency and network traffic by avoiding round trips to shared storage.
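The idea behind client-side caching of this kind can be illustrated with a minimal read-through cache sketch. This is not DDN’s implementation; the function name and directory layout are purely illustrative assumptions. On a cache miss the file is copied from the shared filesystem once, after which repeated reads are served entirely from local storage:

```python
import os
import shutil

def cached_read(rel_path, shared_root, cache_root):
    """Read-through cache sketch (illustrative, not DDN's code):
    serve the file from the local cache if present; otherwise copy
    it from the shared filesystem first (one network round trip),
    so all subsequent reads stay on local NVMe."""
    cache_path = os.path.join(cache_root, rel_path)
    if not os.path.exists(cache_path):  # cache miss: fetch once
        os.makedirs(os.path.dirname(cache_path), exist_ok=True)
        shutil.copy2(os.path.join(shared_root, rel_path), cache_path)
    with open(cache_path, "rb") as f:
        return f.read()
```

A real implementation would also handle eviction and coherence with the backing file system; the sketch only shows why repeat reads avoid the network.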
Talking about Insight 4.0, Coomer said, “It’s an ‘at scale’ monitoring platform. So it takes from collectors around our EXAScaler systems and SFA (block storage) systems, and it brings all that data into a centralized database, and presents it to customers. We’ve been working for a long time to try to not simply provide customers with the usual [storage monitoring information] such as here’s the CPU’s memory consumption or its disk consumption. That’s necessary, but we’re trying to level up how advanced and how pertinent the data is.
“A good example of the issues customers have is when they have a file system, and the first time they know something may be wrong is they get a call or an email from one of their internal customers. What we’ve done here is integrate – through the file system into the scheduling system and into this Insight system – a method of seeing the jobs from the storage perspective. It is quite novel. As a storage environment, we can see what jobs are running and how much IO they’re pushing into the system individually. We can even see the jobs (on screen) ordered by consumption of throughput or IOPs or metadata. The administrator can immediately see which jobs are consuming the resources on a file system,” he said.
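The ordering Coomer describes amounts to ranking per-job IO samples by a chosen metric. The sketch below assumes hypothetical sample records and field names (nothing here reflects Insight’s actual data model) to show how an administrator view of “top consumers” can be produced:

```python
# Hypothetical per-job IO samples, as a storage-side monitor might
# collect them (job IDs, metrics, and values are invented examples).
jobs = [
    {"job_id": "1842", "throughput_mbps": 220.0, "iops": 1500, "metadata_ops": 90},
    {"job_id": "1839", "throughput_mbps": 4100.0, "iops": 52000, "metadata_ops": 300},
    {"job_id": "1851", "throughput_mbps": 35.0, "iops": 400, "metadata_ops": 12000},
]

def top_consumers(samples, metric, n=3):
    """Order jobs by the chosen metric (throughput, IOPS, or
    metadata ops) so the heaviest consumers appear first."""
    return sorted(samples, key=lambda s: s[metric], reverse=True)[:n]
```

For example, ranking by `throughput_mbps` surfaces job 1839 first, while ranking by `metadata_ops` surfaces job 1851, mirroring the per-metric views described above.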
From an HPC/AI perspective, DDN’s channel news is significant. Selling systems into the enterprise has long been a reseller strength, perhaps more so as enterprise customers adopt HPC and AI. DDN’s new certification program, through distributor Arrow, provides a process for a reseller to become certified to bundle DDN storage as part of Nvidia DGX SuperPOD sales. Presumably, end-users could be assured that any DDN storage subsystem sold as part of a certified reseller’s SuperPOD would have been properly configured and sized.
The single SKU provides the accelerated compute hardware consisting of 20 Nvidia DGX A100 systems, and also includes the Nvidia InfiniBand networking infrastructure, all-flash storage systems and support in one integrated bundle.