Why commercial HPC is different: A look at the storage requirements in commercial HPC settings
For years, government research labs and major academic computing centers have been the proving ground for new high-performance computing (HPC) technology such as the latest CPUs and GPUs, high-performance networking, and parallel file system storage. These organizations continually push the envelope with custom technology builds that allow them to run the most realistic models and simulations and work with the largest datasets. Eventually, most of the HPC technology used in these labs makes its way into commercial HPC settings.
Unfortunately, businesses often run into problems when they adopt these advanced technologies, because the high-end labs and government centers have different priorities. When those organizations select technology, the main driver is performance at any cost; uptime, ease of management, and vendor support fall far behind.
Why commercial HPC is different
While some commercial HPC compute requirements in fields such as life sciences, energy exploration, manufacturing, finance, and movie production are comparable to those in the large supercomputing centers and government labs, the systems requirements differ widely.
Commercial HPC operations cannot afford downtime, so reliability is paramount. Nor do these businesses have the bevy of graduate students and IT staff that supercomputing centers can draw on to manage complex installations. That is a real gap, since HPC solutions, especially those that use the latest technologies to achieve top performance, are complex and often require large staffs to keep them running. Many commercial HPC installations have been blindsided by research-lab-driven complexity and downtime, watching their organization's productivity and competitiveness suffer as schedules slipped.
Going for a compromise
Commercial users must make their HPC choices by considering performance, potential downtime, and management complexity trade-offs. When faced with such a decision, many commercial HPC users opt for what they see as a compromise. They often trade-off performance for simplicity and reliability, so their organizations can stay on mission, and just as importantly, so they can sleep at night.
A good example is what has happened with high-performance storage. Many commercial companies pass over parallel file system solutions such as GPFS, Lustre, and BeeGFS, despite their top-end performance, because of their notorious complexity and fragility. Instead, they select scale-out NAS, trading performance for simplicity and reliability, and they pay a premium for service and support to ensure uptime and off-load the management burden.
For smaller deployments, users find the performance adequate. As they try to grow these systems, however, performance drops off significantly. The drop-off stems from the very ease of use these systems tout: to present a simple front-end, they push housekeeping work onto the back-end. As the system scales and must accommodate more data traffic, that back-end becomes overwhelmed.
Panasas: the best of both worlds
Panasas offers a unique alternative: the performance of parallel file systems with the reliability and simplicity of scale-out NAS. Panasas ActiveStor® Ultra storage solutions deliver the high performance, low management overhead, and high reliability needed in commercial HPC environments. And unlike scale-out NAS solutions, Panasas delivers consistent, superior performance that scales linearly, with none of the drop-off or erratic behavior commonly experienced with scale-out NAS systems.
At the heart of the Panasas HPC storage solution is the PanFS® parallel file system, the operating environment for the ActiveStor Ultra architecture. PanFS maximizes the efficiency of every storage medium (NVMe flash, low-latency SSD, high-capacity HDD, and NVRAM) in a seamless, total-performance storage system that automatically adapts to changing file sizes and workloads, delivering consistently superior performance for today's demanding workloads. A scale-out object back-end supports limitless scaling, while optimal data placement and an internally balanced architecture boost efficiency—all with simple deployment, operation, and maintenance.
PanFS uses Dynamic Data Acceleration to optimize mixed-workload performance, providing rapid access to large and small files alike. By keeping all data hot, Dynamic Data Acceleration eliminates the complexity and manual intervention of tiered HPC storage systems while extracting maximum performance from diverse storage media in a single, seamless system. ActiveStor Ultra with PanFS offers the industry's leading price/performance in an appliance that maximizes simplicity, boosts reliability, and delivers the lowest TCO. With PanFS, commercial HPC users get the fastest parallel file system at any price point, with enterprise-class reliability, frustration-free manageability, and great support.
To learn more about how to meet commercial HPC demands, visit www.panasas.com.