Panasas Opens New Windows for Wider HPC View

By Nicole Hemsoth

March 11, 2014

Like many other storage companies with roots in HPC, Panasas is leveraging its history in some of the most demanding environments to bridge the divide between technical and commercial computing.

According to the company’s Geoffrey Noer, just three years ago most Panasas customers were in traditional HPC, spread across a large number of users in academia and government. That HPC-to-enterprise leap happened naturally for them, he says, as hybrid scale-out NAS has taken root in more commercial HPC environments where legacy approaches are increasingly overextended and difficult to manage.

Among the new commercial HPC and analytics users Panasas has managed to capture are companies in aerospace, life sciences, and media/entertainment. “These are usually design and simulation workflows,” says Noer, “which by definition is HPC, but it’s for enterprise customers.” These newer users are seeking to overcome critical barriers that a truly scale-out architecture can address, and now, with today’s release of the company’s updated storage operating system, PanFS 5.5, Panasas can provide a single namespace that lets users in Windows-heavy enterprise shops tap into Windows and Linux seamlessly. This Microsoft tie-in is the result of two years of development to get the two to play nicely together within the Panasas storage environment and to ensure continued certification through Microsoft’s Communication Protocol Program. Such development sounds rather expensive, but Panasas says there are no plans to change pricing to reflect the extra Microsoft hoop-jumping.
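To make the single-namespace idea concrete, here is a minimal sketch, assuming a hypothetical Linux mount point and Windows share name (neither is taken from the PanFS documentation): the same namespace-relative path resolves to the same file whether a client sees it through a POSIX mount or a Windows UNC mapping.

```python
from pathlib import PureWindowsPath, PurePosixPath

# Hypothetical mount points for one shared namespace (illustrative only):
#   Linux clients mount the file system at /panfs
#   Windows clients map the same namespace as \\panfs\projects
LINUX_MOUNT = PurePosixPath("/panfs")
WINDOWS_SHARE = PureWindowsPath(r"\\panfs\projects")

def to_linux(rel_path: str) -> PurePosixPath:
    """The Linux client's view of a namespace-relative path."""
    return LINUX_MOUNT / rel_path

def to_windows(rel_path: str) -> PureWindowsPath:
    """The Windows client's view of the same namespace-relative path."""
    return WINDOWS_SHARE.joinpath(*PurePosixPath(rel_path).parts)

# Both views name the same bytes on the same storage:
print(to_linux("sim/run42/results.h5"))    # /panfs/sim/run42/results.h5
print(to_windows("sim/run42/results.h5"))  # \\panfs\projects\sim\run42\results.h5
```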

According to Noer, the lengthy process through Microsoft’s channels will be useful both for their traditional HPC center users and for the enterprise customers they’re seeking to reach. “If you look at a large cluster, it’s running Linux for the ultra high performance part, but if you look at what an engineer or researcher is running on a workstation, or they’re working with multiple applications on Windows or Linux, this becomes very important.”

Panasas is being realistic about the performance issues related to Windows for commercial HPC customers, noting that even with the new Windows support in PanFS 5.5, the highest performance workflows stay in a Linux environment. “The current Windows protocol can’t hit the performance levels of our DirectFlow protocol in Linux, but that’s inherent to the protocol itself,” said Noer. The key is that the interoperability is “enterprise-grade,” which, to Panasas, means that the handshaking between Active Directory and the storage system to keep track of users and groups has to be seamless and up to Microsoft standards.
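The identity bookkeeping behind that handshaking can be pictured as a mapping between Windows and POSIX identities. The sketch below is only a conceptual illustration with made-up values; the actual PanFS/Active Directory mechanism is not described in the article.

```python
# Hypothetical mapping table: a Windows SID resolved from Active Directory
# paired with the POSIX uid/gid the same person uses from Linux clients.
# Real deployments derive this from the directory service; these values are invented.
SID_TO_POSIX = {
    "S-1-5-21-1004336348-1177238915-682003330-1104": {"uid": 5001, "gid": 5000},
    "S-1-5-21-1004336348-1177238915-682003330-1105": {"uid": 5002, "gid": 5000},
}

def posix_identity(sid: str) -> tuple:
    """Resolve a Windows SID to the (uid, gid) used for ownership and access checks."""
    entry = SID_TO_POSIX.get(sid)
    if entry is None:
        raise PermissionError(f"unmapped SID {sid}")
    return entry["uid"], entry["gid"]

uid, gid = posix_identity("S-1-5-21-1004336348-1177238915-682003330-1104")
print(uid, gid)  # 5001 5000: same ownership whether the user arrives over SMB or DirectFlow
```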

These new Windows-enabled opportunities open even wider with a scale-out NAS approach that puts SATA drives and SSDs to the purposes each was designed for via ActiveStor 14, the company’s latest integrated hardware update.

[Image: ActiveStor 14 details]

The key to what Panasas is doing on the macro level (with ActiveStor and PanFS in harmony) is taking advantage of an architecture that Noer says was “designed from the ground up for technical computing workloads,” since Panasas was never “hinged on adapting to a legacy architecture.” He points to the widely used NetApp approach in commercial environments as an example of this legacy problem, noting the way users are pushed into adding filer heads to boost performance. While this may work, what users end up with are several storage pools that are difficult to manage. “It’s hard for users to get off that architecture and onto one that’s truly scale-out.”

The goal is to give users a platform that’s free from file server lag or hardware RAID slowdowns by instead offering distributed elements that I/O is balanced across, all managed through DirectFlow. This protocol lets users read and write in parallel across all those elements, rather than relying on older point-to-point protocols, which hit a wall as more clients are added.
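As a rough sketch of that parallel-I/O idea, the following is not the DirectFlow wire protocol; it simply shows a client striping a large write across several storage elements concurrently instead of funneling every byte through one server. Node count, stripe size, and names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual illustration only: "nodes" stand in for distributed storage elements
# that a parallel client talks to directly.
NUM_NODES = 4
STRIPE_SIZE = 1 << 20  # 1 MiB stripes

nodes = [dict() for _ in range(NUM_NODES)]  # each dict plays the role of one storage element

def write_stripe(node_id: int, stripe_id: int, chunk: bytes) -> None:
    nodes[node_id][stripe_id] = chunk  # in a real system this would be a network write

def parallel_write(data: bytes) -> int:
    """Split data into stripes and push them to all nodes concurrently."""
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    with ThreadPoolExecutor(max_workers=NUM_NODES) as pool:
        for stripe_id, chunk in enumerate(stripes):
            pool.submit(write_stripe, stripe_id % NUM_NODES, stripe_id, chunk)
    return len(stripes)

def parallel_read(total_stripes: int) -> bytes:
    """Reassemble the file by reading every stripe back from its node."""
    return b"".join(nodes[s % NUM_NODES][s] for s in range(total_stripes))

data = bytes(8 * STRIPE_SIZE)              # an 8 MiB "file"
count = parallel_write(data)
assert parallel_read(count) == data        # aggregate bandwidth scales with the number of elements
```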

[Image: Panasas architecture comparison]

The other key to what Panasas is doing is taking metadata requests off the data path used by the large reads and writes and offloading them directly onto the “Director” blades, which manage those requests while the real grunt work is left to the storage blades that handle the big read/write demands. The goal is to let users scale their metadata performance separately and avoid that I/O conflict. This isn’t entirely new; Lustre and GPFS manage things in essentially the same way, but the difference here, at least according to Noer, is the orchestration at the PanFS level.
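A minimal sketch of that separation, with hypothetical class and method names rather than anything from PanFS internals: the director-like service answers metadata lookups and hands back a layout, while bulk data moves only between the client and the storage blades.

```python
class DirectorBlade:
    """Stands in for the metadata service: answers lookups, never touches file data."""
    def __init__(self):
        self.layout = {}  # path -> list of (node_id, stripe_id) placements

    def open(self, path, num_nodes=4, stripes=8):
        # Hand the client a layout describing where the file's stripes live.
        self.layout.setdefault(path, [(s % num_nodes, s) for s in range(stripes)])
        return self.layout[path]

class StorageBlade:
    """Stands in for a data node: serves only bulk reads and writes."""
    def __init__(self):
        self.chunks = {}

    def write(self, stripe_id, data):
        self.chunks[stripe_id] = data

    def read(self, stripe_id):
        return self.chunks[stripe_id]

director = DirectorBlade()
storage = [StorageBlade() for _ in range(4)]

# The metadata request goes to the director; data then flows directly
# between the client and the storage blades, off the metadata path.
placements = director.open("/panfs/sim/run42/results.h5")
for node_id, stripe_id in placements:
    storage[node_id].write(stripe_id, b"x" * 1024)
payload = b"".join(storage[n].read(s) for n, s in placements)
```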

“When you look at the HPC space, you had software-only file systems that could provide great performance, but not the kind of reliability, high availability and manageability of something fully integrated. Then you also have the top-tier storage vendors who don’t have the performance levels needed for HPC, even if they’re able to provide the enterprise-grade features. We’re trying to do all of that in one place,” says Noer.
