November 10, 2010
In the midst of a management and business strategy revamp, Panasas is launching PAS 12, its newest parallel storage system. PAS, which stands for Panasas ActiveStor, is the company's flagship NAS storage line meant to serve HPC and similar performance-critical enterprise applications. PAS 12 is the fourth generation of the product, and is being touted as "the world's fastest parallel storage system."
Like its predecessors, PAS 12 is targeting data-rich high performance computing applications, in particular, seismic analysis, CFD, bio/pharm research apps, and manufacturing design.
Like those predecessors (PAS 7, 8, and 9), PAS 12 uses the same plug-and-play storage blade architecture and features the company's home-grown PanFS parallel file system. But the newest entrant boasts much better I/O bandwidth, metadata performance, and scalability, as well as some features that make it a more capable player in the datacenter.
It's not the cheapest storage solution on the market by any means. PAS 12 is offered in modular configurations starting at 40 TB of storage for $110,000 in a 4U chassis (one director blade plus 10 storage blades). A single director blade can be had for $30,000 if you want to incorporate some PAS 12 functionality into existing PAS set-ups. And there is a good reason to do just that, which I'll get to in a moment.
First to the numbers. Each 4U storage chassis delivers 1.5 GB/sec of throughput, which works out to 15 GB/sec per rack. Fully scaled to 10 racks, a PAS 12 system provides a whopping 150 GB/sec of aggregate I/O. That represents a 2.5-fold performance increase over the PAS 8 system introduced in 2009. NFS performance is getting a big boost as well, with IOPS increasing from 3,500 to 7,000, and read and write bandwidth soaring from 70 and 80 MB/sec up to 300 and 450 MB/sec, respectively. All of this is made possible by moving to beefier Intel Xeon "Nehalem"-based storage blades, which exploit the more powerful 64-bit processors and additional memory.
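The scaling arithmetic behind those figures can be checked in a few lines. The chassis-per-rack and rack counts below are taken from the numbers quoted in this article:

```python
# Back-of-the-envelope check of the PAS 12 throughput figures.
GB_PER_CHASSIS = 1.5     # GB/sec per 4U storage chassis
CHASSIS_PER_RACK = 10    # implied by 15 GB/sec per rack
MAX_RACKS = 10           # the maximum configuration

per_rack = GB_PER_CHASSIS * CHASSIS_PER_RACK   # 15 GB/sec per rack
full_system = per_rack * MAX_RACKS             # 150 GB/sec at full scale

print(f"per rack: {per_rack} GB/sec, full system: {full_system} GB/sec")
```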
Storage capacity is getting a nice increase as well. PAS 12 scales from 40 TB (one 4U box) up to 4 PB (10 racks). That means you could build a 4 petabyte file system under a single global namespace. Those numbers will bump up as drive capacities increase beyond 2 TB. And since PAS 12 has moved to a 64-bit architecture, the new system will be able to directly address all those extra bytes.
Metadata performance is also getting a big boost -- 2.5 times that of the previous PAS technology. That's especially important to many HPC applications that tend to bottleneck around metadata access. Better yet, customers who own existing PAS gear can slide in a PAS 12 director blade seamlessly and get the metadata performance boost instantly.
One new capability that Panasas is touting is its "Object RAID" feature. Basically, the RAID protection has been integrated into the PanFS operating system, precluding the need to include a separate RAID controller. The RAID integration turbo-charges the system's parallel rebuild performance, which Panasas claims is the best in the industry.
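The essential idea of object RAID -- protection computed per file across storage blades, rather than per disk in a hardware controller -- can be sketched as follows. This is purely illustrative (not Panasas code): a simple RAID-5-style scheme that stripes a file into data objects plus one XOR parity object, so any single lost object can be recovered; because parity is per-file, every file can be rebuilt independently, which is what makes rebuilds parallelizable:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def stripe_file(data, n_data):
    """Split a file into n_data padded objects plus one XOR parity object."""
    chunk = -(-len(data) // n_data)  # ceiling division
    objs = [data[i*chunk:(i+1)*chunk].ljust(chunk, b"\0") for i in range(n_data)]
    objs.append(xor_blocks(objs))    # the parity object
    return objs

def rebuild(objs, lost_idx):
    """Recover one lost object from the survivors (including parity)."""
    survivors = [o for i, o in enumerate(objs) if i != lost_idx]
    return xor_blocks(survivors)
```

In a file-system-integrated scheme like this, a failed blade triggers per-file rebuilds that can run concurrently across the whole system, instead of a single controller grinding through a disk block-by-block.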
Another new feature in PAS 12 is the addition of user quotas, which allows an IT administrator to parcel out storage capacity and institute billing on a per user basis. The idea here is to be able to treat the storage as a central resource for multiple computing systems, perhaps even a whole datacenter -- less than a cloud, but more than a silo.
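A per-user quota plus billing scheme of the kind described above boils down to simple accounting. The interface below is hypothetical (it is not the PanFS API), just a sketch of the mechanism:

```python
class QuotaManager:
    """Illustrative per-user storage quota and billing accounting."""

    def __init__(self):
        self.limits = {}   # user -> bytes allowed
        self.used = {}     # user -> bytes consumed

    def set_limit(self, user, limit_bytes):
        self.limits[user] = limit_bytes
        self.used.setdefault(user, 0)

    def charge(self, user, nbytes):
        """Record a write; refuse it if it would exceed the user's quota."""
        if self.used.get(user, 0) + nbytes > self.limits.get(user, 0):
            raise PermissionError(f"quota exceeded for {user}")
        self.used[user] += nbytes

    def bill(self, user, dollars_per_gb):
        """Compute a chargeback figure from bytes actually consumed."""
        return self.used.get(user, 0) / 2**30 * dollars_per_gb
```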
This last feature points to the company's intended new direction, which is to broaden its reach beyond the traditional HPC space, or at least beyond the HPC market segments in which Panasas has been especially strong. Part of this strategy shift began last April when the company brought in Faye Pairman as president and chief executive officer. In fact, the whole management team is being rebuilt around a more business-focused staff. "Panasas is in the process of building a new management team, literally at all levels of the company -- CEO, marketing, sales, engineering... everything," says Panasas chief marketing officer Barbara Murphy, who herself came on board just three months ago.
According to Murphy, the immediate goal is to stay focused on HPC, but the longer-term vision is to begin penetrating more deeply into the commercial enterprise space. Currently about 30 percent of the company's revenue comes from the energy sector (oil and gas applications) and another 30 percent from the government (mostly at research labs). The other 40 percent is strewn across universities, aerospace, finance, manufacturing, automotive, and bio/pharma.
In some cases, they are very thinly spread across these other segments. A good example is the aerospace sector, where Panasas can claim just a single customer: Boeing. Murphy says they just haven't scaled that success and actively gone after other aerospace customers, such as Airbus or the European Space Agency.
To do that, she says, they're going to have to turn on the marketing machine and get away from relying almost solely on a direct sales model. "I think it's very normal for an early-stage company to be very engineering and sales driven," says Murphy. "It's taking that success and bringing that to a broader audience."
Educating the customer on how parallel storage technology fits into their business is the other element of this. During a recent engagement with a hedge fund group from a major bank, Panasas found that the developers there spent a great deal of time fine-tuning their Monte Carlo simulation to deal with the storage I/O bottleneck. Effectively, the hedge fund group had to compromise the fidelity of the algorithm so that they could get an answer back in time to make an investment. They weren't aware that parallel storage technology could address that bottleneck. "For them, this is a breakthrough technology and a complete paradigm shift," explains Murphy.
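The fidelity trade-off in that anecdote is easy to quantify. If each simulated path must pull a fixed amount of data from storage within a deadline, the affordable path count is bandwidth-bound, and Monte Carlo error shrinks only as 1/sqrt(N). The numbers below are hypothetical, chosen purely to illustrate the effect:

```python
import math

def max_paths(bandwidth_gbs, bytes_per_path, deadline_s):
    """Paths affordable when each path reads bytes_per_path within deadline_s."""
    return int(bandwidth_gbs * 1e9 * deadline_s / bytes_per_path)

def std_error(n_paths, sigma=1.0):
    """Monte Carlo standard error shrinks as 1/sqrt(N)."""
    return sigma / math.sqrt(n_paths)

# Hypothetical: 1 MB of market data per path, 60-second trading deadline.
slow = max_paths(0.3, 1e6, 60)    # ~300 MB/sec NFS-class storage
fast = max_paths(15.0, 1e6, 60)   # ~15 GB/sec parallel-storage rack

# 50x more paths in the same window -> roughly 7x smaller error,
# i.e., higher fidelity without missing the investment deadline.
```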
Despite those challenges, Panasas has managed to maintain a strong balance sheet (or so they say -- being a private company, they have never offered up specific profit/loss figures). But according to the company, sales have been growing for five consecutive years, with 50 percent year-over-year revenue growth in FY10. They currently claim 300 active customers in more than 50 countries, and have increased the customer base by 50 percent since 2009. Those are enviable numbers for any company, but especially during some of the most challenging economic times in decades.
Scaling that success into a profitable long-term business is the next phase for Panasas. With an established customer base and cutting-edge parallel storage technology, it certainly seems to have the fundamentals in place.