September 17, 2012
Panasas has launched ActiveStor 14, the company's fifth-generation storage appliance aimed at high performance computing. The new offering adds solid state drives (SSDs) to what has been an almost exclusively hard disk drive (HDD)-based NAS storage line-up. The inclusion of SSDs in the company's flagship offering is further proof that flash memory has become a mainstream storage technology for accelerating HPC workloads.
It's also a recognition that HPC storage is about more than just streaming lots of data from cheap SATA drives. This has been the case for some time, even if the customers themselves were unaware of it. When Panasas surveyed 10 typical HPC sites (across government, finance, academia, and manufacturing), it found that 50 to 70 percent of their files fell into the "small file" category -- defined as less than 64KB.
This was true even for those users whose storage capacity was dominated by very large files, and who believed high-throughput I/O was the crux of their storage needs. "The reality is that almost all customers are dealing with mixed workloads," says Geoffrey Noer, Sr. Director of Product Marketing at Panasas.
The presence of so many small files suggests that directory information retrieval and random I/O performance are critical requirements across many HPC sites. At the same time, these users had a concrete need for high levels of streaming performance to feed at least a portion of their applications. The need for both bandwidth and IOPS from Panasas' customer base was central to the inclusion of SSD hardware alongside high capacity SATA drives in ActiveStor 14.
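For readers curious how their own file-size mix compares with the survey's, here is a minimal Python sketch (not a Panasas tool) that walks a directory tree and reports the share of files under the 64KB cutoff used above:

```python
import os

SMALL_FILE_THRESHOLD = 64 * 1024  # the 64KB "small file" cutoff from the survey

def small_file_fraction(root):
    """Walk a directory tree and return the fraction of files under the threshold."""
    small = total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip files that vanish or are unreadable mid-walk
            total += 1
            if size < SMALL_FILE_THRESHOLD:
                small += 1
    return small / total if total else 0.0
```

Running this over a scratch or project filesystem gives a quick sense of whether a workload leans toward the small-file, IOPS-bound profile the survey describes.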
Using flash technology isn't exactly virgin territory for Panasas. In 2009, they came out with ActiveStor 9, an SSD-accelerated storage appliance that was aimed specifically at small file I/O. Since it depopulated disk slots in favor of SSDs, though, ActiveStor 9 suffered on the throughput side.
With ActiveStor 14, the company delivers both IOPS and throughput by marrying big SATA disks with the latest SSD technology. The idea is to store all the file metadata on the SSDs, along with all the small file data. That puts the vast majority of the "hot" data in flash, which should greatly increase I/O performance.
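The placement rule described above can be sketched as a simple policy function. This is an illustration of the stated design (metadata and small-file data to flash, bulk data to disk), not Panasas code:

```python
SMALL_FILE_CUTOFF = 64 * 1024  # 64KB, matching the survey's small-file definition

def place(kind, size=0):
    """Return the storage tier ('ssd' or 'hdd') for a piece of data."""
    if kind == "metadata":
        return "ssd"  # all file metadata lives on flash
    if kind == "data" and size < SMALL_FILE_CUTOFF:
        return "ssd"  # small file data is "hot" and goes to flash too
    return "hdd"      # large, streaming data stays on high-capacity SATA
```

The payoff of such a split is that the random, seek-heavy operations (metadata lookups, small reads and writes) never touch spinning disk, while the disks do what they are good at: sequential streaming.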
There are three ActiveStor 14 models available, each with its own mix of HDD and SSD capacity on the storage blade to serve different application profiles.
1. For large file throughput-oriented applications, each blade houses two 4TB SATA disks, one 120GB SSD and 8GB of cache. This is aimed at energy exploration, government, manufacturing, and academia. List price for 81.2TB of storage is $125K.
2. For more mixed workloads, they've come up with a blade identical to the one above, but with a 300GB SSD. This configuration is targeted at analytics for biosciences, especially genomics. List price for 83TB is $145K.
3. For truly file-heavy, random IOPS applications, they have a blade with two 2TB SATA disks, one 480GB SSD, and 16GB of cache. Panasas calls this one the ActiveStor 14T (for turbo) and it's aimed at financial analytics, like Monte Carlo simulations for arbitrage modeling. Because of the greater ratio of SSD storage to HDD, this is the most expensive model, with a list price of $160K for 44.8TB.
But you get the performance you pay for. A shelf of ActiveStor 14T, with 27 data drives, delivered 20,745 operations per second (SPECsfs2008_nfs.v3) and an overall response time of 1.99 milliseconds. Using two shelves, operations per second doubled to 41,116 and overall response time was cut to 1.39 milliseconds.
It's no surprise Panasas developed ActiveStor configurations designed specifically for financial services and biosciences, since those two verticals have seen the biggest uptick in sales at Panasas over the last year. According to company chief marketing officer Barbara Murphy, they have seen revenue grow by 5X in the financial sector and nearly 2.5X in biosciences since 2011.
Panasas revenue, in general, has been growing at a nice clip over the past 12 months, and actually has been on the rise for four straight years. But commercial sales are accounting for a much greater share now: from about 55 percent in 2011 to over 70 percent in 2012. With public spending on the wane a bit, and businesses investing more heavily in HPC, Panasas intends to put a lot of energy into serving these markets. "We believe that the commercial space is going to adopt high performance compute very aggressively over the next couple of years," Murphy told HPCwire.
Over that same period, Panasas has grown its customer base from 300 to 400. They've done so by relying more on their OEM partners, like Dell, HP, SGI, and Bull, to expand the revenue base. Today Panasas claims a nice selection of elite customers, including seven of the top 10 public oil & gas firms, NIH, the Beijing Genomics Institute, UCLA, Leicester University, and BNP Paribas.
In ActiveStor 14, they believe they have a platform that can compete very well in the analytics domain, especially against more expensive offerings from EMC Isilon and NetApp. At a price point of $4/GB for the 14T model and under $2/GB for the less SSD-rich models, they compare favorably with the $12/GB price of Isilon's S200 and $9/GB for NetApp's FAS6240. The latter delivers slightly better performance (using the SPECsfs2008_nfs.v3 benchmark), but is more than twice as expensive on a capacity basis.
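A back-of-the-envelope check of those $/GB figures, using only the list prices and capacities quoted for the three configurations above:

```python
# List price (USD) and capacity (decimal TB) for each ActiveStor 14 model,
# as quoted in the article.
configs = {
    "throughput": (125_000, 81.2),  # two 4TB HDDs + 120GB SSD per blade
    "mixed":      (145_000, 83.0),  # same blade with a 300GB SSD
    "14T":        (160_000, 44.8),  # two 2TB HDDs + 480GB SSD per blade
}

for name, (price, tb) in configs.items():
    per_gb = price / (tb * 1000)    # decimal TB -> GB
    print(f"{name}: ${per_gb:.2f}/GB")
```

This works out to roughly $1.54/GB, $1.75/GB, and $3.57/GB respectively, consistent with the "under $2/GB" and "$4/GB" figures cited, and well below the Isilon and NetApp prices quoted.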
Despite the ActiveStor 14 design being suitable for a range of analytics-type applications, Panasas will continue to focus on its technical computing/HPC roots, according to Noer. "Our goal here is not to go after the enterprise market," he says. With the HPC market on a very healthy growth trajectory, that should be enough to keep Panasas on its upward revenue path.
The company is demonstrating ActiveStor 14 gear this week at the HPC for Wall Street event in New York City and at the Trading Architecture Europe conference in London. Systems can be ordered today, with shipments slated for November 2012.