September 21, 2011
DataDirect Networks (DDN) has announced the sequel to its original SFA10000 product. The SFA10K-X, unveiled on Tuesday, is the company's first major upgrade to its Storage Fusion Architecture product line, which originally launched in June 2009.
The new SFA10K-X (the X stands for extreme) is basically the same appliance as the SFA10000, providing high IOPS, capacity, and bandwidth for petascale storage. But the 10K-X delivers about 25 percent more performance than its predecessor -- up to 15 GB/sec of read-write bandwidth and 840,000 IOPS.
That was accomplished mainly with better software that uses the drives more efficiently. The company also moved the backend infrastructure entirely to 6 Gbps SAS, both to maintain performance during drive rebuilds and data protection operations and to feed high-speed SSDs. A single storage array couplet is equipped with twenty 4-lane 6 Gbps SAS cables, delivering close to a terabit per second of aggregate bandwidth.
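That aggregate figure is straightforward to sanity-check. Here is a minimal back-of-the-envelope sketch, assuming each cable carries four 6 Gbps lanes and that the near-terabit figure counts both directions of the full-duplex SAS links:

    # Rough check on the backend SAS bandwidth claim (assumptions noted above).
    cables = 20          # cables per storage array couplet
    lanes_per_cable = 4  # "4-lane" wide SAS cabling
    gbps_per_lane = 6    # 6 Gbps SAS

    one_way = cables * lanes_per_cable * gbps_per_lane  # 480 Gbps per direction
    print(f"{one_way} Gbps one way, {2 * one_way} Gbps counting both directions")

Counting both directions yields 960 Gbps, which lines up with the "close to a terabit per second" claim.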
The new SFA10K-X also bumps up the architecture's storage capacity. A single system can hold up to 1,200 drives -- a mixture of SSD, SATA, and SAS -- in just two racks. Fully populated with 3 TB disk drives, a system can hold 3.6 PB, a 50 percent increase in maximum capacity over the SFA10000.
SSDs are available in 200 GB and 400 GB flavors, so a fully packed system could theoretically scale to 480 TB of solid state storage. Given their expense, though, SSDs tend to be deployed at capacities sized to the application's most volatile data. According to Jeff Denworth, DDN's marketing VP, the SSDs are available for tiered storage environments, for customers who want extreme IOPS on a portion of their application data, or for file system metadata.
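Both capacity ceilings fall out of the same drive count. A quick illustrative calculation, assuming all 1,200 slots are filled with the largest drive of each type:

    # Maximum capacities for a fully populated 1,200-slot system.
    slots = 1200
    print(f"{slots * 3.0 / 1000:.1f} PB with 3 TB hard drives")  # 3.6 PB
    print(f"{slots * 0.4:.0f} TB with 400 GB SSDs")              # 480 TB

The 50 percent figure also implies the SFA10000 topped out around 2.4 PB (3.6 PB / 1.5), which would correspond to the same 1,200 slots filled with 2 TB drives.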
"You certainly don't need SSDs to hit the 15 GB/sec bandwidth rates," he says; that can be delivered with a pure hard disk configuration. "We're not yet seeing a tipping point where it makes sense to use SSDs wholesale to get the best bandwidth economy from a system," explains Denworth. "We can get full performance using ExaScaler, GridScaler, Lustre or GPFS with just plain SATA drives."
Like the SFA10000, the SFA10K-X is aimed at the HPC market, but DDN is also positioning the appliance to move into what it calls "adjacent markets" like big data, cloud, media streaming, and digital security -- basically any area with large content workloads hungry for both high bandwidth and IOPS.
To that end, DDN has incorporated support for VMware and VMware ESX into the new offering, the idea being to speed up virtual server and desktop environments. VMware support already exists in the high-throughput (and disk-only) S2A9900 product, but this is the first such support in the company's SFA line.
Fortunately, all the software upgrades can be rolled back into the SFA10000 product, so existing customers will be able to take advantage of the non-hardware performance tweaks, plus the VMware support. The retrofitted upgrades come free of charge. That's good news for customers like the French Atomic Energy Commission (CEA), which bought an SFA10000 last year for its Tera-100 supercomputer. At an aggregate 300 GB/sec, that system was already one of the fastest (if not the fastest) petascale storage deployments on the planet.
The SFA10K-X is available immediately and has already been shipping for some time, according to Denworth. Although no specific customers were mentioned in the press release, Denworth expects a large portion of SFA10K-X clients will be HPC users, reflecting DDN's main market focus. In fact, the new SFA upgrade points the way toward the next HPC storage milestone. Says Denworth: "Everybody here is focused on the needs of exascale."
But the expansion from HPC to those aforementioned adjacent markets, especially the big data space, is definitely part of DDN's larger strategy going forward. According to Erwan Menard, the newly appointed Chief Operating Officer (COO), the HPC storage technology that has propelled the company's success is poised to deliver its benefits wherever scalability, performance, and high capacity are required. In fact, his COO position was created largely to manage DDN's expected growth on the operations side.
Menard came from Hewlett-Packard, where he was VP and General Manager of HP's Communications and Media Solutions unit, in charge of 2,500 people. DDN has only a few hundred employees right now, but bringing Menard on board gives some indication of the growth it has in mind.
In addition to Menard, who officially joined at the beginning of September, the company has been rather active in high-level hires over the past few months. In February it brought in Jean-Luc Chatelain, another HP alum, who is now DDN's VP of Strategy and Technology. And just this week the company announced two more executives: Bill Cox, VP of worldwide channel sales, and a new CFO, Chris O'Meara. In addition, John Dorman, who brings with him a background in financial services, was appointed to the Board of Directors.
Beyond the new blood at the top, there are also 70 job openings at the company, with positions available in product support, professional services, and R&D. In the latter area, DDN is especially looking for help on the software front, which has become the main differentiator for most storage companies, HPC or otherwise.
The company appears quite healthy overall. Although the prospect of an IPO, an idea floated in 2008, has faded for now, DDN has weathered the recent downturn in the overall economy rather well. From 2007 through 2010, DDN recorded 83 percent growth and now claims annual revenue of about $200 million. That makes it several times larger than Panasas, the only other remaining pure-play HPC storage system vendor.
And despite DDN's aspirations to tackle big data and related markets, Menard maintains the company will simultaneously remain true to its high performance computing roots -- the implication being that other HPC storage vendors like BlueArc and Isilon, which were swallowed by larger storage companies, will drift away from their original focus. "We are the ones truly committed to HPC," he says.