Flash Forward

By Michael Feldman

September 29, 2011

Violin Memory’s launch this week of its latest and greatest flash memory arrays for primary storage got me thinking about how far and how fast solid state storage has traveled over the last few years.

Gone are the days when enterprise-grade flash was considered only for caching hyperactive data, aka tier 0 storage, layered on top of a largely disk-based storage system. We’re now seeing much more generalized solid state storage solutions, encouraging at least one writer to state the case more starkly with an article titled “Violin Memory: This Is The Impact Event Before The Extinction Of Hard Disks.”

While Violin is among the better-known and more successful solid state storage vendors, it’s certainly not the first to go after tier 1 disks in the datacenter. Both Texas Memory Systems (TMS) and Nimbus Data Systems have SSD boxes that target primary storage.

Those two employ enterprise multi-level cell (eMLC) flash technology to deliver products that are on par, cost-wise, with 15K disk-based arrays. Compared to single-level cell (SLC) flash, eMLC is somewhat less performant and needs more attentive error correction, but it is much less expensive.

Violin’s newest 6000 series flash arrays come in both SLC and standard MLC flavors, but the company wraps a lot of enterprise goodies into the systems, such as high availability, redundancy, and serviceability. Violin is not making pricing public on the new product line, so there is no way to compare its offerings to those of Nimbus and TMS.

Even before Violin’s 6000 boxes were launched, the company was already bumping against (and in some cases, displacing) storage stalwarts like EMC and NetApp, two companies that sprinkle flash atop their disk-based storage. Vendors like Violin, TMS, Nimbus and Huawei Symantec think they can skip that flash-cache approach with their latest all-solid-state arrays.

These vendors think they’ve closed the up-front cost gap, at least with regard to Fibre Channel and SAS 15K disk systems (but not the lower cost SATA drives). Although the gap in price per GB between flash and disk componentry is still fairly wide, even for eMLC, once you wrap a complete storage system around it, the differential shrinks away. Both TMS and Nimbus, for example, are in the $12 to $13/GB range for their flash system products.

On the other hand, no one that I know of is arguing that disk storage is going away completely. For capacity storage, especially where the data isn’t in constant read/write demand, disks will be the technology of choice for the foreseeable future. The “flash and trash” model, where all active data will be on flash and the rest will be relegated to low-cost SATA drives, is where a lot of people in the industry think we are headed.

For the high performance computing crowd, the story may be a little different. At the upper edge of HPC, capacities are just too darn big for flash to swallow whole. The just-announced 55-petabyte NetApp storage system for the upcoming Sequoia supercomputer, to be installed at Lawrence Livermore National Laboratory, could certainly not be accomplished with a solid state setup today. Even at the aforementioned $12/GB price point, such a system would cost well over $600 million.
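For what it’s worth, the back-of-the-envelope math behind that figure is straightforward. The sketch below is just an illustration: it uses the $12/GB system price quoted earlier and a decimal petabyte-to-gigabyte conversion, with no allowance for redundancy, spares, or volume discounts.

```python
# Back-of-the-envelope cost of an all-flash version of the 55 PB Sequoia
# file system at the ~$12/GB flash system price cited above.
# Illustration only: ignores RAID/spare overhead and volume discounting.

capacity_pb = 55                  # announced file system capacity, petabytes
price_per_gb = 12.0               # low end of the TMS/Nimbus range, $/GB

capacity_gb = capacity_pb * 1_000_000     # 1 PB = 1,000,000 GB (decimal units)
total_cost = capacity_gb * price_per_gb

print(f"~${total_cost / 1e6:,.0f} million")   # prints "~$660 million"
```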

That said, smaller HPC customers could certainly make flash a bigger part of their lives, as some commercial and government customers are already doing. Nimbus has installed 100 TB of its flash storage at eBay, and Violin has two petabyte-sized deployments of its memory arrays, one at AOL and the other at a US government agency. Given the 10-fold or so cost advantages in power and floor space, even premium-priced flash could make economic sense for reasonably large systems, especially for the kinds of data-intensive workloads that are becoming more and more common in HPC.

The largest flash storage deployment in HPC looks like it will be the Gordon supercomputer at the San Diego Supercomputer Center (SDSC). That system, built by Appro, will be outfitted with 300 TB of the new Intel Solid-State Drive 710 Series, enough to deliver 35 million IOPS to data-hungry science applications. According to the press release, “SDSC has taken delivery of Gordon’s 64 I/O nodes equipped with Intel’s 710 Series, and they are already available to users of Dash, a smaller, prototype version of Gordon.”

As announced at IDF, the new Intel SSD parts are based on the less expensive, higher capacity standard MLC technology, but use Intel’s own High Endurance Technology (HET), which the company claims offers “the same high levels of performance as single-level cell (SLC) memory but at a more attractive price point.” According to various sources, that price point looks to be about $6.45/GB. Keep in mind these are storage drives, not the more full-featured flash SAN boxes mentioned above.

A lot of HPC installations will probably gravitate toward these standalone SSDs, or even PCIe-connected flash devices, so that solid state storage can be integrated intimately into the server infrastructure and deliver the best performance boost for the buck. On the other hand, Nimbus has revealed that it has a number of HPC customers for its flash storage boxes in oil and gas, financial services, life sciences, and education. There’s no reason to think that other like-minded users won’t start adopting the technology too as it proves itself.
