Solid State Drives: Change in a Flash

By John West

March 26, 2009

There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today’s flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect’s toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.

When I showed up at my first HPC gig in the early 1990s, our Crays had solid state disks, and they weren’t even close to new then; semiconductor memory-based SSDs date back to the 1970s and 80s. But they were expensive, and they didn’t really have a place in the cost-driven economics of the commodity-processor supercomputers that emerged in the mid-90s.

So what is an SSD? The Wikipedia entry says:

A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications.

Michael Cornwell, lead technologist for flash memory technology at Sun, offers a similar definition in fewer words: “An SSD is a non-rotating device that emulates what a disk drive does.”

The reason for the renewed interest in this old idea comes down to money. The new generation of SSDs is being built from NAND flash components, the kind of nonvolatile memory used in everything from USB memory drives to cameras and iPods. Driven by demand in the consumer market, SSD prices have dropped considerably. You can see the effect for yourself at Best Buy, where a 4 GB flash drive now sells for less than $15. Just a few years ago, that much flash memory would have cost you hundreds of dollars.

This demand also caused the flash memory industry to leapfrog the DRAM industry in terms of the size of the silicon process used to create the chips. David Flynn, chief technical officer and co-founder of Fusion-io, explains that all of this has come together to make NAND flash a very attractive storage option. “Flash memory costs less per bit (than DRAM), doesn’t put off heat, and you can stack it vertically into packages and then stack the packages,” putting a lot of bits in a very small space.

Flash-based SSDs have many inherent advantages over spinning disks that make them attractive to system architects. In addition to being dense and relatively cool, they have no moving parts, and unlike hard disk drives they can service between 10 and 20 operations at the same time, making them inherently parallel devices. Flash storage also typically has latency orders of magnitude lower than traditional spinning drives (microseconds versus milliseconds).

Sun’s Cornwell says that, as an example, Sun’s recently announced SSD offers “thousands of IOPS, which is much greater than the 300 or so you can get from traditional hard disk drives.” SSDs also offer substantial power savings, consuming an order of magnitude less power than hard disk drives.
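To see why latency matters so much for IOPS, consider a rough back-of-envelope sketch: a drive can complete roughly one request per access time for each request it keeps in flight. The latencies below are assumed for illustration, not vendor specifications.

```python
# Back-of-envelope: how access latency caps the IOPS a device can
# deliver per outstanding request. Latencies are assumed for
# illustration, not vendor specifications.

hdd_latency_s = 5e-3    # roughly 5 ms seek plus rotational delay (assumed)
ssd_latency_s = 100e-6  # roughly 100 microseconds flash access (assumed)

def max_iops(latency_s, queue_depth=1):
    # One request completes every latency_s seconds for each request
    # kept in flight, so IOPS grows with queue depth until saturation.
    return queue_depth / latency_s

print(f"HDD, queue depth 1:  {max_iops(hdd_latency_s):,.0f} IOPS")   # ~200
print(f"SSD, queue depth 1:  {max_iops(ssd_latency_s):,.0f} IOPS")   # ~10,000
print(f"SSD, queue depth 10: {max_iops(ssd_latency_s, 10):,.0f} IOPS")
```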

Sounds great, so let’s pull out all the disks and replace them with SSDs, right? Not so fast, says Jimmy Daley, HP’s Industry Standard Server (ISS) marketing manager. First of all, cost is an issue. While flash-based SSDs are much faster than traditional spinning disks, they are also “an order of magnitude or two more expensive per GB than disk.”

There are other issues as well, such as the disparity between read and write speeds. For example, Cornwell says Sun’s SSD solution achieves 35,000 IOPS on read, but only 3,300 on write — a big difference that may need to be considered, depending upon your application. On the other hand, Flynn maintains his company’s ioDrive keeps write throughput within shouting distance of its read performance, and recent tests by Tom’s Hardware seem to bear this out.
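A rough way to see how much the slower write path can drag down a mixed workload is to blend the two quoted rates by per-operation service time. The sketch below uses Sun’s quoted figures as inputs; the blending model is a simplification, not a vendor methodology.

```python
# Rough mixed-workload estimate using Sun's quoted read and write IOPS.
# Blending by per-operation service time is a simplification, not a
# vendor methodology.

read_iops = 35_000
write_iops = 3_300

def mixed_iops(read_fraction):
    # Average time per operation, weighted by the read/write mix.
    time_per_op = read_fraction / read_iops + (1 - read_fraction) / write_iops
    return 1.0 / time_per_op

for rf in (1.0, 0.9, 0.5):
    print(f"{rf:.0%} reads -> roughly {mixed_iops(rf):,.0f} IOPS")
```

Even a workload that is 90 percent reads lands closer to the write rate than the read rate, which is why the disparity matters.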

System designers also need to consider that flash behavior under enterprise workloads is not yet well understood. The cells used to store bits in NAND flash can only be rewritten a fixed number of times. This hasn’t typically been a problem in the consumer space, where the duty cycle can be as low as 0.2 or 0.5 percent. While flash memory vendors are addressing the issue with wear-leveling algorithms and other, more innovative approaches, we still do not know how these durability characteristics will affect performance in the enterprise, where duty cycles can be 100 times greater.
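The basic idea behind wear leveling is simply to spread writes across physical blocks so that no single block burns through its erase budget early. The toy sketch below illustrates only that principle; real SSD controllers add remapping tables, garbage collection, and spare capacity.

```python
# Toy illustration of the idea behind wear leveling: spread writes over
# physical blocks so no single block exhausts its erase budget early.
# Real SSD controllers are far more sophisticated; this sketch shows
# only the principle.

class ToyWearLeveler:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block):
        # Direct each write to the least-worn physical block not in use.
        in_use = set(self.mapping.values())
        free = [b for b in range(len(self.erase_counts)) if b not in in_use]
        target = min(free, key=lambda b: self.erase_counts[b])
        self.mapping[logical_block] = target
        self.erase_counts[target] += 1

wl = ToyWearLeveler(num_physical_blocks=8)
for i in range(100):
    wl.write(logical_block=i % 4)  # keep hammering the same 4 logical blocks
print(wl.erase_counts)             # erasures end up spread across all 8 blocks
```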

So, where does flash fit in HPC? First, there are the obvious density and power advantages, which could have a big impact on whether a specific system fits into a specific facility. Also, many vendors are thinking in terms of using flash-based SSDs to replace the spinning disks used for scratch space on high performance computers. This approach gives each of a system’s processors much faster access to data during computations, when time is of the essence. Being able to read data so much faster could be key to enabling the growing class of data-intensive applications. It could also make, for example, traditional application checkpoint/restart practical on a larger class of systems than it is today.
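The checkpoint argument is essentially bandwidth arithmetic: the time to dump a memory image to scratch is the image size divided by the aggregate write bandwidth, so faster scratch directly shrinks the checkpoint window. The figures below are assumed for illustration and do not describe any particular system.

```python
# Bandwidth arithmetic behind the checkpoint/restart argument: the time
# to dump a memory image to scratch is the image size divided by the
# aggregate write bandwidth. All figures are assumed for illustration.

def checkpoint_minutes(memory_tb, bandwidth_gb_per_s):
    return memory_tb * 1024 / bandwidth_gb_per_s / 60

memory_tb = 50           # total memory image to checkpoint (assumed)
disk_bw_gb_per_s = 20    # aggregate disk-based scratch bandwidth (assumed)
flash_bw_gb_per_s = 200  # aggregate flash scratch bandwidth (assumed)

print(f"disk scratch:  {checkpoint_minutes(memory_tb, disk_bw_gb_per_s):.1f} minutes")
print(f"flash scratch: {checkpoint_minutes(memory_tb, flash_bw_gb_per_s):.1f} minutes")
```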

But there are other places where flash-based memory devices might have an even bigger impact in HPC. Fusion-io’s Flynn thinks in terms of balance, and in particular in terms of how imbalances have driven system designers to compensate.

For Flynn, the growing disparity between access times for data on disk and data closer to the CPU has created “pathologies” in system design and user behavior. He observes that system designers amass large amounts of RAM so that data can be kept near the CPU, and gang together large numbers of disk spindles in complex parallel filesystems to improve bandwidth when data has to be moved to or from secondary storage. He also sees the scale-out datacenter as a symptom of the data access disparity: rather than plugging lots of RAM and disk into single systems, many smaller systems are aggregated to accomplish the same thing.

“But the most pernicious pathology,” Flynn says, “occurs when application specialists spend hours tuning applications to effectively manage data flow. Inevitably, this leads to very brittle applications that have to be re-tuned when moving from one system to another.”

Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that “balanced systems lead to cost effective throughput.” Fusion-io’s device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.

“You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance,” says Flynn. Why does this matter? He gave NASTRAN as a customer example, in which jobs that took three days to run completed in six hours on the same system, with no change to the application, after the flash devices were installed.
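For a sense of scale, dividing Flynn’s quoted aggregates across the 15 cards gives the rough per-card figures below; these are derived back-of-envelope numbers, not vendor specifications.

```python
# Per-card figures derived by dividing Flynn's quoted aggregates across
# the 15 cards; back-of-envelope numbers, not vendor specifications.

cards = 15
total_bw_gb_per_s = 10
total_capacity_tb = 10
total_iops = 1_000_000

print(f"~{total_bw_gb_per_s / cards:.2f} GB/s per card")
print(f"~{total_capacity_tb * 1024 / cards:.0f} GB per card")
print(f"~{total_iops / cards:,.0f} IOPS per card")
```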

Despite the promise of faster performance for less power, there are still significant hurdles to be cleared before flash-based SSDs achieve broad deployment in either enterprise or supercomputing datacenters. The read/write disparity needs to be addressed in a way that doesn’t compromise the current power advantages of flash, and questions of durability and reliability with the high duty cycles of enterprise-grade equipment still need to be addressed.

But one thing we have seen in HPC over the past 20 years is that volume wins, and the forces driving the volume adoption of flash-based storage in the consumer market aren’t slowing down. As prices continue to fall, HPC vendors are going to be increasingly motivated to come up with new ways to build value on this consumer platform and make it a better fit for serious computing. This could mean some important advantages for users desperate for better performance from their data hierarchy.
