Solid State Drives: Change in a Flash

By John West

March 26, 2009

There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today’s flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect’s toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.

When I showed up at my first HPC gig in the early 1990s, our Crays had solid state disks, and they weren’t even close to new then; semiconductor memory-based SSDs date back to the 1970s and 80s. But they were expensive, and they didn’t really have a place in the cost-driven economics of the commodity-processor supercomputers that emerged beginning in the mid-1990s.

So what is an SSD? The Wikipedia entry says:

A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications.

Michael Cornwell, lead technologist for flash memory technology at Sun, offers a similar, more succinct definition: “An SSD is a non-rotating device that emulates what a disk drive does.”

The reason for the renewed interest in this old idea comes down to money. The new generation of SSDs is being built from NAND flash components, the kind of nonvolatile memory used in everything from USB memory drives to cameras and iPods. Driven by demand in the consumer market, SSD prices have dropped considerably. You can see this effect for yourself when you head down to Best Buy and find that you can buy a 4 GB flash drive for less than $15.00. Just a few years ago, that amount of flash memory would have cost you hundreds of dollars.

This demand also caused the flash memory industry to leapfrog the DRAM industry in terms of the size of the silicon process used to create the chips. David Flynn, chief technical officer and co-founder of Fusion-io, explains that all of this has come together to make NAND flash a very attractive storage option. “Flash memory costs less per bit (than DRAM), doesn’t put off heat, and you can stack it vertically into packages and then stack the packages,” putting a lot of bits in a very small space.

Flash-based SSDs have many inherent advantages over spinning disks that make them attractive to system architects. In addition to being dense and running relatively cool, they have no moving parts, and unlike hard disk drives, flash-based SSDs can service between 10 and 20 operations concurrently, making them inherently parallel devices. Flash storage also typically has at least three orders of magnitude lower latency than traditional spinning drives (microseconds versus milliseconds).
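The connection between those two numbers, lower latency and higher concurrency, and the IOPS figures quoted below follows from simple queueing arithmetic (Little's law). A back-of-the-envelope sketch, using illustrative latencies and concurrency levels of my own choosing rather than figures from any vendor:

```python
# Rough IOPS estimate from latency and concurrency (illustrative
# numbers only). By Little's law, sustainable IOPS is approximately
# the number of operations in flight divided by average latency.

def estimated_iops(concurrency, latency_seconds):
    """Approximate IOPS for a device that keeps `concurrency`
    operations in flight, each completing in `latency_seconds`."""
    return concurrency / latency_seconds

# A hard disk serves one request at a time with ~5 ms of seek/rotate.
hdd_iops = estimated_iops(concurrency=1, latency_seconds=5e-3)

# A flash SSD might service ~15 requests at once at ~100 us each.
ssd_iops = estimated_iops(concurrency=15, latency_seconds=100e-6)

print(f"HDD: ~{hdd_iops:,.0f} IOPS")  # ~200
print(f"SSD: ~{ssd_iops:,.0f} IOPS")  # ~150,000
```

The same arithmetic explains why a single disk tops out in the hundreds of IOPS while flash devices reach the thousands or beyond.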

Sun’s Cornwell says that, as an example, Sun’s recently announced SSD offers “thousands of IOPS, which is much greater than the 300 or so you can get from traditional hard disk drives.” SSDs also offer substantial power savings, consuming an order of magnitude less power than hard disk drives.

Sounds great, so let’s pull out all the disks and replace them with SSDs, right? Not so fast, says Jimmy Daley, HP’s Industry Standard Server (ISS) marketing manager. First of all, cost is an issue. While flash-based SSDs are much faster than traditional spinning disks, they are also “an order of magnitude or two more expensive per GB than disk.”

There are also other issues, like the disparity between read and write speeds. For example, Cornwell says Sun’s SSD solution achieves 35,000 IOPS on read but only 3,300 on write, a big difference that may need to be considered, depending upon your application. On the other hand, Flynn maintains his company’s ioDrive keeps write throughput within shouting distance of its read performance, and recent tests by Tom’s Hardware seem to bear this out.

System designers also need to consider that flash devices have largely unproven performance characteristics in the enterprise. The cells used to store bits in NAND flash can only be rewritten a fixed number of times. This hasn’t typically been a problem in the consumer space, where the duty cycle can be as low as 0.2 or 0.5 percent. While flash memory vendors are addressing this issue with wear leveling algorithms and other, more innovative approaches, we still do not know how these durability characteristics will affect performance in the enterprise, where duty cycles can be 100 times greater.
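The idea behind wear leveling is simply to spread rewrites across all of the erase blocks so that no one block exhausts its budget while others sit fresh. A minimal sketch of the concept (real SSD controllers implement far more sophisticated, firmware-level schemes, including remapping of logical addresses; this toy model just picks the least-worn block):

```python
# A minimal wear-leveling sketch (illustrative only; not any
# vendor's actual algorithm). Each write is steered to the erase
# block with the lowest erase count so wear accumulates evenly.

class WearLeveler:
    def __init__(self, num_blocks, max_erases):
        self.erase_counts = [0] * num_blocks
        self.max_erases = max_erases  # per-block rewrite budget

    def write(self):
        # Choose the least-worn block.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        if self.erase_counts[block] >= self.max_erases:
            raise RuntimeError("all blocks worn out")
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(num_blocks=4, max_erases=3)
for _ in range(8):
    wl.write()
print(wl.erase_counts)  # wear is spread evenly: [2, 2, 2, 2]
```

Without leveling, a hot logical address would hammer the same physical block until it failed; with it, the device's lifetime scales with total capacity rather than with the busiest block.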

So, where does flash fit in HPC? First, there are the obvious density and power advantages, which could have a big impact on whether a specific system fits into a specific facility. Also, many vendors are thinking in terms of using flash-based SSDs to replace the spinning disks used for scratch space on high performance computers. This approach gives each of a system’s processors much faster access to data during computations, when time is of the essence. Being able to read data so much faster could be key to enabling the growing class of data-intensive applications. It could also make, for example, traditional application checkpoint/restart viable on a larger class of systems than it is today.
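The checkpoint/restart argument comes down to simple arithmetic: the time to take a checkpoint is roughly the memory footprint to be dumped divided by the scratch bandwidth available. A sketch with hypothetical numbers (the memory size and bandwidths below are assumptions for illustration, not figures from the article):

```python
# Rough checkpoint-time comparison. Assumed figures: a partition
# with 1 TB of memory to dump, spinning scratch at ~1 GB/s versus
# a flash pool at ~10 GB/s.

def checkpoint_seconds(memory_gb, bandwidth_gb_per_s):
    """Approximate time to write a full-memory checkpoint."""
    return memory_gb / bandwidth_gb_per_s

memory_gb = 1000  # 1 TB of aggregate memory (assumed)

print(checkpoint_seconds(memory_gb, 1.0))   # disk scratch: ~1000 s
print(checkpoint_seconds(memory_gb, 10.0))  # flash pool:   ~100 s
```

When a checkpoint takes minutes instead of the better part of an hour, applications can afford to checkpoint more often, which is what makes the technique practical on larger systems.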

But there are other places where flash-based memory devices might have an even bigger impact in HPC. Fusion-io’s Flynn thinks in terms of balance, and in particular in terms of how imbalances have driven system designers to compensate.

For Flynn, the growing disparity between access times for data on disk and data closer to the CPU has created “pathologies” in system design and user behavior. He observes that system designers have amassed large amounts of RAM so they can keep data near the CPU, and have ganged together large numbers of disk spindles in complex parallel filesystems to improve bandwidth when data has to be moved to or from secondary storage. He also sees the scale-out datacenter as a symptom of the data access disparity: rather than plugging lots of RAM and disk into single systems, many smaller systems are aggregated to accomplish the same thing.

“But the most pernicious pathology,” Flynn says, “occurs when application specialists spend hours tuning applications to effectively manage data flow. Inevitably, this leads to very brittle applications that have to be re-tuned when moving from one system to another.”

Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that “balanced systems lead to cost effective throughput.” Fusion-io’s device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.

“You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance,” says Flynn. Why does this matter? He gave NASTRAN as a customer example: after the flash devices were installed, jobs that had taken three days to run completed in six hours on the same system, with no change to the application.

Despite the promise of faster performance for less power, there are still significant hurdles to be cleared before flash-based SSDs achieve broad deployment in either enterprise or supercomputing datacenters. The read/write disparity needs to be addressed in a way that doesn’t compromise the current power advantages of flash, and questions of durability and reliability with the high duty cycles of enterprise-grade equipment still need to be addressed.

But one thing we have seen in HPC over the past 20 years is that volume wins, and the forces driving the volume adoption of flash-based storage in the consumer market aren’t slowing down. As prices continue to fall, HPC vendors are going to be increasingly motivated to come up with new ways to build value on this consumer platform and make it a better fit for serious computing. This could mean some important advantages for users desperate for better performance from their data hierarchy.
