Solid State Drives: Change in a Flash

By John West

March 26, 2009

There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today’s flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect’s toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.

When I showed up at my first HPC gig in the early 1990s, our Crays had solid state disks, and they weren't even close to new then; semiconductor memory-based SSDs date back to the 1970s and '80s. But they were expensive, and they didn't really have a place in the cost-driven economics of the commodity-processor supercomputers that emerged beginning in the mid-1990s.

So what is an SSD? The Wikipedia entry says:

A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications.

Michael Cornwell, lead technologist for flash memory technology at Sun, offers a similar but more succinct definition: "An SSD is a non-rotating device that emulates what a disk drive does."

The reason for the renewed interest in this old idea comes down to money. The new generation of SSDs are being built from NAND flash components, the kind of nonvolatile memory used in everything from USB memory drives to cameras and iPods. Driven by the demand in the consumer market, SSD prices have dropped considerably. You can see this effect for yourself when you head down to Best Buy and find that you can buy a 4 GB flash drive for less than $15.00. Just a few years ago, that amount of flash memory would have cost you hundreds of dollars.

This demand also caused the flash memory industry to leapfrog the DRAM industry in terms of the size of the silicon process used to create the chips. David Flynn, chief technical officer and co-founder of Fusion-io, explains that all of this has come together to make NAND flash a very attractive storage option. “Flash memory costs less per bit (than DRAM), doesn’t put off heat, and you can stack it vertically into packages and then stack the packages,” putting a lot of bits in a very small space.

Flash-based SSDs have many inherent advantages over spinning disks that make them attractive to system architects. In addition to being dense and relatively cool, they have no moving parts and, unlike hard disk drives, can service 10 to 20 operations at the same time, making them inherently parallel devices. Flash storage also typically has two to three orders of magnitude lower latency than traditional spinning drives (microseconds versus milliseconds).

As an example, Cornwell says Sun's recently announced SSD offers "thousands of IOPS, which is much greater than the 300 or so you can get from traditional hard disk drives." SSDs also offer substantial power savings, consuming an order of magnitude less power than hard disk drives.
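
To see where a figure like 300 IOPS comes from, a back-of-the-envelope calculation suffices; the seek time and spindle speed below are illustrative assumptions for a fast enterprise drive, not vendor specifications:

```python
# Back-of-the-envelope random-IOPS ceiling for one spinning drive.
# Seek time and spindle speed are assumed figures for a fast
# enterprise drive, not vendor specifications.
avg_seek_ms = 2.0                     # average seek time (assumed)
rpm = 15_000                          # spindle speed (assumed)
avg_rotation_ms = 0.5 * 60_000 / rpm  # half a revolution on average

service_time_ms = avg_seek_ms + avg_rotation_ms
print(f"~{1000 / service_time_ms:.0f} random IOPS")
# ~250 IOPS -- the same order as the ~300 Cornwell quotes; an SSD
# pays neither seek nor rotational latency.
```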

Sounds great, so let’s pull out all the disks and replace them with SSDs, right? Not so fast, says Jimmy Daley, HP’s Industry Standard Server (ISS) marketing manager. First of all, cost is an issue. While flash-based SSDs are much faster than traditional spinning disks, they are also “an order of magnitude or two more expensive per GB than disk.”

There are also other issues, like the disparity between read and write speeds. For example, Cornwell says Sun's SSD solution achieves 35,000 IOPS on reads but only 3,300 on writes, a big difference that may need to be considered, depending upon your application. On the other hand, Flynn maintains that his company's ioDrive keeps write throughput within shouting distance of its read performance, and recent tests by Tom's Hardware seem to bear this out.
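
How much that asymmetry hurts depends on the read/write mix, since per-operation service times combine harmonically. A minimal sketch using Sun's quoted figures and a hypothetical 70/30 read/write workload:

```python
# Blended IOPS for a mixed workload on a read/write-asymmetric device.
# Per-operation service times add harmonically, so even a modest
# fraction of slow writes pulls the blended rate down sharply.
read_iops, write_iops = 35_000, 3_300  # Sun's quoted figures
read_fraction = 0.70                   # hypothetical 70/30 read/write mix

blended = 1 / (read_fraction / read_iops + (1 - read_fraction) / write_iops)
print(f"~{blended:,.0f} IOPS")  # ~9,000 IOPS -- far below the read-only figure
```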

System designers also need to consider that flash devices have largely unproven performance characteristics in the enterprise. The cells used to store the bits in NAND flash can only be rewritten a fixed number of times. This hasn't typically been a problem in the consumer space, where the duty cycle can be as low as 0.2 or 0.5 percent. While flash memory vendors are addressing this issue with wear-leveling algorithms and other, more innovative approaches, we still do not know how these durability characteristics will affect performance in the enterprise, where duty cycles can be 100 times greater.
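
A rough way to reason about endurance is that a drive's total writable volume is its capacity times its rated program/erase cycles, spread evenly by wear leveling. A minimal sketch, where every figure (P/E cycle rating, write amplification, daily write volume) is an illustrative assumption:

```python
# Rough drive-lifetime estimate under ideal wear leveling.
# Every figure here is an illustrative assumption, not a vendor spec.
capacity_gb = 80              # drive capacity (assumed)
pe_cycles = 10_000            # rated program/erase cycles per cell (assumed)
write_amplification = 2.0     # internal writes per host write (assumed)
host_writes_gb_per_day = 100  # sustained enterprise duty cycle (assumed)

total_writable_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_writable_gb / host_writes_gb_per_day / 365
print(f"~{lifetime_years:.0f} years at this duty cycle")  # ~11 years
# The same drive under a 100x heavier write load would wear out in
# a matter of weeks, which is exactly the enterprise concern above.
```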

So, where does flash fit in HPC? First, there are the obvious density and power advantages, which could have a big impact on the viability of putting a specific system into a specific facility. Also, many vendors are thinking in terms of using flash-based SSDs to replace the spinning disks used for scratch space on high performance computers. This approach gives each of a system's processors much faster access to data during computations, when time is of the essence. Being able to read data so much faster could be key to enabling the growing class of data-intensive applications. It could also make, for example, traditional application checkpoint/restart viable on a larger class of systems than is practical today.
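
To see why checkpoint/restart is so sensitive to storage bandwidth, consider the time to dump a system's memory image to scratch; a minimal sketch, where the system size and bandwidths are hypothetical round numbers:

```python
# Time to dump a system's memory image to scratch storage.
# System size and bandwidths are hypothetical round numbers.
memory_tb = 50        # aggregate memory to checkpoint (assumed)
disk_bw_gb_s = 10     # parallel filesystem of spinning disks (assumed)
flash_bw_gb_s = 100   # aggregated flash scratch (assumed)

for name, bw in (("disk", disk_bw_gb_s), ("flash", flash_bw_gb_s)):
    minutes = memory_tb * 1024 / bw / 60
    print(f"{name}: ~{minutes:.0f} minutes per checkpoint")
# disk: ~85 minutes, flash: ~9 minutes -- the difference between a
# checkpoint you schedule around and one you take routinely.
```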

But there are other places where flash-based memory devices might have an even bigger impact on HPC. Fusion-io's Flynn thinks in terms of balance, and in particular in terms of how imbalances have driven system designers to compensate.

For Flynn, the growing disparity between access times for data on disk and data closer to the CPU has created "pathologies" in system design and user behavior. He observes that system designers have amassed large amounts of RAM so they can keep data near the CPU, and have amassed large numbers of disk spindles in complex parallel filesystems to improve bandwidth when data must be moved to or from secondary storage. He also sees the scale-out datacenter as a symptom of this data access disparity: rather than plugging lots of RAM and disk into single systems, many smaller systems are aggregated to accomplish the same thing.

“But the most pernicious pathology,” Flynn says, “occurs when application specialists spend hours tuning applications to effectively manage data flow. Inevitably, this leads to very brittle applications that have to be re-tuned when moving from one system to another.”

Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that “balanced systems lead to cost effective throughput.” Fusion-io’s device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.
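
A minimal sketch of where such a device would sit in the latency hierarchy, using assumed, order-of-magnitude latency figures rather than measurements:

```python
# Rough access latencies across the data hierarchy (assumed,
# order-of-magnitude figures, not measurements).
hierarchy_ns = {
    "DRAM":          100,        # ~100 nanoseconds
    "PCIe flash":    50_000,     # ~50 microseconds
    "spinning disk": 5_000_000,  # ~5 milliseconds
}

for tier, ns in hierarchy_ns.items():
    print(f"{tier:>13}: {ns:>9,} ns")
# Flash splits the ~50,000x latency gap between DRAM and disk,
# which is the "first-class participant" role Flynn describes.
```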

"You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance," says Flynn. Why does this matter? He cited a customer running NASTRAN as an example: jobs that had taken three days to run completed in six hours on the same system, with no change to the application, after the flash devices were installed.
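
Taken at face value, those aggregate numbers unpack into per-card figures that are easy to sanity-check (this simply divides Flynn's quoted totals and assumes linear scaling across cards):

```python
# Unpacking Flynn's quoted aggregates into per-card figures.
cards = 15
total_bw_gb_s, total_capacity_tb, total_iops = 10, 10, 1_000_000

print(f"~{total_bw_gb_s / cards * 1000:.0f} MB/s per card")    # ~667 MB/s
print(f"~{total_capacity_tb / cards * 1000:.0f} GB per card")  # ~667 GB
print(f"~{total_iops / cards:,.0f} IOPS per card")             # ~66,667 IOPS
```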

Despite the promise of faster performance for less power, there are still significant hurdles to be cleared before flash-based SSDs achieve broad deployment in either enterprise or supercomputing datacenters. The read/write disparity needs to be narrowed in a way that doesn't compromise flash's current power advantages, and questions of durability and reliability under the high duty cycles of enterprise-grade equipment remain open.

But one thing we have seen in HPC over the past 20 years is that volume wins, and the forces driving the volume adoption of flash-based storage in the consumer market aren’t slowing down. As prices continue to fall, HPC vendors are going to be increasingly motivated to come up with new ways to build value on this consumer platform and make it a better fit for serious computing. This could mean some important advantages for users desperate for better performance from their data hierarchy.
