March 26, 2009
There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today's flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect's toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.
When I showed up at my first HPC gig in the early 1990s, our Crays had solid state disks, and they weren't even close to new then; semiconductor memory-based SSDs date back to the 1970s and 80s. But they were expensive, and they had no real place in the commodity-driven economics of the microprocessor-based supercomputers that emerged in the mid-90s.
So what is an SSD? The Wikipedia entry says:
A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications.
Michael Cornwell, lead technologist for flash memory technology at Sun, offers a similarly concise definition: "An SSD is a non-rotating device that emulates what a disk drive does."
The reason for the renewed interest in this old idea comes down to money. The new generation of SSDs are being built from NAND flash components, the kind of nonvolatile memory used in everything from USB memory drives to cameras and iPods. Driven by the demand in the consumer market, SSD prices have dropped considerably. You can see this effect for yourself when you head down to Best Buy and find that you can buy a 4 GB flash drive for less than $15.00. Just a few years ago, that amount of flash memory would have cost you hundreds of dollars.
This demand also caused the flash memory industry to leapfrog the DRAM industry in terms of the size of the silicon process used to create the chips. David Flynn, chief technical officer and co-founder of Fusion-io, explains that all of this has come together to make NAND flash a very attractive storage option. "Flash memory costs less per bit (than DRAM), doesn't put off heat, and you can stack it vertically into packages and then stack the packages," putting a lot of bits in a very small space.
Flash-based SSDs have many inherent advantages over spinning disks that make them attractive to system architects. In addition to being dense and running relatively cool, they have no moving parts, and, unlike hard disk drives, flash-based SSDs can service between 10 and 20 operations at the same time, making them inherently parallel devices. Flash storage also typically offers latency orders of magnitude lower than traditional spinning drives (microseconds versus milliseconds).
Sun's Cornwell says that, as an example, Sun's recently announced SSD offers "thousands of IOPS, which is much greater than the 300 or so you can get from traditional hard disk drives." SSDs also offer substantial power savings, consuming an order of magnitude less power than hard disk drives.
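For intuition about where that gap comes from, here is a back-of-the-envelope sketch. A spinning disk's random IOPS are bounded by its mechanical latencies (average seek time plus half a rotation), while flash has no such mechanical floor. The seek time, RPM, and flash latency figures below are illustrative assumptions, not quoted specifications.

```python
# Back-of-the-envelope IOPS estimates. The seek time, RPM, and SSD latency
# below are illustrative assumptions, not quoted specifications.

avg_seek_ms = 2.0                       # assumed average seek, fast 15K drive
rpm = 15_000
avg_rotational_ms = 0.5 * 60_000 / rpm  # half a revolution: 2.0 ms

service_time_ms = avg_seek_ms + avg_rotational_ms
hdd_iops = 1_000 / service_time_ms      # one random I/O per service time
print(f"Spinning disk, random IOPS: ~{hdd_iops:.0f}")   # ~250

ssd_latency_us = 100                    # assumed per-operation flash latency
ssd_iops = 1_000_000 / ssd_latency_us
print(f"Flash SSD, random IOPS:     ~{ssd_iops:.0f}")   # ~10,000
```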
Sounds great, so let's pull out all the disks and replace them with SSDs, right? Not so fast, says Jimmy Daley, HP's Industry Standard Server (ISS) marketing manager. First of all, cost is an issue. While flash-based SSDs are much faster than traditional spinning disks, they are also "an order of magnitude or two more expensive per GB than disk."
There are also other issues, like the disparity between read and write speeds. For example, Cornwell says Sun's SSD solution achieves 35,000 IOPS on read but only 3,300 on write -- a big difference that may need to be considered, depending on your application. On the other hand, Flynn maintains that his company's ioDrive keeps write throughput within shouting distance of its read performance, and recent tests by Tom's Hardware seem to bear this out.
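To see why that disparity matters in practice, consider a rough sketch of effective IOPS under a mixed workload: because per-operation times average, not rates, the slow write side drags the blend down quickly. The device figures are the Sun numbers quoted above; the 70/30 read/write mix is an illustrative assumption.

```python
# Effective IOPS for a mixed read/write workload. Time per operation is
# 1/IOPS; we average the times by workload fraction, then invert.
# The 70% read fraction is an assumed, illustrative mix.

read_iops, write_iops = 35_000, 3_300   # Sun's figures quoted above
read_fraction = 0.70

avg_time = read_fraction / read_iops + (1 - read_fraction) / write_iops
effective_iops = 1 / avg_time

print(f"Effective IOPS at 70% reads: {effective_iops:,.0f}")  # ~9,000
```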
System designers also need to consider that flash devices have unproven durability characteristics in the enterprise. The cells used to store bits in NAND flash can only be rewritten a fixed number of times. This hasn't typically been a problem in the consumer space, where the duty cycle can be as low as 0.2 or 0.5 percent. While flash memory vendors are addressing the issue with wear leveling algorithms and other, more innovative approaches, we still do not know how these durability limits will play out in the enterprise, where duty cycles can be 100 times greater.
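For the curious, here is a minimal sketch of the idea behind wear leveling -- a toy flash translation layer, not any vendor's actual algorithm -- showing how remapping each write to the least-worn physical block spreads erases evenly even when software hammers the same few logical blocks.

```python
# Toy flash translation layer illustrating wear leveling (a sketch, not any
# vendor's algorithm). Logical blocks are remapped so writes land on the
# physical block with the fewest erases, spreading wear across the device.

class WearLevelingFTL:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.free_blocks = set(range(num_physical_blocks))
        self.mapping = {}          # logical block -> physical block

    def write(self, logical_block):
        # Pick the least-worn free physical block for this write.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        # Erase the previously mapped block and return it to the free pool.
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        self.mapping[logical_block] = target

ftl = WearLevelingFTL(num_physical_blocks=8)
for i in range(1000):
    ftl.write(i % 4)               # hammer only 4 logical blocks
print(ftl.erase_counts)            # erases end up spread across all 8 blocks
```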
So, where does flash fit in HPC? First, there are the obvious density and power advantages, which could have a big impact on whether a specific system fits into a specific facility. Also, many vendors are thinking in terms of using flash-based SSDs to replace the spinning disks used for scratch space on high performance computers. This approach gives each of a system's processors much faster access to data during computations, when time is of the essence. Being able to read data so much faster could be key to enabling the growing class of data-intensive applications. It could also make, for example, traditional application checkpoint/restart practical on a larger class of systems than it is today.
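Some rough checkpoint arithmetic illustrates the point: the time to dump a node's memory is roughly its size divided by the sustained write bandwidth of the scratch device. All of the figures below are assumptions, chosen only to show the scale of the difference.

```python
# Illustrative checkpoint/restart arithmetic; every figure here is an
# assumption, not a measurement.

node_memory_gb = 32                 # assumed RAM per node to be dumped
disk_write_mb_s = 80                # assumed single spinning disk
ssd_write_mb_s = 600                # assumed flash-based scratch device

def checkpoint_seconds(memory_gb, bandwidth_mb_s):
    # Checkpoint time ~= data volume / sustained write bandwidth.
    return memory_gb * 1024 / bandwidth_mb_s

print(f"To disk: {checkpoint_seconds(node_memory_gb, disk_write_mb_s):.0f} s")  # ~410 s
print(f"To SSD:  {checkpoint_seconds(node_memory_gb, ssd_write_mb_s):.0f} s")   # ~55 s
```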
But there are other places where flash-based memory devices might have an even bigger impact in HPC. Fusion-io's Flynn thinks in terms of balance, and in particular in terms of how imbalances have driven system designers to compensate.
For Flynn, the growing disparity between access times for data on disk and data closer to the CPU has created "pathologies" in system design and user behavior. He observes that system designers have amassed large amounts of RAM so that data can be kept near the CPU, and have ganged together large numbers of disk spindles in complex parallel filesystems to improve bandwidth when data must be moved to or from secondary storage. He also sees the scale-out datacenter as a symptom of the same disparity: rather than plugging lots of RAM and disk into single systems, many smaller systems are aggregated to accomplish the same thing.
"But the most pernicious pathology," Flynn says, "occurs when application specialists spend hours tuning applications to effectively manage data flow. Inevitably, this leads to very brittle applications that have to be re-tuned when moving from one system to another."
Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that "balanced systems lead to cost effective throughput." Fusion-io's device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.
"You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance," says Flynn. How does this matter? He gave NASTRAN as a customer example, in which jobs that took three days to run would complete in six hours on the same system and with no change in the application after the installation of the flash device.
Despite the promise of faster performance for less power, there are still significant hurdles to be cleared before flash-based SSDs achieve broad deployment in either enterprise or supercomputing datacenters. The read/write disparity needs to be addressed in a way that doesn't compromise the current power advantages of flash, and questions of durability and reliability with the high duty cycles of enterprise-grade equipment still need to be addressed.
But one thing we have seen in HPC over the past 20 years is that volume wins, and the forces driving the volume adoption of flash-based storage in the consumer market aren't slowing down. As prices continue to fall, HPC vendors are going to be increasingly motivated to come up with new ways to build value on this consumer platform and make it a better fit for serious computing. This could mean some important advantages for users desperate for better performance from their data hierarchy.