March 26, 2009
There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today's flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect's toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.
When I showed up at my first HPC gig in the early 1990s, our Crays had solid state disks, and they weren't even close to new then; semiconductor memory-based SSDs date back to the 1970s and 80s. But they were expensive, and they didn't really fit the cost-driven economics of the commodity-processor supercomputers that emerged in the mid-90s.
So what is an SSD? The Wikipedia entry says:
A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications.
Michael Cornwell, lead technologist for flash memory technology at Sun, offers a similar definition in fewer words: "An SSD is a non-rotating device that emulates what a disk drive does."
The reason for the renewed interest in this old idea comes down to money. The new generation of SSDs are being built from NAND flash components, the kind of nonvolatile memory used in everything from USB memory drives to cameras and iPods. Driven by the demand in the consumer market, SSD prices have dropped considerably. You can see this effect for yourself when you head down to Best Buy and find that you can buy a 4 GB flash drive for less than $15.00. Just a few years ago, that amount of flash memory would have cost you hundreds of dollars.
This demand also caused the flash memory industry to leapfrog the DRAM industry in terms of the size of the silicon process used to create the chips. David Flynn, chief technical officer and co-founder of Fusion-io, explains that all of this has come together to make NAND flash a very attractive storage option. "Flash memory costs less per bit (than DRAM), doesn't put off heat, and you can stack it vertically into packages and then stack the packages," putting a lot of bits in a very small space.
Flash-based SSDs have many inherent advantages over spinning disks that make them attractive to system architects. In addition to being dense and running relatively cool, they have no moving parts, and, unlike hard disk drives, flash-based SSDs can service between 10 and 20 operations at the same time, making them inherently parallel devices. Flash storage also typically has at least three orders of magnitude lower latency than traditional spinning drives (microseconds versus milliseconds).
Sun's Cornwell says that, as an example, Sun's recently announced SSD offers "thousands of IOPS, which is much greater than the 300 or so you can get from traditional hard disk drives." SSDs also offer substantial power savings, consuming an order of magnitude less power than hard disk drives.
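As a back-of-envelope illustration of what those IOPS figures mean in practice, consider how long each device takes to service a large batch of independent random reads. The workload size is an arbitrary example; the IOPS values are the ones quoted above:

```python
# Rough comparison of random-read service time, using the figures quoted
# in the article: ~300 IOPS for a hard disk vs. 35,000 IOPS for an SSD.

def seconds_for_random_reads(num_reads, iops):
    """Time to service num_reads independent random reads at a given IOPS rate."""
    return num_reads / iops

hdd_time = seconds_for_random_reads(1_000_000, 300)     # ~3,333 s, about 55 minutes
ssd_time = seconds_for_random_reads(1_000_000, 35_000)  # ~29 s

print(f"HDD: {hdd_time:.0f} s, SSD: {ssd_time:.0f} s")
```

The two-orders-of-magnitude gap in IOPS translates directly into a two-orders-of-magnitude gap in time for random-access-bound workloads.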
Sounds great, so let's pull out all the disks and replace them with SSDs, right? Not so fast, says Jimmy Daley, HP's Industry Standard Server (ISS) marketing manager. First of all, cost is an issue. While flash-based SSDs are much faster than traditional spinning disks, they are also "an order of magnitude or two more expensive per GB than disk."
There are also other issues, like the disparity between read and write speeds. For example, Cornwell says Sun's SSD solution achieves 35,000 IOPS on read, but only 3,300 on write -- a big difference that may need to be considered, depending upon your application. On the other hand, Flynn maintains his company's ioDrive keeps write throughput within shouting distance of its read performance, and recent tests by Tom's Hardware seem to bear this out.
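The practical cost of that read/write asymmetry is easy to underestimate. A simple weighted-harmonic-mean sketch, using Sun's quoted figures, shows how even a small fraction of writes drags down effective throughput. (This model assumes operations are fully serialized, which real devices are not; it is an illustration, not a device model.)

```python
def mixed_iops(read_iops, write_iops, read_fraction):
    """Effective IOPS for a mixed workload, assuming each operation
    takes 1/IOPS seconds and operations are serviced one at a time."""
    write_fraction = 1.0 - read_fraction
    avg_time_per_op = read_fraction / read_iops + write_fraction / write_iops
    return 1.0 / avg_time_per_op

# Sun's quoted figures: 35,000 read IOPS, 3,300 write IOPS.
# With just 10% writes, effective throughput is roughly halved:
print(round(mixed_iops(35_000, 3_300, 0.9)))  # → 17852
```

The slow operation dominates the average, which is why the write side of the spec sheet deserves as much scrutiny as the headline read number.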
System designers also need to consider that flash devices have unknown performance characteristics in the enterprise. The cells that are used to store the bits in NAND flash can only be rewritten a fixed number of times. This hasn't typically been a problem in the consumer space, where the duty cycle can be as low as 0.2 or 0.5 percent. While flash memory vendors are addressing this issue with wear leveling algorithms and other, more innovative approaches, we still do not know how these durability characteristics will impact performance in the enterprise, where duty cycles can be 100 times greater.
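The idea behind wear leveling is conceptually simple: spread erase/write cycles evenly across the device's erase blocks so that no single block exhausts its rewrite budget long before the others. The toy sketch below is hypothetical and greatly simplified (real controllers must also remap logical addresses, migrate static data, and handle bad blocks):

```python
# A minimal, illustrative wear-leveling policy: always direct the next
# erase/write cycle at the least-worn block.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        """Choose the block with the lowest erase count and charge one cycle to it."""
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
for _ in range(8):
    wl.pick_block()
print(wl.erase_counts)  # wear is spread evenly: [2, 2, 2, 2]
```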
So, where does flash fit in HPC? First, there are the obvious density and power advantages that could have a big impact on the viability of putting a specific system into a specific facility. Also, many vendors are thinking in terms of using flash-based SSDs to replace spinning disks used for scratch space on high performance computers. This approach gives each of a system's processors much faster access to data during computations, when time is of the essence. Being able to read data so much faster could be key to enabling the growing class of data-intensive applications. It could also make, for example, traditional application checkpoint/restart practical on a larger class of systems than it is today.
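To see why flash bandwidth matters for checkpoint/restart, consider a rough estimate of the time to dump a machine's memory image to scratch storage. The memory size and bandwidth figures below are assumed purely for illustration:

```python
def checkpoint_seconds(memory_gb, bandwidth_gb_s):
    """Time to write a full-memory checkpoint at a given sustained write bandwidth."""
    return memory_gb / bandwidth_gb_s

# Illustrative only: a machine with 10 TB of aggregate memory checkpointing
# to a 1 GB/s disk-based filesystem vs. a 10 GB/s flash pool.
disk_time  = checkpoint_seconds(10_000, 1)   # 10,000 s, about 2.8 hours
flash_time = checkpoint_seconds(10_000, 10)  # 1,000 s, about 17 minutes
```

When a checkpoint takes hours, operators take them rarely or not at all; cutting the dump time by an order of magnitude is what moves checkpointing from impractical to routine on large systems.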
But there are other places where flash-based memory devices might have an even bigger impact in HPC. Fusion-io's Flynn thinks in terms of balance, and in particular in terms of how imbalances have driven system designers to compensate.
For Flynn, the growing disparity between access times for data on disk versus data closer to the CPU has created "pathologies" in system design and user behavior. He observes that system designers have amassed large amounts of RAM so they can keep data near the CPU, and have amassed large numbers of disk spindles in complex parallel filesystems to improve bandwidth when data must be moved to or from secondary storage. He also sees the scale-out datacenter as a symptom of data access disparity: rather than plugging lots of RAM and disk into single systems, many smaller systems are aggregated to accomplish the same thing.
"But the most pernicious pathology," Flynn says, "occurs when application specialists spend hours tuning applications to effectively manage data flow. Inevitably, this leads to very brittle applications that have to be re-tuned when moving from one system to another."
Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that "balanced systems lead to cost effective throughput." Fusion-io's device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.
"You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance," says Flynn. Why does this matter? He gave NASTRAN as a customer example, in which jobs that took three days to run would complete in six hours on the same system, with no change in the application, after the installation of the flash device.
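Flynn's aggregate figures imply per-card numbers that are easy to check with simple arithmetic on the values quoted above:

```python
# Per-card figures implied by Flynn's quoted aggregates:
# 15 cards delivering 10 GB/s, 10 TB, and one million IOPS.
cards = 15
per_card_bw   = 10 / cards           # ~0.67 GB/s per card
per_card_cap  = 10_000 / cards       # ~667 GB per card
per_card_iops = 1_000_000 / cards    # ~67,000 IOPS per card
```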
Despite the promise of faster performance for less power, there are still significant hurdles to be cleared before flash-based SSDs achieve broad deployment in either enterprise or supercomputing datacenters. The read/write disparity needs to be addressed in a way that doesn't compromise the current power advantages of flash, and questions of durability and reliability with the high duty cycles of enterprise-grade equipment still need to be addressed.
But one thing we have seen in HPC over the past 20 years is that volume wins, and the forces driving the volume adoption of flash-based storage in the consumer market aren't slowing down. As prices continue to fall, HPC vendors are going to be increasingly motivated to come up with new ways to build value on this consumer platform and make it a better fit for serious computing. This could mean some important advantages for users desperate for better performance from their data hierarchy.