Back to the Future: Solid-State Storage in Cloud Computing

By Steve Campbell

May 28, 2010

Data continues to grow at an alarming rate across organizations of all sizes, and it’s not just new data that’s growing; nothing is thrown away. Growth in data is also fueled by the popularity of portal, search, media, and e-commerce sites such as Amazon, Yahoo, and eBay, and by the exponential growth of social networking sites such as Facebook and Twitter. In the enterprise, data is growing as companies make better use of business analytics and as traditional high-performance computing (HPC) runs an increasing number of data-intensive, complex simulations. Not only is data growing, but the way it is accessed is also changing. Data analysis, for example, has transitioned from a traditional batch-mode, reporting style to an ad-hoc, on-demand, real-time access model. The former is well supported by sequential scans over large volumes of data, but the latter requires random I/O, which is difficult to support with existing storage infrastructures.

The size of data stores and content on major Web sites is estimated to be quadrupling every 18 months, and the number of queries per terabyte is doubling every 18 months. Servers have grown significantly in capability and performance to meet these computing needs. Storage, however, has kept pace with the demand for capacity but not with the speed of access to that data.

The growth in servers, storage, and networking devices in the datacenter continues to push the envelope in space, power, cooling, and management. The performance of hard drives has not kept pace with server performance or networking bandwidth. The result is an imbalance of computing resources: I/O bottlenecks, under-utilized servers, and over-provisioning of storage, leading to excessive costs and storage sprawl.

The I/O bottleneck is not a new phenomenon; it did not happen overnight and has been a problem for decades. In the first half of the 1980s, Cray Research introduced an optional Solid State Storage Device (SSD) for the Cray X-MP. There were several reasons for this move, perhaps the most important being the ability to stage critical application data and files, an early form of tiering, to reduce CPU wait time. The result was better CPU utilization and a significant improvement in time-to-solution for performance-starved petroleum, aerospace, automotive, chemistry, and nuclear codes. Connected to the Cray X-MP/4 through two high-speed channels, the SSD enabled users to exploit existing applications and develop new algorithms to solve larger and more sophisticated science and engineering problems. By eliminating CPU wait time, the device delivered better CPU utilization, better performance, and better cost-performance.

The SSDs for the Cray X-MP were custom built and addressed the problem of providing high I/O bandwidth for long vectors, which do not cache well in processor caches. These early SSDs therefore remained exclusive and expensive. However, with the industry acceptance and growth of standards-based architectures, specifically x86, the computing paradigm began to shift. The general-purpose nature of x86-based architectures, with their hierarchical caches, is easily served by an I/O subsystem based on standard hard-drive technology. Now flash technology is maturing and SSDs are on the rise. In addition, workloads are shifting to ad-hoc, on-demand access, driving the need for random I/O, which puts electro-mechanical hard drives under severe strain but is effectively serviced by SSDs.

During this same period, several system integrators worked with solid-state storage devices and developed application-specific solutions, for example in seismic processing, to boost overall system throughput by eliminating the I/O bottleneck. As the decade closed, the market began to evolve, with products becoming available for everything from PCs to large-scale UNIX servers.

The last two decades have seen tremendous changes in solid-state technology, fueled by the huge growth in mobile consumer devices such as MP3 players, digital cameras, media devices, and mobile phones, and extending all the way to multi-terabyte SSDs. Industry analyst firms IDC and Gartner are watching and predicting growth in the SSD technology segment and, in particular, SSD growth for enterprise-class computing.

What is Driving the SSD Growth?

There are several drivers behind the growth and adoption of SSD technology:

  • Poor performance of HDD technology, creating a growing performance gap.
     
  • Flash technology evolution, specifically NAND flash: as capacity increases and prices decline, the economics improve versus HDD.
     
  • Improving reliability of flash, with growing use of single-level cell (SLC) flash for enterprise-grade reliability.
     
  • Rapidly increasing availability of multi-core processor technology.
     
  • Growing awareness of power constraints in the datacenter as a result of storage sprawl.
     
  • New data-intensive workloads that access data randomly.

Performance Gap

Today there is a growing performance gap between the microprocessor and HDD storage devices. Over the past two decades, processor and networking performance have increased significantly compared to HDD performance, creating a gap between the compute and network capabilities of servers and the I/O available from HDDs. To help compensate, IT managers typically add more external HDDs and DRAM to speed up throughput. Increasing DRAM enables systems to keep working sets in memory and avoid disk latency, and adding HDDs can increase throughput by allowing I/O operations to be performed in parallel, e.g., by striping across RAIDed HDDs. This helps bridge the performance gap but creates an expensive, difficult-to-manage environment, along with increased power and rack-space requirements and higher TCO.

Enterprise servers running datacenter applications, from Web 2.0 to HPC to business analytics, can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can deliver only thousands of IOPS combined. HDDs are great for capacity and large blocks of sequential data but are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, CPUs sit under-utilized as they wait for data.
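
To put the gap in concrete terms, here is a back-of-the-envelope sizing sketch in Python. The per-device IOPS figures and the application demand are illustrative assumptions, roughly in line with the magnitudes cited above, not measurements of any particular product.

    import math

    # Assumed figures for illustration only: a busy server generating random I/O,
    # a 15K RPM enterprise HDD, and an enterprise flash SSD.
    APP_DEMAND_IOPS = 200_000   # random-I/O demand of the application mix
    HDD_IOPS = 200              # random 4K IOPS per spinning drive
    SSD_IOPS = 100_000          # random 4K IOPS per flash SSD

    hdds_needed = math.ceil(APP_DEMAND_IOPS / HDD_IOPS)
    ssds_needed = math.ceil(APP_DEMAND_IOPS / SSD_IOPS)

    print(f"HDD spindles needed (striped): {hdds_needed}")   # 1,000 drives
    print(f"Flash SSDs needed:             {ssds_needed}")   # 2 devices

Under these assumed numbers, matching the random-I/O demand with spinning disks alone takes on the order of a thousand spindles, which is exactly the over-provisioning and storage sprawl described above.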

Solid-state storage devices based on flash memory are poised to disrupt the industry. Solid-state storage delivers a performance boost compared to HDD, closing the I/O gap between microprocessor and storage; flash brings Moore’s Law to storage, versus the Newton’s Law that governs spinning disks. Moore’s Law describes the historical trend in computing hardware of doubling the number of transistors roughly every two years, and the capabilities of many digital electronic devices are linked to it, from processing speed to the number and size of pixels in a digital camera. Unlike HDD, solid-state storage will track Moore’s Law. The CPU no longer waits for data, resulting in improved time-to-solution for performance-starved applications. Users will experience not only better time-to-solution but also reduced rack space, lower power, and increased server utilization, all leading to improved TCO. Furthermore, system reliability is improved, as solid-state storage has no moving parts. Incorporating flash as a new tier in the storage hierarchy will dramatically reduce the CPU-to-storage bottleneck and smooth the performance disparity in today’s hierarchy of DRAM, disk, and tape.

There is no shortage of SSD solutions; most today are built in a disk-drive form factor and interfaced with SATA, SAS, or Fibre Channel (FC). HDD storage today is connected either as “direct-attached storage” or as “network-attached storage.” Whatever the connection, direct-attached storage sits closest to the CPU and delivers both price ($/GB) and performance advantages; the same applies to SSD. Directly connected SSDs can deliver 10x the performance of network-attached drives. SSDs based on the PCIe interface deliver the highest performance and lowest latency of all SSD interfaces, boosting performance by 10x or more compared to SAS or FC-based SSDs; in other words, 100x the performance of network-attached storage is possible. Such high-end performance creates the opportunity for a new “Tier-0” in the storage hierarchy, delivering high bandwidth and low latency to accelerate high-performance workloads. Tier-0 is a storage tier optimized specifically for high-performance workloads that benefit from flash memory and the PCIe interconnect. PCIe-attached SSD in the enterprise is predicted to be the strongest-growing part of this technology market segment.
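
A minimal sketch of how those ratios stack up, treating the 10x and 100x figures above as rough effective speedups over a network-attached baseline. The baseline latency is an assumed number chosen only to make the ratios tangible.

    # Relative positioning of storage tiers using the 10x / 100x ratios cited above.
    BASELINE_LATENCY_US = 5000  # assumed effective latency of network-attached HDD storage

    tiers = [
        ("Network-attached storage (baseline)", 1),
        ("Direct-attached SAS/FC SSD",          10),   # ~10x the baseline
        ("PCIe-attached SSD (Tier-0)",          100),  # ~10x SAS/FC SSD, ~100x baseline
    ]

    for name, speedup in tiers:
        effective_us = BASELINE_LATENCY_US / speedup
        print(f"{name:38s} ~{speedup:4d}x   ~{effective_us:6.0f} µs effective latency")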

Flash Technology

Developed in the 1980s, flash is low-cost, non-volatile computer memory that can be erased and reprogrammed electronically. Most people are familiar with some form of commercialized flash device, as it is now commonly used in cameras, music players, and cell phones. Advances in the technology are now making it a strong storage option for the enterprise that can help fill the performance gap. A growing number of enterprises are using or evaluating flash SSD for “Tier-0” data storage for a few reasons: bandwidth and latency of access to data, IOPS per watt, and IOPS per dollar.

The use of NAND flash technology in SSDs is commonplace, with more than 100 vendors offering SSD products. Beware, though: not all NAND flash is created equal. NAND flash is available in two technologies: single-level cell (SLC) and multi-level cell (MLC). MLC stores two bits or more in a single cell compared to one bit per cell for SLC, making it higher density and lower cost than SLC. MLC is common in consumer devices such as MP3 players, cameras, mobile phones, and USB thumb drives. SLC, on the other hand, is faster and more reliable, making it ideal for enterprise datacenters.

What about Reliability?

Matching performance and capacity to the user requirement is key to choosing the right solution. Nevertheless, there is more to it than performance, capacity, and price. A key difference between SLC and MLC is the higher write-cycle durability of SLC: roughly 100K writes per cell for SLC, versus 5-10K per cell for MLC. For enterprise-class applications, this is a significant difference and advantage. SLC delivers roughly 10x better reliability and lifetime use at a lower cost of ownership. Enterprise environments demand 24×7 operation with high IOPS throughput; an MLC-based solution in true enterprise computing would need to be replaced every few months to keep up with the demanding IOPS and 24×7 reliability, increasing the cost of ownership. SLC is the right answer for enterprise performance and reliability.
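
A rough endurance estimate makes the difference concrete. The sketch below assumes ideal wear leveling (writes spread evenly over all cells) and uses the per-cell cycle counts above; the drive capacity, daily write volume, and write-amplification factor are illustrative assumptions.

    # Rough drive-lifetime estimate under a constant write load, assuming ideal
    # wear leveling. Cycle counts follow the SLC/MLC figures above; the workload
    # and write-amplification factor are assumptions for illustration.

    def lifetime_years(capacity_gb, cycles_per_cell, daily_writes_gb, write_amp=1.5):
        total_write_budget_gb = capacity_gb * cycles_per_cell   # total data the flash can absorb
        effective_daily_gb = daily_writes_gb * write_amp        # host writes inflated by garbage collection
        return total_write_budget_gb / effective_daily_gb / 365

    CAPACITY_GB = 100
    DAILY_WRITES_GB = 500   # assumed sustained enterprise write load

    for label, cycles in [("SLC, 100K cycles", 100_000),
                          ("MLC,  10K cycles", 10_000),
                          ("MLC,   5K cycles", 5_000)]:
        print(f"{label}: ~{lifetime_years(CAPACITY_GB, cycles, DAILY_WRITES_GB):.1f} years")

Under these assumptions the SLC device outlives any reasonable service life, while the MLC device wears out in a year or two, and proportionally faster under heavier write loads.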

Impact of Microprocessor Architectures

Without a change in storage architecture and technology, the server I/O performance gap will continue to widen. With the availability of multi-core x86 processors from Intel and AMD, the gap widens even further. These advanced microprocessors deliver higher clock rates and more cores each year. Intel’s Nehalem-EX, today, offers up to eight cores per processor and, in a four-socket server, makes a very potent high-performance platform. Intel’s next-generation processor microarchitecture, Sandy Bridge, will be available sometime in 2011. Sandy Bridge will be built on Intel’s 32-nanometer technology and will no doubt offer more cores, more performance, higher speeds, and PCIe interconnect slots, making it an ideal host for Tier-0 storage built on the PCIe form factor and delivering high bandwidth, low latency, and high reliability.

There are numerous providers of SAS or SATA SSD technology; most storage vendors have an SSD offering. Essentially these are drop-in replacements for HDDs and will, in most cases, deliver increased performance, a smaller footprint, higher transfer rates, and improved IOPS. But such SSDs go only so far in solving the I/O bottleneck because they are connected to the server via slower interconnects. PCIe SSD devices, on the other hand, deliver the highest I/O performance possible. The PCIe world includes vendors such as Fusion-io, Texas Memory Systems, and emerging companies such as Virident Systems.

SSDs are Not Created Equal

While it is true that SSDs deliver much higher performance than HDDs, not all SSDs deliver the same level of IOPS, or with the same degree of predictability. The current crop of SSDs shows high performance in the early stages of use, but performance deteriorates depending on the workload (e.g., concurrent reads and writes, a large number of outstanding I/O requests) and how full the drive is. The software drivers of these SSDs are the key here, as they manage the flash, specifically wear leveling and garbage collection.

Wear leveling ensures that writes to flash are spread out over all the cells available. This is required because of the limited number of write cycles of flash: 100K for SLC and 5-10K for MLC. Garbage collection, on the other hand, deals with an inherent property of flash, which requires writes to be preceded by erasure of a large block of the flash; flash does not support in-place writes the way memory does. SSD drivers therefore have to juggle flash blocks behind the scenes to fulfill I/O requests from applications while collecting flash blocks marked for erasure. This is done by reserving, or over-provisioning, flash for garbage collection; e.g., a 100GB SSD may actually use 150GB of “raw” flash capacity, giving the driver 50GB of scratch capacity to manage garbage collection. These characteristics of flash entail a “flash translation layer” (FTL), which presents a standard block-device view to the application while moving physical blocks around for wear leveling and garbage collection.
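
The sketch below is a deliberately simplified, toy flash translation layer that illustrates the mechanics just described: a logical-to-physical mapping, erase-before-reuse, an over-provisioned pool of spare blocks, and wear leveling by always writing to the least-worn erased block. It is an illustration of the concepts only, not a model of any vendor’s driver.

    import heapq

    class ToyFTL:
        def __init__(self, logical_blocks, overprovision=0.5):
            # Over-provisioning: expose N logical blocks but keep extra raw blocks
            # in reserve for garbage collection (e.g., 100GB exposed, 150GB raw).
            physical_blocks = int(logical_blocks * (1 + overprovision))
            self.erase_counts = [0] * physical_blocks
            self.mapping = {}                      # logical block -> physical block
            # Free pool ordered by erase count, so the least-worn erased block is
            # always used next (a crude form of wear leveling).
            self.free_pool = [(0, pb) for pb in range(physical_blocks)]
            heapq.heapify(self.free_pool)
            self.gc_queue = []                     # stale blocks awaiting erasure

        def write(self, lba, data):
            # Flash cannot be overwritten in place: retire the old physical block
            # for this logical address and queue it for erasure.
            if lba in self.mapping:
                self.gc_queue.append(self.mapping[lba])
            _wear, pb = heapq.heappop(self.free_pool)   # least-worn erased block
            self.mapping[lba] = pb
            # (A real FTL would program `data` into the flash block here.)

        def garbage_collect(self):
            # Erase stale blocks in the background and return them to the free pool.
            while self.gc_queue:
                pb = self.gc_queue.pop()
                self.erase_counts[pb] += 1
                heapq.heappush(self.free_pool, (self.erase_counts[pb], pb))

    ftl = ToyFTL(logical_blocks=4)
    for i in range(20):
        ftl.write(i % 4, b"payload")   # rewrite the same few logical blocks
        ftl.garbage_collect()
    print("erase counts:", ftl.erase_counts)   # wear is spread across all physical blocks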

The measure of goodness of an SSD, then, becomes how well its driver manages flash while delivering steady, predictable IOPS with a minimal reserve of flash and consuming as few system resources (CPU cycles, system memory) as possible.
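
In practice that steadiness is evaluated with purpose-built benchmarking tools (fio, for example) issuing direct I/O over long runs. The rough sketch below only illustrates the shape of such a measurement: it issues random 4 KiB writes against a test file and reports IOPS per interval, so any drop-off as the device fills or as garbage collection kicks in becomes visible. The file path and sizes are assumptions, and because it goes through the filesystem and page cache it is not a rigorous device benchmark.

    import os, random, time

    PATH = "/tmp/ssd_test.bin"       # assumed location on the SSD under test
    FILE_SIZE = 256 * 1024 * 1024    # 256 MiB test region
    BLOCK = 4096
    INTERVALS, SECONDS_PER_INTERVAL = 5, 2

    with open(PATH, "wb") as f:      # pre-allocate the test region
        f.truncate(FILE_SIZE)

    buf = os.urandom(BLOCK)
    with open(PATH, "r+b", buffering=0) as f:
        for interval in range(INTERVALS):
            ops, end = 0, time.time() + SECONDS_PER_INTERVAL
            while time.time() < end:
                f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
                f.write(buf)
                os.fsync(f.fileno())          # force each write down to the device
                ops += 1
            print(f"interval {interval}: {ops / SECONDS_PER_INTERVAL:.0f} IOPS")

    os.remove(PATH)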

Enterprise Solid-State Devices – Tier-0

Solid-state devices based on flash and PCIe are emerging as a new class of enterprise storage option: Tier-0. Tier-0 is a storage tier optimized specifically for the high-performance workloads that can benefit the most from flash memory. By implementing a Tier-0 solution, specific data sets can be moved to higher-performance, flash-based storage, resulting in dramatic improvements in application throughput. Access to data in Tier-0 is at near-memory speeds and is focused on making applications run faster and more predictably. The fast read and write performance of NAND flash, ever-declining price points, very low power consumption, and increasing reliability are the foundations of this disruptive solution for the performance-starved workloads of the datacenter.

Applications running on current multicore, multisocket servers will no longer be starved for performance by slow HDD storage subsystems. A PCIe-based SSD Tier-0 will rebalance servers and storage, creating an optimized solution to the I/O bottleneck experienced today.

This Tier-0 storage will provide users with several capabilities, including sustained, predictable performance for the lifetime of the product; enterprise-class reliability so that data is never lost; and field upgradeability that does not require replacement of PCIe cards. Finally, a Tier-0 solution needs to be affordable. While flash memory is more expensive per gigabyte than HDD, flash costs are decreasing significantly year over year. As electricity costs continue to increase and flash prices decrease, the relative cost per gigabyte and cost per IOPS of flash keep improving. Flash out-performs hard drives by at least an order of magnitude, resulting in a cost for Tier-0 flash that is extremely attractive.
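
A quick worked comparison shows why the economics favor flash for IOPS-bound workloads even while raw $/GB still favors HDD. All prices and performance figures below are illustrative assumptions for a rough 2010-era comparison, not vendor quotes.

    # Quick $/GB versus $/IOPS comparison with assumed, illustrative figures.
    devices = {
        # name:                (price_usd, capacity_gb, random_iops)
        "Enterprise 15K HDD":  (300,        300,          200),
        "PCIe SLC flash SSD":  (3000,       100,      100_000),
    }

    for name, (price, gb, iops) in devices.items():
        print(f"{name:20s}  ${price / gb:7.2f}/GB   ${price / iops:7.3f}/IOPS")

Under these assumptions the HDD wins on $/GB by a wide margin, but the flash device is roughly 50x cheaper per IOPS, which is the metric that matters for the performance-starved workloads Tier-0 targets.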

In conclusion, all the pieces are in place for a true enterprise-class SSD-based Tier-0 storage hierarchy:

  • High performance multicore, multisocket servers.
     
  • PCIe form factor delivering the highest bandwidth possible and lowest latency.
     
  • SLC NAND flash for high, sustained performance over the product’s lifetime.
     
  • SLC NAND flash delivering true enterprise-class five-nines reliability and field serviceability.
     
  • Sophisticated and transparent software for garbage collection and wear leveling.

It is also important to point out that superior technology, by itself, does not guarantee success. The winners in broad flash-based Tier-0 storage will be those vendors who can provide all the performance benefits of enterprise-class flash while plugging into existing storage infrastructures and usage models to deliver the same reliability and manageability that users have come to expect from established enterprise storage solutions.
