Back to the Future: Solid-State Storage in Cloud Computing

By Steve Campbell

May 28, 2010

Data continues to grow at an alarming rate across organizations of all sizes, and it's not just new data that's growing; nothing is thrown away. Growth in data is also fueled by the popularity of portal, search, media, and e-commerce sites such as Amazon, Yahoo, and eBay, and by the exponential growth of social networking sites such as Facebook and Twitter. In the enterprise, data is growing as companies make better use of business analytics and as traditional high-performance computing (HPC) runs an increasing number of data-intensive, complex simulations. Not only is data growing, but the way it is accessed is also changing. Data analysis, for example, has transitioned from a traditional batch-mode, reporting style to an ad-hoc, on-demand, real-time access model. The former is well supported by sequential scans over large volumes of data, but the latter requires random I/O, which is difficult to support with existing storage infrastructures.

The size of data stores and content on major Web sites is estimated to be quadrupling every 18 months, and the number of queries per terabyte is doubling every 18 months. Servers have grown significantly in capability and performance to meet these computing needs. Storage, however, has kept up with the demand for volume of data but not with the speed of access to that data.

The growth in servers, storage, and networking devices in the datacenter continues to push the envelope in space, power, cooling, and management. Hard-drive storage performance has not kept pace with server performance or networking bandwidth. The result is an imbalance of computing resources: I/O bottlenecks, under-utilized servers, and over-provisioning of storage, leading to excessive costs and storage sprawl.

The I/O bottleneck is not a new phenomenon; it did not happen overnight and has been a problem for decades. In the first half of the 1980s, Cray Research introduced an optional Solid-State Storage Device (SSD) for the Cray X-MP. There were several reasons for this move, perhaps the most important being the ability to stage critical application data and files (an early form of tiering) to reduce CPU wait time. The result was better CPU utilization and a significant improvement in time-to-solution for performance-starved petroleum, aerospace, automotive, chemistry, and nuclear codes. Connected to the Cray X-MP/4 through two high-speed channels, the SSD enabled users to exploit existing applications and develop new algorithms to solve larger and more sophisticated science and engineering problems, delivering better CPU utilization, better performance, and better price-performance.

The SSDs for the Cray X-MP were custom built and addressed the problem of providing high I/O bandwidth for long vectors, which do not cache well in processor caches. These early SSDs therefore remained exclusive and expensive. However, with industry acceptance and growth of standards-based architectures, specifically x86, the computing paradigm began to shift. The general-purpose nature of x86-based architectures, with their hierarchical caches, is easily served by I/O subsystems based on standard hard-drive technology. Today, flash technology is maturing and SSDs are on the rise. In addition, workloads are shifting to ad-hoc, on-demand access, driving the need for random I/O, which puts electro-mechanical hard drives under severe strain but is effectively serviced by SSDs.

During this same period, several system integrators worked with solid-state storage devices and developed application-specific solutions, for example in seismic processing, to boost overall system throughput by eliminating the I/O bottleneck. As the decade closed, the market began to evolve, with products becoming available for everything from PCs to large-scale UNIX servers.

The last two decades have seen tremendous changes in solid-state technology, fueled by the huge growth in mobile consumer devices such as MP3 players, digital cameras, media devices, and mobile phones, all the way up to multi-terabyte SSDs. Industry analyst firms IDC and Gartner are watching and predicting growth in the SSD technology segment, in particular SSD growth for enterprise-class computing.

What is Driving the SSD Growth?

There are several drivers behind the growth and adoption of SSD technology:

  • Poor performance of HDD technology, creating a performance gap.

  • Flash technology evolution, specifically NAND flash, as capacity increases and prices decline, improving the economics versus HDD.

  • Improving reliability of flash, with growing use of single-level cell (SLC) flash for enterprise-grade reliability.

  • Rapid availability of multi-core processor technology.

  • Growing awareness of datacenter power constraints as a result of storage sprawl.

  • New data-intensive workloads that access data randomly.

Performance Gap

Today there is a growing performance gap between the microprocessor and HDD storage devices. Over the past two decades, processing and networking performance have increased significantly compared to HDD performance, creating a gap between processing and network speeds and the I/O available through HDDs. To help compensate, IT managers typically add more external HDD devices and DRAM. Increasing DRAM enables systems to hold working sets in memory to avoid disk latency, and adding HDDs can increase throughput by allowing I/O operations to be performed in parallel, e.g., by striping across RAIDed HDDs. This helps to bridge the performance gap but creates an expensive and difficult-to-manage environment, with increased power consumption, greater rack space requirements, and higher TCO.

Enterprise servers running datacenter applications ranging from Web 2.0 to HPC to business analytics can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can only perform thousands of IOPS combined. HDDs are great for capacity and large blocks of sequential data but are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, CPUs sit under-utilized while they wait for data.
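To put that gap in perspective, a rough back-of-the-envelope calculation shows how many spindles it takes to match a random-I/O-heavy workload. The drive and workload figures below are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic only; the figures are assumed, not measured.
import math

target_iops = 200_000        # random IOPS demanded by a busy multi-core server (assumed)
hdd_iops = 180               # rough ceiling for one 15K RPM enterprise HDD (assumed)
ssd_iops = 100_000           # rough figure for one PCIe flash SSD (assumed)

hdds_needed = math.ceil(target_iops / hdd_iops)
ssds_needed = math.ceil(target_iops / ssd_iops)

print(f"HDDs needed (striped): {hdds_needed}")   # ~1,112 spindles
print(f"PCIe SSDs needed:      {ssds_needed}")   # 2 devices
```

Under these assumptions, the spindle count, enclosures, power, and management overhead scale with the IOPS shortfall rather than with capacity, which is exactly the over-provisioning and storage-sprawl problem described above.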

Solid-state storage devices based on flash memory are poised to disrupt the industry. Solid-state storage delivers a performance boost compared to HDD, closing the I/O gap between microprocessor and storage; flash brings Moore's Law to storage, whereas hard drives are bound by Newton's Law. Moore's Law describes the historical trend in computing hardware of doubling the number of transistors roughly every two years, and the capabilities of many digital electronic devices, from processor speed to the number of pixels in a digital camera, track it. Unlike HDD, solid-state storage will track Moore's Law. The CPU no longer waits for data, resulting in improved time-to-solution for performance-starved applications. Users will experience not only better time-to-solution but also reduced rack space, lower power, and increased server utilization, all leading to improved TCO. Furthermore, system reliability is improved because solid-state storage has no moving parts. Incorporating flash technology as a new tier in the storage hierarchy will dramatically reduce the CPU-to-storage bottleneck, smoothing the performance disparity across today's hierarchy of DRAM, disk, and tape.

There is no shortage of SSD solutions; most today are built in a disk form factor and interfaced with SATA, SAS, or Fibre Channel (FC). Today, HDD storage is connected as either direct-attached storage or network-attached storage. Whatever the connection, direct-attached storage is closest to the CPU and delivers both price ($/GB) and performance advantages; the same applies to SSDs. Directly connected SSDs can deliver 10x the performance of network-attached drives. SSDs based on the PCIe interface deliver the highest performance and lowest latency of all SSD interfaces, boosting performance by 10x or more compared to SAS- or FC-based SSDs. In other words, 100x the performance of network-attached storage is possible. Such high-end performance creates the opportunity for a new "Tier-0" in the storage hierarchy, delivering high bandwidth and low latency to accelerate high-performance workloads. Tier-0 is a storage tier optimized specifically for high-performance workloads that benefit from flash memory and the PCIe interconnect. PCIe-attached SSDs are predicted to show the strongest growth in the enterprise within this technology market segment.
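The rough orders of magnitude behind those multipliers can be sketched as follows. The latency figures are generic order-of-magnitude assumptions for illustration, not vendor specifications:

```python
# Illustrative latency comparison; numbers are order-of-magnitude assumptions.
access_latency_us = {
    "network-attached HDD array": 5000,   # ~ milliseconds over the network
    "direct-attached SAS/FC SSD": 500,    # ~ hundreds of microseconds
    "PCIe-attached SSD (Tier-0)": 50,     # ~ tens of microseconds
}

baseline = access_latency_us["network-attached HDD array"]
for tier, lat in access_latency_us.items():
    print(f"{tier:28s} {lat:>5} us  ({baseline / lat:4.0f}x vs. baseline)")
```

With these assumed figures, the direct-attached SSD lands at roughly 10x the baseline and the PCIe device at roughly 100x, which is where the article's multipliers come from.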

Flash Technology

Developed in the 1980s, flash technology is low-cost, non-volatile computer memory that can be erased and reprogrammed electronically. Most people are familiar with some form of commercialized flash device, as it is now commonly used in cameras, music players, and cell phones. Advances in technology are now making it a strong storage option for the enterprise that can help fill the performance gap. A growing number of enterprises are using or evaluating flash SSDs for "Tier-0" data storage for several reasons: bandwidth and latency of access to data, IOPS per watt, and IOPS per dollar.

The use of NAND flash technology in SSDs is commonplace, with more than 100 vendors offering SSD products. Beware, though: not all NAND flash is created equal. NAND flash is available in two technologies: single-level cell (SLC) and multi-level cell (MLC). MLC stores two or more bits per cell, compared to one bit per cell for SLC, so MLC is higher density and lower cost than SLC. MLC is common in consumer devices such as MP3 players, cameras, mobile phones, and USB thumb drives. SLC, on the other hand, is faster and more reliable, making it ideal for enterprise datacenters.

What about Reliability?

Matching performance and capacity to user requirements is key to choosing the right solution. Nevertheless, there is more to it than performance, capacity, and price. A key difference between SLC and MLC is the higher write-cycle durability of SLC: roughly 100K writes per cell for SLC versus 5-10K writes per cell for MLC. For enterprise-class applications, this is a significant difference and advantage. SLC delivers roughly 10x better endurance and lifetime use at a lower cost of ownership. Enterprise environments demand 24x7 operation with large IOPS throughput. An MLC-based solution in true enterprise computing would need to be replaced every few months to keep up with the demanding IOPS and 24x7 reliability, increasing the cost of ownership. SLC is the right answer for enterprise performance and reliability.
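A simple endurance estimate illustrates why the write-cycle difference matters. The drive size, write rate, and write-amplification factor below are assumptions chosen only to make the arithmetic concrete:

```python
# Illustrative endurance arithmetic; all inputs are assumptions, not specs.
def lifetime_years(capacity_gb, cycles_per_cell, daily_writes_gb, write_amplification=2.0):
    """Years until the cells' rated write cycles are exhausted, assuming
    perfect wear leveling spreads writes evenly across the whole device."""
    total_writable_gb = capacity_gb * cycles_per_cell / write_amplification
    return total_writable_gb / (daily_writes_gb * 365)

capacity = 100            # GB of flash in the device (assumed)
daily_writes = 2_000      # GB written per day by a 24x7 enterprise workload (assumed)

print(f"SLC (100K cycles): {lifetime_years(capacity, 100_000, daily_writes):5.1f} years")
print(f"MLC (  5K cycles): {lifetime_years(capacity,   5_000, daily_writes):5.1f} years")
```

Under these assumptions the MLC device wears out in roughly four months while the SLC device lasts for years, which is the cost-of-ownership argument made above.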

Impact of Microprocessor Architectures

Without a change to storage architecture and technology, the server I/O performance gap will continue to widen. With the availability of multi-core x86 processors from Intel and AMD, the gap widens even further. These advanced microprocessors deliver higher clock rates and more cores each year. Intel's Nehalem-EX, available today with up to eight cores per processor, makes a four-socket server a very potent high-performance platform. Intel's next-generation processor microarchitecture, Sandy Bridge, will be available sometime in 2011. Built on Intel's 32-nanometer technology, Sandy Bridge will no doubt offer more cores, more performance, higher speeds, and PCIe interconnect slots, making it an ideal host for Tier-0 storage based on advanced PCIe form factors that deliver high bandwidth, low latency, and high reliability.

There are numerous providers of SAS and SATA SSD technology; most storage vendors have an SSD offering. Essentially these are drop-in replacements for HDDs and will, in most cases, deliver increased performance, a smaller footprint, higher transfer rates, and improved IOPS. Such SSDs go only so far in solving the I/O bottleneck problem, however, because they are connected to the server via relatively slow interconnects. PCIe SSD devices, on the other hand, deliver the highest I/O performance possible. The PCIe world includes vendors such as Fusion-io and Texas Memory Systems, as well as emerging companies such as Virident Systems.

SSDs are Not Created Equal

While it is true that SSDs deliver much higher performance than HDDs, not all SSDs deliver the same level of IOPS, nor do they deliver it with the same degree of predictability. The current crop of SSDs shows high performance in the early stages of use, but performance deteriorates depending on the workload (e.g., concurrent reads and writes, large numbers of I/O requests) and on how full the drive is. The software drivers of these SSDs are the key here, as they manage the flash, specifically wear leveling and garbage collection.

Wear leveling ensures that writes to flash are spread out over all the available cells. This is required due to the limited number of write cycles of flash: 100K for SLC and 5-10K for MLC. Garbage collection, on the other hand, deals with an inherent property of flash: writes must be preceded by erasure of a large block of the flash, because flash does not support in-place writes the way memory does. SSD drivers therefore have to juggle flash blocks behind the scenes to fulfill I/O requests from applications while collecting flash blocks marked for erasure. This is done by reserving, or over-provisioning, flash for garbage collection; e.g., a 100GB SSD may actually use 150GB of "raw" flash capacity, giving the driver 50GB of scratch capacity to manage garbage collection. These characteristics of flash require a "flash translation layer" (FTL), which presents a standard block-device view to the application while moving physical blocks around for wear leveling and garbage collection.
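The mechanics are easier to see in a toy model. The sketch below is a deliberately simplified illustration of the ideas described above (out-of-place writes, wear-aware placement, over-provisioned spare capacity, and background reclamation); the class, page counts, and policies are invented for illustration and do not represent any vendor's actual FTL:

```python
# Toy flash translation layer (FTL) sketch; simplified assumptions throughout.
class ToyFTL:
    def __init__(self, logical_pages, raw_pages, erase_limit=100_000):
        assert raw_pages > logical_pages          # over-provisioned "raw" flash
        self.map = {}                             # logical page -> physical page
        self.free = list(range(raw_pages))        # physical pages ready to accept a write
        self.stale = set()                        # physical pages awaiting erasure
        self.erases = [0] * raw_pages             # per-page wear counters
        self.erase_limit = erase_limit

    def write(self, lpage):
        """Flash cannot overwrite in place: write to a fresh page, mark the old one stale."""
        if lpage in self.map:
            self.stale.add(self.map[lpage])
        if not self.free:
            self._garbage_collect()
        if not self.free:
            raise RuntimeError("device worn out")
        # Wear leveling: prefer the least-erased free page.
        ppage = min(self.free, key=lambda p: self.erases[p])
        self.free.remove(ppage)
        self.map[lpage] = ppage

    def _garbage_collect(self):
        """Reclaim stale pages by 'erasing' them back into the free pool."""
        for ppage in list(self.stale):
            self.erases[ppage] += 1
            if self.erases[ppage] < self.erase_limit:   # retire worn-out pages
                self.free.append(ppage)
            self.stale.discard(ppage)

# A "100GB" logical device backed by "150GB" of raw flash (counted in pages here):
ftl = ToyFTL(logical_pages=100, raw_pages=150)
for i in range(1000):
    ftl.write(i % 100)                            # repeatedly rewrite the same logical range
print("max erases on any page:", max(ftl.erases))
```

In this toy, the 50 spare pages give the driver room to keep accepting writes while stale pages wait to be erased; shrink the spare pool or fill the device and garbage collection runs more often, which is one reason real SSDs slow down as they fill up.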

The measure of goodness of an SSD then becomes how well its driver manages the flash while delivering steady, predictable IOPS with a minimal reserve of flash and using as few system resources (CPU cycles, system memory) as possible.
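One way to turn that measure of goodness into a number is to look at the spread of IOPS samples over a long run at steady state; the sample values below are invented purely to show the calculation:

```python
# Illustrative sketch, not a benchmark tool: per-second IOPS samples from a long
# random-write run (e.g., collected with a benchmarking utility) are summarized
# as a sustained mean and a coefficient of variation. Values are hypothetical.
from statistics import mean, pstdev

iops_samples = [98500, 97200, 99100, 96800, 98900, 97500]   # hypothetical samples

steady_iops = mean(iops_samples)
variability = pstdev(iops_samples) / steady_iops             # coefficient of variation

print(f"sustained IOPS ~ {steady_iops:,.0f}, variability {variability:.1%}")
```

A drive whose variability stays low even when nearly full, while reserving little extra flash and consuming few host CPU cycles, scores well on this measure.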

Enterprise Solid-State Devices – Tier-0

Solid-state devices based on flash and PCIe are emerging as a new class of enterprise storage option: Tier-0. Tier-0 is a storage tier optimized specifically for high-performance workloads, which benefit the most from flash memory. By implementing a Tier-0 solution, specific data sets can be moved to higher-performance, flash-based storage platforms, resulting in dramatic improvements in application throughput. Access to data in Tier-0 occurs at near-memory speeds and is focused on making applications run faster and more predictably. The fast read and write performance of NAND flash, ever-decreasing price points, very low power consumption, and increasing reliability are the foundations of this disruptive solution for the performance-starved workloads of the datacenter.

Applications running on current multicore, multisocket servers will no longer be starved for performance by slow HDD storage subsystems. PCIe-based SSD Tier-0 will rebalance servers and storage, creating an optimized solution to the I/O bottleneck experienced today.

This Tier-0 storage will provide users with several capabilities, including sustained, predictable performance for the lifetime of the product; enterprise-class reliability so that data is never lost; and field upgradeability that does not require replacing PCIe cards. Finally, a Tier-0 solution needs to be affordable. While flash memory is more expensive per gigabyte than HDD, flash memory costs are decreasing significantly year over year. As electricity costs continue to increase and flash prices decrease, the relative cost per gigabyte and cost per IOPS of flash is continually improving. Flash out-performs hard drives by at least an order of magnitude, making the effective cost per IOPS of Tier-0 flash extremely attractive.
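A rough cost comparison shows why the economics favor flash for IOPS even while HDD wins on raw capacity. The 2010-era prices and performance figures below are assumptions for illustration only:

```python
# Illustrative cost-per-IOPS arithmetic; prices and IOPS figures are assumed.
def dollars_per_iops(device_cost, device_iops):
    return device_cost / device_iops

hdd_cost, hdd_iops = 300, 180          # 15K RPM enterprise HDD (assumed figures)
ssd_cost, ssd_iops = 10_000, 100_000   # PCIe SLC flash card (assumed figures)

print(f"HDD:      ${dollars_per_iops(hdd_cost, hdd_iops):5.2f} per IOPS")
print(f"PCIe SSD: ${dollars_per_iops(ssd_cost, ssd_iops):5.2f} per IOPS")
```

On a per-gigabyte basis HDD remains far cheaper, which is why Tier-0 complements rather than replaces the existing capacity tiers.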

In conclusion, all the pieces are in place for a true enterprise-class SSD-based Tier-0 storage hierarchy:

  • High-performance multicore, multisocket servers.

  • A PCIe form factor delivering the highest possible bandwidth and the lowest latency.

  • SLC NAND flash for high, sustained lifetime performance.

  • SLC NAND flash delivering true enterprise-class five-nines reliability and field serviceability.

  • Sophisticated and transparent software for garbage collection and wear leveling.

It is also important to point out that superior technology, by itself, does not guarantee success. The winners in flash-based Tier-0 storage will be those vendors who can provide all the performance benefits of enterprise-class flash while plugging into existing storage infrastructures and usage models to deliver the same reliability and manageability that users have come to expect from established enterprise storage solutions.
