Back to the Future: Solid-State Storage in Cloud Computing

By Steve Campbell

May 28, 2010

Data continues to grow at an alarming rate across organizations of all sizes, and it’s not just new data that’s growing; nothing is thrown away. Growth in data is also fueled by the popularity of portal, search, media, and e-commerce sites such as Amazon, Yahoo, and eBay, and by the exponential growth of social networking sites such as Facebook and Twitter. In the enterprise, data is growing as companies make better use of business analytics and as traditional high-performance computing (HPC) runs an increasing number of data-intensive, complex simulations. Not only is data growing, but the way it is accessed is also changing. Data analysis, for example, has transitioned from a traditional, batch-mode reporting style to an ad hoc, on-demand, real-time access model. The former is well supported by sequential scans over large volumes of data, but the latter requires random I/O, which is difficult to support with existing storage infrastructures.

The size of data stores and content on major Web sites is estimated to be quadrupling every 18 months, and the number of queries per terabyte is doubling every 18 months. Servers have grown significantly in capability and performance to meet these computing needs. Storage, however, has kept up with the demand for data volume but not with the speed of access to that data.

The growth in servers, storage, and networking devices in the datacenter continues to push the envelope in space, power, cooling, and management. Storage performance of hard drives has not kept pace with server performance or networking bandwidth. The result is an imbalance of computing resources: I/O bottlenecks, under-utilized servers, and over-provisioned storage, leading to excessive cost and storage sprawl.

The I/O bottleneck is not a new phenomenon; it did not happen overnight and has been a problem for decades. In the first half of the 1980s, Cray Research introduced an optional Solid-state Storage Device (SSD) for the Cray X-MP. There were several reasons for this move, one of which, and perhaps the most important, was the ability to stage critical application data and files, a form of tiering, to reduce CPU wait time. The result was better CPU utilization and a significant improvement in time-to-solution for performance-starved applications in petroleum, aerospace, automotive, chemistry, and nuclear codes. Connected to the Cray X-MP/4 through two high-speed channels, the SSD enabled users to exploit existing applications and develop new algorithms to solve larger and more sophisticated science and engineering problems, eliminating CPU wait time and delivering better CPU utilization, better performance, and better cost-performance.

The SSDs for the Cray X-MP were custom built and addressed the problem of providing high I/O bandwidth for long vectors, which do not cache well in processor caches. These early SSDs therefore remained exclusive and expensive. However, with the industry acceptance and growth of standards-based architectures, specifically x86, the computing paradigm began to shift. The general-purpose nature of x86-based architectures, with their hierarchical caches, is easily served by an I/O subsystem based on standard hard-drive technology. Flash technology is maturing and SSDs are on the rise. In addition, workloads are shifting to ad hoc, on-demand access, driving the need for random I/O, which puts electro-mechanical hard drives under severe strain but is effectively serviced by SSDs.

During this same period, several system integrators worked with solid-state storage devices and developed application-specific solutions, for example in seismic processing, to boost overall system throughput by eliminating the I/O bottleneck. As the decade closed, the market began to evolve, with products becoming available for everything from PCs to large-scale UNIX servers.

The last two decades have seen tremendous changes in solid-state technology, fueled by the huge growth in mobile consumer devices such as MP3 players, digital cameras, media devices, and mobile phones, and extending all the way to multi-terabyte SSDs. Industry analyst firms IDC and Gartner are watching and predicting growth in the SSD technology segment, and in particular, SSD growth for enterprise-class computing.

What is Driving the SSD Growth?

There are several drivers behind the growth and adoption of SSD technology:

  • Poor performance of HDD technology, creating a performance gap.
     
  • Flash technology evolution, specifically NAND flash, with increasing capacity and declining prices improving the economics versus HDD.
     
  • Reliability of flash, and the growing use of single-level cell (SLC) flash for enterprise-grade reliability.
     
  • Rapidly increasing availability of multi-core processor technology.
     
  • Growing awareness of power constraints in the datacenter as a result of storage sprawl.
     
  • New data-intensive workloads access data randomly.

Performance Gap

Today there is a growing performance gap between the microprocessor and HDD storage devices. Over the past two decades, processing and networking performance have increased significantly compared to HDD performance, creating a gap between the processing and networking capability of a server and the I/O available through HDDs. To help compensate, IT managers typically add more external HDD devices and DRAM to speed up throughput. Increasing DRAM enables systems to hold working sets in memory and avoid disk latency, and adding HDDs can increase throughput by allowing I/O operations to be performed in parallel, e.g., by striping across RAIDed HDDs. This helps to bridge the performance gap but creates an expensive, difficult-to-manage environment, with increased power consumption, greater rack space requirements, and higher TCO.

Enterprise servers running datacenter applications, ranging from Web 2.0 to HPC to business analytics, can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can only perform thousands of IOPS combined. HDDs are great for capacity and for large blocks of sequential data, but they are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, CPUs sit under-utilized as they wait for data.
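A back-of-the-envelope sketch makes the gap concrete. The per-device IOPS figures and the target workload below are illustrative assumptions, not measurements from any particular product:

```python
# Back-of-the-envelope sketch of the random-I/O gap described above.
# The per-device IOPS figures are illustrative assumptions, not measurements.

HDD_IOPS = 180          # assumed: one 15K RPM enterprise HDD, small random reads
SSD_IOPS = 100_000      # assumed: one PCIe flash device of the era
TARGET_IOPS = 500_000   # assumed: aggregate demand from a busy multi-socket server

hdds_needed = -(-TARGET_IOPS // HDD_IOPS)   # ceiling division
ssds_needed = -(-TARGET_IOPS // SSD_IOPS)

print(f"HDD spindles needed (striped/RAIDed): {hdds_needed}")
print(f"PCIe SSD devices needed:              {ssds_needed}")
```

Under these assumptions the random-I/O target takes thousands of striped spindles but only a handful of flash devices, which is the spindle-count arms race described above.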

Solid-state storage devices based on flash memory are poised to disrupt the industry. Solid-state storage delivers a performance boost compared to HDD, closing the I/O gap between microprocessor and storage; flash brings Moore’s Law to storage versus Newton’s Law. Moore’s Law describes the historical trend in computing hardware of doubling the number of transistors roughly every two years, and the capabilities of many digital electronic devices are linked to it, from processing speed to the number of pixels in a digital camera. Unlike HDD, solid-state storage will track Moore’s Law. The CPU no longer waits for data, resulting in improved time-to-solution for performance-starved applications. Users will experience not only better time-to-solution but also reduced rack space, lower power, and increased server utilization, all leading to improved TCO. Furthermore, system reliability is improved, as solid-state storage has no moving parts. Incorporating flash technology as a new tier in the storage hierarchy will dramatically reduce the CPU-to-storage bottleneck and smooth the performance disparity across today’s hierarchy of DRAM, disk, and tape.

There is no shortage of SSD solutions; most today are based on the disk form factor interfaced with SATA, SAS, or Fibre Channel (FC). Today, HDD storage is connected as either direct-attached storage or network-attached storage. No matter the connection, direct-attached storage is closest to the CPU and delivers both price ($/GB) and performance advantages; the same applies to SSD. Directly connected SSDs can deliver 10x the performance of network-attached drives. SSDs based on the PCIe interface deliver the highest performance and lowest latency of all SSD interfaces; PCIe-based SSDs boost performance by 10x or more compared to SAS or FC-based SSDs. In other words, 100x the performance of network-attached storage is possible. Such high-end performance creates the opportunity for a new “Tier-0” in the storage hierarchy, delivering high bandwidth and low latency to accelerate high-performance workloads. Tier-0 is a storage tier optimized specifically for high-performance workloads that benefit from flash memory and the PCIe interconnect. PCIe-attached SSD is predicted to see the strongest growth within this technology market segment.
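To see where those multipliers come from, a rough latency budget for a 4KB random read can be sketched. Every microsecond figure below is an illustrative assumption for hardware of this era, not a measured value:

```python
# Rough, illustrative latency budget (microseconds) for a 4KB random read.
# Every figure here is an assumption for illustration; real products vary widely.

latency_us = {
    "network-attached HDD array": 8000,  # assumed: network hop + queuing + seek + rotation
    "SAS/FC SSD":                  500,  # assumed: flash read + array controller + SAS/FC stack
    "PCIe SSD (Tier-0)":            50,  # assumed: flash read + host driver, no storage network
}

baseline = latency_us["network-attached HDD array"]
for tier, lat in latency_us.items():
    print(f"{tier:28s} {lat:>6} us   speedup vs. baseline: ~{baseline / lat:4.0f}x")
```

The exact numbers matter less than the structure: each hop removed from the data path, the storage network, the array controller, the disk-oriented protocol stack, takes a multiplicative bite out of latency.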

Flash Technology

Developed in the 1980s, flash is a low-cost, non-volatile computer memory that can be erased and reprogrammed electronically. Most people are familiar with some form of commercial flash device, as it is now commonly used in cameras, music players, and cell phones. Advances in the technology are now making it a strong storage option for the enterprise that can help fill the performance gap. A growing number of enterprises are using or evaluating flash SSD for “Tier-0” data storage for several reasons: bandwidth and latency of access to data, IOPS per watt, and IOPS per dollar.

The use of NAND flash technology in SSDs is commonplace, with more than 100 vendors offering SSD products. Beware, though: not all NAND flash is created equal. NAND flash is available in two technologies: single-level cell (SLC) and multi-level cell (MLC). MLC stores two bits or more in a single cell, compared to one bit per cell for SLC, so MLC is higher density and lower cost than SLC. MLC is common in consumer devices such as MP3 players, cameras, mobile phones, and USB thumb drives. SLC, on the other hand, is faster and more reliable, making it the better fit for enterprise datacenters.

What about Reliability?

Matching performance and capacity to user requirements is key to choosing the right solution. Nevertheless, there is more to it than performance, capacity, and price. A key difference between SLC and MLC is the higher write-cycle durability of SLC: roughly 100K writes per cell for SLC, versus 5-10K per cell for MLC. For enterprise-class applications, this is a significant difference and advantage; SLC delivers roughly 10x better endurance and lifetime use at a lower cost of ownership. Enterprise environments demand 24×7 operation with high IOPS throughput. An MLC-based solution in true enterprise computing would need to be replaced every few months to keep up with the demanding IOPS and 24×7 reliability, increasing the cost of ownership. SLC is the right answer for enterprise performance and reliability.
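A naive endurance estimate shows why the cycle counts matter. The cycle figures below follow the numbers cited above; the device capacity, daily write volume, and write amplification are illustrative assumptions:

```python
# Illustrative endurance estimate for SLC vs. MLC under a heavy write workload.
# Cycle counts follow the article; capacity, workload, and write amplification
# are assumptions for illustration only, and wear leveling is assumed perfect.

GB = 10**9

def drive_lifetime_years(capacity_gb, cycles_per_cell, daily_writes_gb,
                         write_amplification=2.0):
    """Naive lifetime estimate assuming writes are spread evenly over all cells."""
    total_write_budget = capacity_gb * GB * cycles_per_cell      # bytes the flash can absorb
    daily_writes = daily_writes_gb * GB * write_amplification    # bytes actually written per day
    return total_write_budget / daily_writes / 365

capacity_gb = 100        # assumed Tier-0 device size
daily_writes_gb = 2000   # assumed heavy 24x7 enterprise workload

print("SLC (100K cycles): %.1f years" %
      drive_lifetime_years(capacity_gb, 100_000, daily_writes_gb))
print("MLC (10K cycles):  %.1f years" %
      drive_lifetime_years(capacity_gb, 10_000, daily_writes_gb))
```

Under these assumptions the MLC device exhausts its write budget in well under a year, while the SLC device lasts for years, which is the replacement-cycle argument made above.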

Impact of Microprocessor Architectures

Without a change to storage architecture and technology, the server I/O performance gap will continue to widen. With the availability of multi-core x86 processors from Intel and AMD, the gap widens even further. These advanced microprocessors deliver higher clock rates and more cores each year. Intel’s Nehalem-EX today offers up to eight cores per processor and, in a four-socket server, makes a very potent high-performance platform. Intel’s next-generation processor microarchitecture, Sandy Bridge, will be available sometime in 2011. Sandy Bridge will be built on Intel’s 32-nanometer technology and will no doubt offer more cores, more performance, and higher speeds, along with PCIe interconnect slots, making it an ideal platform for Tier-0 storage based on the PCIe form factor, delivering high bandwidth, low latency, and high reliability.

There are numerous providers of SAS or SATA SSD technology; most storage vendors have an SSD offering. Essentially these are replacements for HDD drives and will, in most cases, deliver increased performance, a smaller footprint, higher transfer rates, and improved IOPS. Such SSDs only go so far in solving the I/O bottleneck problem, as they are connected to the server via relatively slow interconnects. PCIe SSD devices, on the other hand, deliver the highest I/O performance possible. The PCIe world includes vendors such as Fusion-io and Texas Memory Systems, as well as emerging companies such as Virident Systems.

SSDs are Not Created Equal

While it is true that SSDs deliver much higher performance than HDDs, not all SSDs deliver the same level of IOPS, or with the same degree of predictability. The current crop of SSDs shows high performance in the early stages of use, but performance deteriorates depending on the workload (e.g., concurrent reads and writes, large numbers of I/O requests) and on how full the drive is. The software drivers of these SSDs are the key here, as they manage the flash, specifically wear leveling and garbage collection.

Wear leveling ensures that writes to flash are spread out over all the available cells. This is required due to the limited number of write cycles of flash: 100K for SLC and 5-10K for MLC. Garbage collection, on the other hand, deals with an inherent property of flash, which requires writes to be preceded by the erasure of a large block of the flash; flash does not support in-place writes like memory. SSD drivers therefore have to juggle flash blocks behind the scenes to fulfill I/O requests from applications while reclaiming flash blocks marked for erasure. This is done by reserving, or over-provisioning, flash for garbage collection; e.g., a 100GB SSD may actually use 150GB of “raw” flash capacity, giving the driver 50GB of scratch capacity to manage garbage collection. These characteristics of flash entail a “flash translation layer” (FTL), which presents a standard block-device view to the application while moving physical blocks around for wear leveling and garbage collection.
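A toy model makes the FTL’s bookkeeping concrete. The sketch below is a minimal, hypothetical illustration of out-of-place writes, over-provisioning, wear-leveling bookkeeping, and greedy garbage collection; it is not any vendor’s actual driver, and the tiny block/page geometry is invented for readability:

```python
# Toy flash translation layer (FTL): a minimal, hypothetical sketch of
# out-of-place writes, over-provisioning, wear leveling, and garbage
# collection. It models the bookkeeping only (no user data is stored).

import random

PAGES_PER_BLOCK = 4          # invented geometry, kept tiny for readability


class ToyFTL:
    def __init__(self, logical_pages, raw_blocks):
        # Over-provisioning: raw flash must exceed the exported logical capacity.
        assert logical_pages < (raw_blocks - 1) * PAGES_PER_BLOCK
        self.logical_pages = logical_pages
        self.num_blocks = raw_blocks
        self.l2p = {}                                    # logical page -> (block, page)
        self.valid = [[False] * PAGES_PER_BLOCK for _ in range(raw_blocks)]
        self.erase_counts = [0] * raw_blocks             # wear-leveling bookkeeping
        self.free_blocks = set(range(raw_blocks))
        self._open_new_block()

    def _open_new_block(self):
        # Crude wear leveling: prefer the least-erased free block.
        self.open_block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(self.open_block)
        self.write_ptr = 0

    def _append(self, lpage):
        # Out-of-place write: append to the open block and invalidate the old copy.
        old = self.l2p.get(lpage)
        if old is not None:
            self.valid[old[0]][old[1]] = False
        self.l2p[lpage] = (self.open_block, self.write_ptr)
        self.valid[self.open_block][self.write_ptr] = True
        self.write_ptr += 1

    def write(self, lpage):
        assert 0 <= lpage < self.logical_pages
        if self.write_ptr == PAGES_PER_BLOCK:            # open block is full
            if self.free_blocks:
                self._open_new_block()
            else:
                self._garbage_collect()                  # reclaims and reopens a block
        self._append(lpage)

    def _garbage_collect(self):
        # Greedy victim selection: the closed block with the fewest valid pages.
        closed = [b for b in range(self.num_blocks)
                  if b != self.open_block and b not in self.free_blocks]
        victim = min(closed, key=lambda b: sum(self.valid[b]))
        survivors = [lp for lp, (blk, _) in self.l2p.items() if blk == victim]
        self.valid[victim] = [False] * PAGES_PER_BLOCK   # block-granular erase
        self.erase_counts[victim] += 1
        self.open_block, self.write_ptr = victim, 0      # reuse the reclaimed block
        for lpage in survivors:                          # relocate still-valid data
            self._append(lpage)


if __name__ == "__main__":
    ftl = ToyFTL(logical_pages=12, raw_blocks=5)         # 20 raw pages exported as 12
    for _ in range(10_000):
        ftl.write(random.randrange(12))                  # random overwrites force GC
    print("erase counts per block:", ftl.erase_counts)
```

Running the demo with a uniformly random overwrite pattern keeps the erase counts roughly balanced across blocks; a production FTL does the same bookkeeping in firmware or a host driver under far tighter CPU and memory budgets.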

The measure of goodness of an SSD, then, is how well its driver manages flash while delivering steady, predictable IOPS with a minimal reserve of flash and consuming as few system resources (CPU cycles, system memory) as possible.

Enterprise Solid-State Devices – Tier-0

Solid-state devices based on flash and PCIe are emerging as a new class of enterprise storage option: Tier-0. Tier-0 is a storage tier optimized specifically for high-performance workloads, which can benefit the most from using flash memory. By implementing a Tier-0 solution, specific data sets can be moved to higher-performance, flash-memory-based storage, resulting in dramatic improvements in application throughput. Access to data in Tier-0 is at near-memory speeds and is focused on making applications run faster and more predictably. The fast read and write performance of NAND flash, ever-decreasing price points, very low power consumption, and increasing levels of reliability are the foundations for this disruptive solution to the performance-starved workloads of the datacenter.

Applications running on current multicore, multisocket servers will no longer be starved for performance by slow HDD storage subsystems. A PCIe-based SSD Tier-0 will rebalance servers and storage, creating an optimized solution to the I/O bottleneck experienced today.

This Tier-0 storage will provide users with several capabilities, including sustained, predictable performance for the lifetime of the product; enterprise-class reliability so that data is never lost; and field upgradeability that does not require replacement of PCIe cards. Finally, a Tier-0 solution needs to be affordable. While flash memory is more expensive per gigabyte than HDD, flash memory costs are decreasing significantly year over year. As electricity costs continue to increase and flash prices decrease, the relative cost per gigabyte and cost per IOPS of flash are continually improving. Flash out-performs hard drives by at least an order of magnitude, making the cost per IOPS of Tier-0 flash extremely attractive.
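A quick, purely illustrative calculation shows the two metrics pulling in opposite directions. All prices and performance figures below are assumptions invented for the arithmetic, not quoted specifications:

```python
# Illustrative $/GB vs. $/IOPS comparison. Every price and performance figure
# here is an assumption for the sake of the arithmetic, not a quoted spec.

devices = {
    #                price ($), capacity (GB), random IOPS
    "15K RPM HDD":   (300,       300,             180),
    "PCIe SLC SSD":  (3_000,     100,         100_000),
}

for name, (price, gb, iops) in devices.items():
    print(f"{name:13s}  ${price / gb:6.2f}/GB   ${price / iops:8.4f}/IOPS")
```

Under these assumptions the SSD loses by roughly 30x on $/GB but wins by more than 50x on $/IOPS, which is why Tier-0 is sized around hot, IOPS-hungry data sets rather than bulk capacity.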

In conclusion, all the pieces are in place for a true enterprise-class SSD-based Tier-0 storage hierarchy:

  • High performance multicore, multisocket servers.
     
  • PCIe form factor delivering the highest bandwidth possible and lowest latency.
     
  • SLC NAND flash for high, sustained performance over the lifetime of the device.
     
  • SLC NAND flash delivering true enterprise-class five 9’s reliability and field serviceability.
     
  • Sophisticated and transparent software for garbage collection and wear leveling.

It is also important to point out that superior technology, by itself, does not guarantee success. The winners in flash-based Tier-0 storage will be those vendors who can provide all the performance benefits of enterprise-class flash while plugging into existing storage infrastructures and usage models, delivering the same reliability and manageability that users have come to expect from established enterprise storage solutions.
