Back to the Future: Solid-State Storage in Cloud Computing

By Steve Campbell

May 28, 2010

Data continues to grow at an alarming rate across organizations of all sizes, and it’s not just new data that’s growing; nothing is thrown away. Growth in data is also fueled by the popularity of portal, search, media, and e-commerce sites, such as Amazon, Yahoo, and eBay, and by the exponential growth of social networking sites, such as Facebook and Twitter. In the enterprise, data is growing as companies make better use of business analytics and as traditional high-performance computing (HPC) runs an increasing number of data-intensive, complex simulations. Not only is data growing, but the way it is accessed is also changing. Data analysis, for example, has transitioned from a traditional batch-mode, reporting style to an ad-hoc, on-demand, real-time access model. The former is well supported by sequential scans over large volumes of data, but the latter requires random I/O, which is difficult to support with existing storage infrastructures.

The size of data stores and content on major Web sites is estimated to be quadrupling every 18 months, and the number of queries per terabyte is doubling every 18 months. Servers have grown significantly in capability and performance to meet these computing needs. Storage, however, has kept up with the demand for data volume but not with the speed of access to that data.

The growth in servers, storage, and networking devices in the datacenter continues to push the envelope in space, power, cooling, and management. Hard-drive storage performance has not kept pace with server performance or networking bandwidth. The result is an imbalance of computing resources: I/O bottlenecks, under-utilized servers, and over-provisioned storage, leading to excessive costs and storage sprawl.

The I/O bottleneck is not a new phenomenon; it did not happen overnight and has been a problem for decades. In the first half of the 1980s, Cray Research introduced an optional Solid-state Storage Device (SSD) for the Cray X-MP. There were several reasons for this move, one of which, and perhaps the most important, was the ability to stage critical application data and files (a form of tiering) to reduce CPU wait time. The result was better CPU utilization and significant improvement in time-to-solution for performance-starved petroleum, aerospace, automotive, chemistry, and nuclear codes. Connected to the Cray X-MP/4 through two high-speed channels, the SSD enabled users to exploit existing applications and develop new algorithms to solve larger and more sophisticated science and engineering problems. By eliminating CPU wait time, the SSD delivered better CPU utilization, better performance, and better price/performance.

The SSDs for the Cray X-MP were custom built and addressed the problem of providing high I/O bandwidth for long vectors, which do not cache well in processor caches. These early SSDs therefore remained exclusive and expensive. However, with industry acceptance and growth of standards-based architectures, specifically x86, the computing paradigm began to shift. The general-purpose nature of x86-based architectures, with their hierarchical caches, is easily served by I/O subsystems built on standard hard-drive technology. Flash technology is now maturing and SSDs are on the rise. In addition, workloads are shifting to ad-hoc, on-demand access, driving the need for random I/O, which puts electro-mechanical hard drives under severe strain but is effectively serviced by SSDs.

During this same period, several system integrators worked with solid-state storage devices and developed application-specific solutions, for example for seismic processing, to boost overall system throughput by eliminating the I/O bottleneck. As the decade closed, the market began to evolve, with products becoming available for everything from PCs to large-scale UNIX servers.

The last two decades have seen tremendous changes in solid-state technology, fueled by the huge growth in mobile consumer devices such as MP3 players, digital cameras, media devices, and mobile phones, all the way up to multi-terabyte SSDs. Industry analyst firms IDC and Gartner are watching and predicting growth in the SSD technology segment and, in particular, SSD growth for enterprise-class computing.

What is Driving the SSD Growth?

There are several drivers behind the growth and adoption of SSD technology:

  • Poor performance of HDD technologies, creating a performance gap.

  • Flash technology evolution, specifically NAND flash, as capacity increases and price declines, improving the economics versus HDD.

  • Reliability of flash, with growing use of single-level cell (SLC) flash for enterprise-grade reliability.

  • Widespread availability of multi-core processor technology.

  • Growing awareness of datacenter power constraints as a result of storage sprawl.

  • New data-intensive workloads that access data randomly.

Performance Gap

Today there is a growing performance gap between the microprocessor and HDD storage devices. Over the past two decades, processing performance and networking performance have increased significantly compared to HDD performance. This has created a gap between processing and network speeds and the I/O available through HDDs. To help compensate, IT managers typically add more external HDD devices and DRAM to speed up throughput. Increasing DRAM enables systems to hold working sets in memory and avoid disk latency, and adding HDDs can increase throughput by enabling I/O operations to be performed in parallel, e.g., by striping across RAIDed HDDs. This helps to bridge the performance gap but creates an expensive and difficult-to-manage environment, together with increased power draw, rack space requirements, and higher TCO.
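To see why adding DRAM only goes so far for random I/O, here is a minimal sketch of average access time under a DRAM cache; the latency figures and hit rates are illustrative assumptions, not measurements:

    # Illustrative sketch: effective access time with a DRAM cache in front of
    # spinning disks. All numbers are rough, assumed figures.
    DRAM_LATENCY_US = 0.1      # ~100 ns, assumed
    HDD_LATENCY_US = 8000.0    # ~8 ms seek plus rotational delay, assumed

    def effective_latency_us(hit_rate: float) -> float:
        """Average access latency for a given DRAM cache hit rate."""
        return hit_rate * DRAM_LATENCY_US + (1.0 - hit_rate) * HDD_LATENCY_US

    for hit_rate in (0.90, 0.99, 0.999):
        print(f"hit rate {hit_rate:.1%}: {effective_latency_us(hit_rate):8.1f} us average")

Even at a 99.9 percent hit rate, the rare misses that do go to disk dominate the average, which is why random-I/O workloads remain disk-bound despite large caches.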

Enterprise servers running datacenter applications, ranging from Web 2.0 to HPC to business analytics, can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can deliver only thousands of IOPS combined. HDDs are great for capacity and large blocks of sequential data but are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, CPUs sit under-utilized as they wait for data.
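A back-of-the-envelope calculation makes the mismatch concrete; the per-device figures below are rough assumptions, not vendor specifications:

    import math

    # Assumed figures: how many 15K RPM drives it takes to service a random-I/O
    # workload, versus a single flash device.
    TARGET_IOPS = 200_000          # assumed random-I/O demand from the servers
    HDD_RANDOM_IOPS = 180          # assumed for one 15K RPM drive
    SSD_RANDOM_IOPS = 100_000      # assumed for one PCIe flash device

    print("HDDs needed:", math.ceil(TARGET_IOPS / HDD_RANDOM_IOPS))   # ~1,112 spindles
    print("SSDs needed:", math.ceil(TARGET_IOPS / SSD_RANDOM_IOPS))   # 2 devices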

Solid-state storage devices based on flash memory are poised to disrupt the industry. Solid-state storage delivers a performance boost compared to HDD, closing the I/O gap between microprocessor and storage; flash brings Moore’s Law to storage rather than Newton’s Law. Moore’s Law describes the historical trend in computing hardware of doubling the number of transistors every two years, and the capabilities of many digital electronic devices are linked to it, from processing speed to the number and size of pixels in a digital camera. Unlike HDDs, solid-state storage will track Moore’s Law. The CPU no longer waits for data, resulting in improved time-to-solution for performance-starved applications. Users will experience not only better time-to-solution but also reduced rack space, lower power, and increased server utilization, all leading to improved TCO. Furthermore, system reliability is improved, as solid-state storage has no moving parts. Incorporating flash as a new level of the storage hierarchy will dramatically reduce the CPU-to-storage bottleneck and smooth the performance disparity in today’s hierarchy of DRAM, disk, and tape.

There is no shortage of SSD solutions; most today are based on the disk form factor interfaced with SATA, SAS, or Fibre Channel (FC). Today HDD storage is connected as either “direct-attached storage” or “network-attached storage.” Whatever the connection, direct-attached storage is closest to the CPU and delivers both price ($/GB) and performance advantages; the same applies to SSDs. Directly connected SSDs can deliver 10x the performance of network-attached drives. SSDs based on the PCIe interface deliver the highest performance and lowest latency of all SSD interfaces; PCIe-based SSDs boost performance by 10x or more compared to SAS or FC-based SSDs. In other words, 100x the performance of network-attached storage is possible. Such high-end performance creates the opportunity for a new “Tier-0” in the storage hierarchy, delivering high bandwidth and low latency to accelerate high-performance workloads. Tier-0 is an optimized storage tier specifically for high-performance workloads that benefit from flash memory and the PCIe interconnect. PCIe-attached SSD in the enterprise is predicted to be the strongest-growing part of this technology market segment.
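A rough latency ladder helps put that Tier-0 positioning in context; the figures below are order-of-magnitude assumptions for hardware of this era, not measured values:

    # Order-of-magnitude latency assumptions for each tier (microseconds).
    tiers_us = {
        "DRAM":                      0.1,      # ~100 ns
        "PCIe flash SSD":           50.0,      # tens of microseconds
        "SAS/SATA flash SSD":      150.0,      # adds controller/protocol overhead
        "Direct-attached 15K HDD": 8000.0,     # seek plus rotation
        "Network-attached HDD":   10000.0,     # adds a network round trip
    }

    for tier, latency in tiers_us.items():
        print(f"{tier:<26} ~{latency:>8.1f} us  ({latency / tiers_us['DRAM']:,.0f}x DRAM)")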

Flash Technology

Developed in the 1980s, flash technology is low-cost, non-volatile computer memory that can be erased and reprogrammed electronically. Most people are familiar with some form of commercialized flash device, as it is now commonly used in cameras, music players, and cell phones. Advances in technology are now making it a strong storage device for the enterprise that can help fill the performance gap. A growing number of enterprises are using or evaluating flash SSDs for “Tier-0” data storage for several reasons: bandwidth and low-latency access to data, IOPS per watt, and IOPS per dollar.

The use of NAND flash technology in SSDs is commonplace, with more than 100 vendors offering SSD products. Beware, though: not all NAND flash is created equal. NAND flash is available in two technologies: single-level cell (SLC) and multi-level cell (MLC). MLC stores two bits or more in a single cell, compared to one bit per cell for SLC, so MLC is higher density and lower cost than SLC. MLC is common in consumer devices such as MP3 players, cameras, mobile phones, and USB thumb drives. SLC, on the other hand, is faster and more reliable, making it ideal for enterprise datacenters.

What about Reliability?

Matching performance and capacity to the user requirement is key to the right solution. Nevertheless, there is more to it than performance, capacity, and price. A key difference between SLC and MLC is the higher write-cycle durability of SLC: roughly 100K writes per cell, versus 5-10K per cell for MLC. For enterprise-class applications, this is a significant difference and advantage. SLC delivers roughly 10x better reliability and lifetime at a lower cost of ownership. Enterprise environments demand 24x7 operation with high IOPS throughput. An MLC-based solution in a true enterprise setting would need to be replaced every few months to keep up with the demanding IOPS and 24x7 reliability, increasing the cost of ownership. SLC is the right answer for enterprise performance and reliability.
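A hedged lifetime estimate shows why the endurance gap matters for 24x7 operation; the endurance figures come from the discussion above, while the drive capacity, write amplification, and daily write volume are illustrative assumptions only:

    CAPACITY_GB = 100              # assumed usable capacity
    WRITE_AMPLIFICATION = 2.0      # assumed extra internal writes from garbage collection
    DAILY_WRITES_GB = 2_000        # assumed sustained host write volume

    def lifetime_years(endurance_cycles: int) -> float:
        """Years until the cells reach their rated program/erase cycle limit."""
        total_writable_gb = CAPACITY_GB * endurance_cycles / WRITE_AMPLIFICATION
        return total_writable_gb / DAILY_WRITES_GB / 365.0

    print(f"SLC (100K cycles): {lifetime_years(100_000):.1f} years")
    print(f"MLC (10K cycles):  {lifetime_years(10_000):.1f} years")
    print(f"MLC (5K cycles):   {lifetime_years(5_000):.1f} years")

Under these assumptions the SLC device lasts years, while the MLC device wears out in months, consistent with the replacement cycle described above.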

Impact of Microprocessor Architectures

Without a change to storage architecture and technology, the server I/O performance gap will continue to widen. With the availability of multi-core x86 processors from Intel and AMD, the gap widens even further. These advanced microprocessors deliver higher clock rates and more cores each year. Intel’s Nehalem-EX, today, offers up to eight cores per processor and, in a four-socket server, makes a very potent high-performance platform. Intel’s next-generation processor microarchitecture, Sandy Bridge, will be available sometime in 2011. Sandy Bridge will be built on Intel’s 32-nanometer process and will no doubt offer more cores, more performance, higher speeds, and more PCIe interconnect slots, making it an ideal host for Tier-0 storage based on advanced PCIe form factors delivering high bandwidth, low latency, and high reliability.

There are numerous providers of SAS or SATA SSD technology; most storage vendors have an SSD offering. Essentially these are replacements for HDD drives and will, in most cases, deliver increased performance, a smaller footprint, higher transfer rates, and improved IOPS. Such SSDs go only so far in solving the I/O bottleneck problem, however, because they are connected to the server via relatively slow interconnects. PCIe SSD devices, on the other hand, deliver the highest I/O performance possible. The PCIe world includes vendors such as Fusion-IO, Texas Memory, and emerging companies such as Virident Systems.

SSDs are Not Created Equal

While it is true that SSDs deliver much higher performance than HDDs, not all SSDs deliver the same level of IOPS, or with the same degree of predictability. The current crop of SSDs shows high performance in the early stages of use, but performance deteriorates depending on the workload (e.g., concurrent reads and writes, large numbers of I/O requests) and on how full the drive is. The software drivers of these SSDs are the key here, as they manage the flash, specifically wear leveling and garbage collection.

Wear leveling ensures that writes to flash are spread out over all the cells available. This is required due to the limited number of write cycles of flash: 100K for SLC and 5-10K for MLC. Garbage collection, on the other hand, deals with an inherent property of flash, which requires writes to be preceded by erasure of a large block of the flash; flash does not support in-place writes like memory. SSD drivers therefore have to juggle flash blocks behind the scenes to fulfill I/O requests from applications while collecting flash blocks marked for erasure. This is done by reserving, or over-provisioning, flash for garbage collection; e.g., a 100GB SSD may actually use 150GB of “raw” flash capacity, giving the driver 50GB of scratch capacity to manage garbage collection. These characteristics of flash entail a “flash translation layer” (FTL), which presents a standard block-device view to the application while moving physical blocks around for wear leveling and garbage collection.
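A toy flash translation layer makes these mechanics concrete. This is a minimal sketch under heavily simplified assumptions (one page per erase block, whole-device garbage collection); real FTLs map pages within multi-page erase blocks and are far more sophisticated:

    class ToyFTL:
        def __init__(self, logical_pages: int, physical_pages: int):
            assert physical_pages > logical_pages      # over-provisioned spare area
            self.l2p = {}                              # logical page -> physical page
            self.data = [None] * physical_pages        # simulated flash contents
            self.erase_count = [0] * physical_pages    # wear per physical page
            self.free = set(range(physical_pages))     # erased, writable pages
            self.invalid = set()                       # stale pages awaiting erase

        def write(self, lpage: int, value: str) -> None:
            if not self.free:
                self._garbage_collect()
            # Wear leveling: write to the least-erased free page.
            ppage = min(self.free, key=lambda p: self.erase_count[p])
            self.free.remove(ppage)
            self.data[ppage] = value
            old = self.l2p.get(lpage)
            if old is not None:
                self.invalid.add(old)                  # out-of-place update
            self.l2p[lpage] = ppage

        def read(self, lpage: int) -> str:
            return self.data[self.l2p[lpage]]

        def _garbage_collect(self) -> None:
            # Erase stale pages so they can be rewritten; each erase adds wear.
            for ppage in self.invalid:
                self.data[ppage] = None
                self.erase_count[ppage] += 1
                self.free.add(ppage)
            self.invalid.clear()

    # Usage: 8 logical pages exposed to the host, 12 physical pages of raw flash.
    ftl = ToyFTL(logical_pages=8, physical_pages=12)
    for i in range(100):
        ftl.write(i % 8, f"version-{i}")               # repeated overwrites
    print(ftl.read(3), max(ftl.erase_count))

In this sketch, repeated overwrites of the same eight logical pages are absorbed by the four over-provisioned physical pages, and the erase counts stay spread across the device rather than hammering a single cell.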

The measure of goodness of an SSD then becomes how well its driver manages flash while delivering steady, predictable IOPS with a minimal reserve of flash and while consuming as few system resources (CPU cycles, system memory) as possible.

Enterprise Solid-State Devices – Tier-0

Solid-state devices based on flash and PCIe are emerging as a new class of enterprise storage option: Tier-0. Tier-0 is an optimized storage tier specifically for high-performance workloads, which can benefit the most from using flash memory. By implementing a Tier-0 solution, specific data sets can be moved to higher-performance, flash-memory-based storage platforms, resulting in dramatic improvements in application throughput. Access to data in Tier-0 is at near-memory speeds and is focused on making applications run faster and more predictably. The fast read and write performance of NAND flash, ever-decreasing price points, very low power consumption, and increasing levels of reliability are the foundations of this disruptive solution for the performance-starved workloads of the datacenter.

Applications running on current multicore, multisocket servers will no longer be starved for performance by slow HDD storage subsystems. A PCIe-based SSD Tier-0 will rebalance servers and storage, creating an optimized solution to the I/O bottleneck experienced today.

This Tier-0 storage will provide users with several capabilities, including sustained, predictable performance for the lifetime of the product; enterprise-class reliability so that data is never lost; and field upgradeability that does not require replacement of PCIe cards. Finally, a Tier-0 solution needs to be affordable. While flash memory is more expensive per gigabyte than HDD, flash memory costs are decreasing significantly year over year. As electricity costs continue to increase and flash prices decrease, the relative cost per gigabyte and cost per IOPS of flash is continually improving. Flash out-performs hard drives by at least an order of magnitude, making the cost per gigabyte of Tier-0 flash extremely attractive.
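The economics can be sketched with a simple comparison; the prices and performance figures below are rough, made-up assumptions for illustration, not vendor quotes:

    drives = {
        #                ($/drive, usable GB, random IOPS, watts), all assumed
        "15K SAS HDD":   (400,     300,        180,         15),
        "PCIe SLC SSD":  (3000,    100,        100_000,     25),
    }

    for name, (price, gb, iops, watts) in drives.items():
        print(f"{name:<13} ${price / gb:7.2f}/GB   ${price / iops:8.4f}/IOPS   "
              f"{iops / watts:10,.0f} IOPS per watt")

Per gigabyte the HDD still wins; per IOPS and per watt of random I/O the flash device is orders of magnitude ahead, which is the economic argument behind Tier-0.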

In conclusion, all the pieces are in place for a true enterprise-class SSD-based Tier-0 storage hierarchy:

  • High-performance multicore, multisocket servers.

  • The PCIe form factor, delivering the highest bandwidth and lowest latency possible.

  • SLC NAND flash for high, sustained performance over the lifetime of the product.

  • SLC NAND flash delivering true enterprise-class five-nines reliability and field serviceability.

  • Sophisticated and transparent software for garbage collection and wear leveling.

It is also important to point out that superior technology, by itself, does not guarantee success. The winners in flash-based Tier-0 storage will be those vendors who can provide all the performance benefits of enterprise-class flash while plugging into existing storage infrastructures and usage models, delivering the same reliability and manageability that users have come to expect from established enterprise storage solutions.
