Small TACC Cluster Set to Shatter IOPS Ceiling

By Nicole Hemsoth

October 18, 2013

The Texas Advanced Computing Center (TACC) has been in the habit of spinning up some rather interesting machines these days, including the hybrid Stampede system. In early 2015, the center will be home to another notable resource: Wrangler, a data analysis and management cluster aimed at supporting the data-intensive needs of the open science community.

Taking its place alongside Stampede in the space left open by the retired Ranger machine, the new NSF-supported, “big data”-driven system will provide TACC and the communities it serves with a Hadoop-ready, 120-node cluster supplied by Dell. But that’s not the real story here; what sets the system apart is its anticipated high-performance NAND flash tier, supplied by the (still stealth) company DSSD.

According to Chris Jordan, one of the PIs on the new system, the high-performance NAND tier is set to deliver one terabyte per second of throughput and a whopping 275 million IOPS.
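For a sense of scale, a quick back-of-the-envelope calculation (ours, not TACC’s) shows what those two figures imply about average request size if both peaks were reached simultaneously:

# Rough arithmetic on the quoted Wrangler flash-tier targets (our estimate, not TACC's).
bandwidth_bytes_per_s = 1e12   # 1 TB/s quoted throughput
iops = 275e6                   # 275 million IOPS quoted

avg_io_size = bandwidth_bytes_per_s / iops
print(f"Implied average I/O size: {avg_io_size / 1024:.1f} KB")   # roughly 3.6 KB

In other words, the IOPS figure speaks to very small, random accesses rather than large streaming reads.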

The key to that kind of remarkable storage performance is the technology provided by DSSD. Chances are you haven’t heard of the company unless you follow news about the trajectory of Sun co-founder Andy Bechtolsheim’s career. His startup, DSSD, has been in development mode for well over three years and is the subject of a number of patents, though it still has no public customers or definitive products. When asked how TACC came to acquire its NAND flash products, Jordan explained that the center’s job is to keep an eye on emerging technologies and that it is “well connected” with Bechtolsheim and other companies on the edge of offering products publicly.

Of the DSSD patents in question (three could be found), one seems most promising, though we note again that TACC’s Chris Jordan was unable to give us any detail, so this is speculation. Filed in 2012 and developed by William H. Moore and Jeffrey S. Bonwick, it describes a storage system with “guaranteed read latency”: “A method for writing data to persistent storage. The method includes receiving a first request to write a first datum to persistent storage including NAND dies, identifying a first NAND die in which to write a first copy of the first datum and a second NAND die in which to write a second copy, generating a second request to write the first copy of the first datum to the first NAND die and a third request to write the second copy to the second NAND die, and waiting until the first NAND die and second NAND die are not busy. Based on a determination that the first NAND die and the second NAND die are not busy: issuing the second request to the first NAND die, and issuing the third request to the second NAND die after the second request is complete.”
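To make that dense patent language a little more concrete, the sketch below (ours, based only on the abstract quoted above; the class names, placement policy and busy-polling mechanism are all hypothetical) walks through the dual-copy write path it describes:

import time

class NandDie:
    # Hypothetical stand-in for a single NAND die that can be busy or idle.
    def __init__(self, die_id):
        self.die_id = die_id
        self.busy = False

    def write(self, datum):
        self.busy = True
        # ... program the NAND page here ...
        self.busy = False

def write_with_guaranteed_read_latency(datum, dies):
    # Identify two distinct dies to hold the two copies of the datum
    # (the abstract does not specify a placement policy; this is a stand-in).
    first, second = dies[0], dies[1]

    # Wait until neither target die is busy before issuing either write,
    # so that reads never queue behind an in-flight program operation.
    while first.busy or second.busy:
        time.sleep(0.0001)

    # Issue the write to the first die, then to the second once it completes.
    first.write(datum)
    second.write(datum)

dies = [NandDie(i) for i in range(8)]
write_with_guaranteed_read_latency(b"example datum", dies)

The interesting property is not the mirroring itself but the scheduling: by refusing to start a write until both target dies are idle, a read aimed at either die is never stuck behind a long program cycle, which appears to be where the “guaranteed read latency” claim comes from.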

Again, we weren’t able to get further details, but on a related front, Jordan says the compute environment is what TACC defines as “embedded processing.” At the configuration level, this differs from a typical Linux cluster setup, in which a large number of compute nodes are strung together by a high-performance interconnect to a separate storage subsystem with its own servers. Here, storage will sit closer to everything so that, for the most part, users won’t go through an intermediate server to reach their data. Fewer hops on the network between users and their data means higher performance and lower-latency data access than they might see on a more horsepower-driven machine like Stampede.

Jordan tells us that Dell and DSSD are distinct, separate partners on the project and that while the NAND component wasn’t the sole basis for hardware decisions in general, it was a “very exciting part” of the initial concept. He noted that there are no special or custom Dell components for the system, but they did “work very closely” with Dell to achieve the desired result.

The Wrangler system will be the product of a $6 million NSF grant. Make some not-so-wild guesses about what 120 nodes and some human support cost (a continuing support grant of another $6 million will be funded separately), and quite a bit is left over to fund the NAND storage effort.

Outside of the flashy side of the story, there are a few other elements worth noting. First, each node will be powered by 32 Haswell cores, and while there are no hard, verified numbers to support the performance, we’ll be staying tuned to see how these early processors crunch some of the big data analytics problems that XSEDE and other scientific communities throw Haswell’s way. Further, to support the anticipated data-intensive workloads, TACC has made some noteworthy decisions on the memory front, provisioning 4 GB of RAM per core (versus 2 GB in a standard cluster) for 128 GB per node to support faster data access across the memory subsystem. Wrangler will also be able to rope in both 40 GbE and InfiniBand.
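The per-node memory figure follows directly from the quoted core count and per-core provisioning; a trivial check, using only the numbers above:

# Quick check of the quoted Wrangler memory configuration.
cores_per_node = 32
gb_per_core_wrangler = 4    # quoted for Wrangler
gb_per_core_typical = 2     # quoted as the standard-cluster baseline

print(cores_per_node * gb_per_core_wrangler)  # 128 GB per Wrangler node
print(cores_per_node * gb_per_core_typical)   # 64 GB in a comparable standard node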

Additionally, this is one of a growing number of forays into the Hadoop and MapReduce space by a major research institution. TACC isn’t the first to install a Hadoop cluster, but according to Jordan, this one will likely grow, both in terms of additional nodes and the people required to support it. Jordan told us that while they’re using the native Apache Hadoop implementation at this point, they haven’t ruled out one of the commercial distributions (offered by companies like Cloudera, MapR and Hortonworks, for example).
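For readers less familiar with the stock Apache Hadoop stack Jordan references, the canonical entry-level job it runs is a streaming word count. The sketch below is a generic illustration of that pattern (not a Wrangler workload; the file name and invocation path are placeholders), with the mapper and reducer folded into one script:

#!/usr/bin/env python
# Generic Hadoop Streaming word-count sketch (illustrative only, not a Wrangler job).
# Typical invocation (jar path varies by installation):
#   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py
import sys

def run_mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def run_reducer():
    # Hadoop delivers mapper output sorted by key, so counts can be summed per key run.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    run_mapper() if sys.argv[1] == "map" else run_reducer()

The point of the framework, of course, is that the same two small functions run unchanged whether the input sits on one node or is spread across all 120.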

Of the Hadoop, storage and processing environments, Jordan says there were two real drivers for the design choices. First, he points to an increase in the overall need for a wider array of data analytics applications, which includes Hadoop- and MapReduce-type applications, but also a host of other statistical and data mining tools as well as basic database applications. He says that while a traditional cluster environment can do all of those things, it’s far from optimal for them.

Additionally, he points to a growing class of persistent services for collecting, sharing and even analyzing data that are used by communities or large projects. These need to be available and accessible enough to serve a cloud-based set of users. “Web users and web-based services are becoming a fundamental part of research in a way they haven’t been in the past,” he said, pointing to XSEDE and other projects, including domain-specific ones like iPlant, a science web application where users upload, share and analyze data or build their own VMs to run custom applications.

In addition to the system components we’ve already described, there will be two ten-petabyte disk installations, one on site at TACC and the other at Indiana University, where it will serve as an identical, high-capacity replicated storage resource.

We’ll catch up with TACC, and hopefully DSSD, at SC13 in Denver this year to see what else we can learn.

Editor’s Note:

In an earlier version of this article we compared the IOPS numbers of the TACC system with Blue Waters IOPS numbers derived from a Data Direct Networks statement. Those numbers described a single storage subsystem component and were not a valid basis for comparison. Notes from NCSA follow.

The article “Tiny TACC Cluster Set to Shatter IOPS Ceiling” included erroneous information about the Blue Waters system at NCSA.
Blue Waters does not have user-accessible flash storage. Blue Waters does have an online disk subsystem made up entirely of Sonexion storage units with 26 usable petabytes and performance greater than 1 TB/s.
Blue Waters also has a 300+ usable petabyte nearline tape subsystem.
The 1.4 million IOPS value described in the article is the vendor-quoted peak performance of a single DDN SFA12K storage unit, one of several components used to accelerate data access for the nearline tape subsystem, and does not reflect the full performance of Blue Waters.
The timeframes of the technologies discussed are separated by approximately five years, with Blue Waters installed and completely in service, and Wrangler projected to be installed in 2015.
HPCwire regrets the erroneous information in the original version of the article.
