NSF Official On New Supers, Data-Intensive Future

By Nicole Hemsoth

March 28, 2013

It has been a noteworthy week in the world of scientific and technical computing as two long-awaited supercomputers have been formally revved up for big research action.

The Dell-Intel scientific workhorse, Stampede, at TACC was ushered into the large-scale distributed research fold yesterday. And at the time of this writing, the rather storied, IBM- and then Cray-backed Blue Waters system at NCSA is gearing up for its formal introduction.

At the heart of both of these systems is some serious monetary backing from the National Science Foundation (NSF), which has committed hundreds of millions of dollars to seeing both supers into the world, no matter how entangled the path. The organization funded the large majority of both projects in the name of furthering critical human-centered scientific work related to the environment, genomics, disaster preparedness and epidemiology.

We chatted earlier this week with Alan Blatecky, who directs the NSF's Division of Advanced Cyberinfrastructure, about where these supers fit into the overarching mission of the NSF, and what the future looks like as applications require systems that are as "big data" ready as they are computationally robust.

Blatecky reiterated that from an NSF standpoint, these are two major investments in HPC, but they aren’t necessarily related in terms of anticipated use or application types. As he told us, the two systems are designed for quite different purposes.

On the one hand, the massive Stampede will cater to a large number of users with an emphasis on boosting the breadth of applications, not to mention extending what those applications are able to crunch. Blue Waters, on the other hand, will focus on a much smaller number of users, perhaps as many as a dozen, who have very deep, specific research applications.

While grappling with multiple users across a distributed system like Stampede and its XSEDE base is never simple, there are far more pressing challenges. In addition to pointing to the extensive application retooling that needs to happen, especially on Blue Waters, there was one phrase we heard several times: "big data."

The ability to take advantage of the large number of cores on a machine like Blue Waters is one of the biggest challenges users will face, says Blatecky, who points to how his organization is providing support on the programming and computer science front to aid domain-specialist scientists. He said that going forward, the systems that will shine for the "big science" endeavors of the NSF will be those that can strike a balance between being data-intensive systems and retaining the computational power of massive numbers of cores, some of which are being pushed by accelerators and coprocessors.

As Blatecky detailed, “Our point of view at the NSF is focused on the broader base of scientific users. We’re interested in the data-intensive computational requirements, which is part of what’s unique about Blue Waters. It has that needed balance between power, memory and storage to address both the data-intensive and computationally-intensive applications.”

When asked about the supercomputing goals the NSF wants to support over the next five years, Blatecky said that the real mission is to support a broader group of scientific users, especially those working in hot applications like genomics, materials science and environmental research areas. Most of their plans revolve around socially-oriented missions, including studies to predict earthquakes, flood outcomes, disaster response situations, and medically-driven research on the HIV and epidemic modeling fronts.

We also talked briefly about how HPC as we know it, and as the NSF funds it, could change over the next five years. "I don't know what it will be," he noted, but he has little doubt that performance-driven architectures alone will fail to keep up with the very real data explosion across real science applications unless they strike the memory/storage/power balance that Blue Waters has.

While not all HPC applications are necessarily hugely data-intensive, a look down the list of applications reveals some of the most data-volume-driven research areas in science, particularly around medical and earth sciences projects. TACC, for instance, will now be the center of some cutting-edge earthquake, environmental and ecological research as scientists from around the world bring their best and brightest ideas, not to mention an unprecedented level of data, to the common table of the shared resource.

As TACC Director Jay Boisseau stated upon the formal announcement of Stampede yesterday, the system has been “designed to support a large, diverse research community. We are as excited about Stampede’s comprehensive capabilities and its high usability as we are of its tremendous performance.” On that note, 90% of TACC’s new powerhouse will be dedicated to the XSEDE program, which is a unified virtualized system that lets global scientists tap into powerful systems, new data wells and computational tools through one hub.

TACC will tap into the remaining horsepower for larger goals within its own center and in the University of Texas research community. And there is certainly some power to the system. As TACC describes in its own statement on the specs, the Dell and Intel system boasts the following points of pride:

Stampede system components are connected via a fat-tree, FDR InfiniBand interconnect. One hundred and sixty compute racks house compute nodes with dual, eight-core sockets, and feature the new Intel Xeon Phi coprocessors. Additional racks house login, I/O, big-memory, and general hardware management nodes. Each compute node is provisioned with local storage. A high-speed Lustre file system is backed by 76 I/O servers. Stampede also contains 16 large memory nodes, each with 1 TB of RAM and 32 cores, and 128 standard compute nodes, each with an NVIDIA Kepler K20 GPU, giving users access to large shared-memory computing and remote visualization capabilities, respectively. Users will interact with the system via multiple dedicated login servers, and a suite of high-speed data servers. The cluster resource manager for job submission and scheduling will be SLURM (Simple Linux Utility for Resource Management).
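Since SLURM will handle job submission and scheduling on the machine, a minimal batch script of the kind users would submit looks roughly like the sketch below; the partition name, node counts and executable are illustrative assumptions, not Stampede's actual configuration.

    #!/bin/bash
    #SBATCH --job-name=demo_run          # descriptive name for the job
    #SBATCH --nodes=4                    # number of compute nodes requested
    #SBATCH --ntasks-per-node=16         # one MPI rank per core on a dual eight-core node
    #SBATCH --time=02:00:00              # wall-clock limit (hh:mm:ss)
    #SBATCH --partition=normal           # hypothetical partition name
    #SBATCH --output=demo_run.%j.out     # output file; %j expands to the job ID

    # Launch the MPI application across all allocated ranks.
    # "my_simulation" stands in for the user's own executable.
    srun ./my_simulation input.dat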

Unlike Stampede, which is expected to make a top 5 showing on the Top500, Blue Waters will not be benchmarked, for reasons NCSA's Bill Kramer explained to us in detail right around SC12. Not that it needs a benchmark to convince us that it will be a scientific powerhouse.

The Blue Waters saga began back in 2007 when the NSF funded the super to the tune of $208 million. At the time, IBM was at the heart of the project, but the company withdrew and returned its payments for the Blue Waters system after weighing the costs against the returns. Cray was later selected to take over the project with a $188 million contract that would carry the super through to completion.

Over the past year, work on the system was completed and Blue Waters was installed at NCSA. The 11.6-petaflop (peak) supercomputer contains 237 Cray XE cabinets, each with 24 blade assemblies, and 32 cabinets of the Cray XK6 supercomputer with NVIDIA Tesla GPU computing capability.
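As a back-of-envelope illustration of what those cabinet counts imply, the Python sketch below multiplies them out under a common Cray packaging assumption of four compute nodes per blade; that per-blade figure is an assumption for illustration, not a number from NCSA.

    # Rough node-count estimate from the cabinet figures quoted above.
    # Assumption (not from the article): each Cray blade carries 4 compute nodes.
    XE_CABINETS = 237
    XK_CABINETS = 32
    BLADES_PER_CABINET = 24
    NODES_PER_BLADE = 4              # assumed packaging, for illustration only

    xe_nodes = XE_CABINETS * BLADES_PER_CABINET * NODES_PER_BLADE
    xk_nodes = XK_CABINETS * BLADES_PER_CABINET * NODES_PER_BLADE

    print(f"Estimated CPU-only (XE) nodes: {xe_nodes:,}")      # ~22,752
    print(f"Estimated GPU-equipped (XK) nodes: {xk_nodes:,}")  # ~3,072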

Currently available in “friendly-user” mode for NCSA-approved teams, Blue Waters provides sustained performance of 1 petaflop or more on a range of real-world science and engineering applications.

“Blue Waters is an example of a high-risk, high-reward research infrastructure project that will enable NSF to achieve its mission of funding basic research at the frontiers of science,” said NSF Acting Director Cora Marrett. “Its impact on science and engineering discoveries and innovation, as well as on national priorities, such as health, safety and well-being, will be extraordinary.”

What follows are a few examples of the exciting and promising research underway on Blue Waters (descriptions provided by the National Science Foundation).

Modeling HIV

Blue Waters is enabling Klaus Schulten and his team at UIUC to describe the HIV genome and its behavior in minute detail, through computations that require the simulation of more than 60 million atoms. They just published a paper in PLOS Pathogens touting an early discovery: not (yet) the structure of the HIV virus, but that of a smaller virus, which could only be achieved through a 10-million-atom molecular dynamics simulation, inconceivable before Blue Waters. The team is using Blue Waters to investigate complex and fundamental molecular dynamics problems requiring atomic-level simulations that are 10 to 100 times larger than those modeled to date, providing unprecedented insights.
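For a sense of why a 60-million-atom problem demands a system of this class, the rough estimate below assumes roughly 1 KB of state per atom (coordinates, velocities, forces, topology and neighbor lists); that per-atom figure is a loose rule of thumb for illustration, not one reported by the Schulten group.

    # Order-of-magnitude memory estimate for an all-atom MD simulation.
    # Assumption: ~1 KB of working state per atom; production codes vary.
    atoms = 60_000_000
    bytes_per_atom = 1024            # rule-of-thumb assumption

    total_gb = atoms * bytes_per_atom / 1024**3
    print(f"Approximate working set: {total_gb:.0f} GB")
    # Roughly 57 GB before replication, I/O buffers and analysis data,
    # which push the real requirement considerably higher.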

Global Climate Change

Also featured at the dedication event, Cristiana Stan and James Kinter of George Mason University are using Blue Waters to engage in topical research on the role of clouds in modeling the global climate system during present conditions and in future climate change scenarios.

Earthquake Prediction

A team at the Southern California Earthquake Center, led by Thomas Jordan, is carrying out large-scale, high-resolution earthquake simulations that incorporate the entire Los Angeles basin, including all natural and human-built infrastructure, requiring orders of magnitude more computing power than studies done to date. Their work will provide better seismic hazard assessments and inform safer building codes in preparation for "the Big One."

Flood Assessment, Drought Monitoring, and Resource Management

Engineering professor Patrick Reed and his team from Penn State, Princeton and the Aerospace Corporation are using Blue Waters to transform the understanding and optimization of space-based Earth science satellite constellation designs. “Blue Waters has fundamentally changed the scale and scope of the questions we can explore,” he said. “Our hope is that the answers we discover will enhance flood assessment, drought monitoring, and the management of water resources in large river basins worldwide.”

Fundamental Properties of Nature

Robert Sugar, professor of physics at the University of California, Santa Barbara, is using Blue Waters to more fully understand the fundamental laws of nature and to glean knowledge of the early development of the universe. “Blue Waters packs a one-two punch,” said Sugar. “Blue Waters enables us to perform the most detailed and realistic simulations of sub-atomic particles and their interactions to date. Studies such as these are a global endeavor, and the large data sets produced on Blue Waters will be shared with researchers worldwide for further discoveries.”
