What Drives Investment in the Middle of HPC?

By Nicole Hemsoth

May 15, 2014

When it comes to covering supercomputers, most of the attention falls on the front runners of the Top500. However, a closer look at the tail end of the rankings reveals some rather interesting use cases, not to mention paths of development, system design, and user-driven requirements for future build-out.

The University of Florida is home to one expanding system, which sits just at the cutoff of the top supercomputing rankings at #493. The university’s Director of Research Computing, Dr. Erik Deumens, tells us the real purpose of the system is to support as many diverse applications as possible with as few queue barriers as possible. While this is a familiar claim no matter the size of the site, the team has gone to great lengths to ensure that ongoing development of their flagship system, called HiPerGator, is driven solely by user demand.

It might not be surprising, then, at least to those in research computing, that demand for the latest generation of processors with a 10 or 20 percent performance jump is far less critical than simply being able to onboard an application without a long queue and run it in a reasonable amount of time. But meeting that need requires some serious thought about capacity, scheduling, and diverse application requirements. In other words, for those tuning in for the ultra-high performance computing story, this isn’t the most exciting tale. But there are important lessons to be learned from Deumens’ team’s experiences working with a broad range of applications and over 600 users to find out what really makes a fully functional system, all based on what amounts to an “economic” decision-making process for their HPC investments.

In essence, the economics of demand determine spending decisions at the University of Florida and several similar centers. This isn’t so different from the large scientific computing sites in theory, except that user requests trump all, including power and other considerations. “If the users are asking for the latest novel technology but it’s not the most efficient, we aren’t going to deny them what they need for their research,” says Deumens. In the case of HiPerGator, the university funds the system and staffing so that individual researchers can use their grants to buy a desired number of cores for their jobs. Flexibility is built into the “purchase”: users can burst past their purchased allocation, up to 10x, as needed, which avoids added complexity in scheduling and managing their jobs. Deumens and team use Moab and Torque to handle the many requests, and more sophisticated users can fine-tune their requests according to the mix of available architectures. The system tends to run under its maximum capacity at all times to avoid long wait times, since the one thing researchers want is timely (if not immediate) access to computational resources that deliver results in the anticipated timeframe. And essentially, says Deumens, everyone is happy.
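
As a rough illustration of the core-purchase model described above, here is a minimal Python sketch of how such a burst ceiling might be checked; the function, the 10x constant, and the accounting model are illustrative assumptions, not UF’s actual Moab/Torque configuration.

```python
# Hypothetical sketch of a "core purchase" burst policy: researchers buy
# a baseline number of cores and may burst past it up to a fixed factor.
# Names and numbers are illustrative, not UF's production accounting code.

BURST_FACTOR = 10  # users may exceed their purchased cores up to 10x

def request_fits(purchased_cores: int, cores_in_use: int, requested: int) -> bool:
    """Return True if a new job's core request stays within the
    group's burst ceiling (purchased allocation * BURST_FACTOR)."""
    ceiling = purchased_cores * BURST_FACTOR
    return cores_in_use + requested <= ceiling

# Example: a lab that bought 64 cores may run up to 640 at once.
print(request_fits(purchased_cores=64, cores_in_use=500, requested=128))  # True
print(request_fits(purchased_cores=64, cores_in_use=640, requested=1))    # False
```

In practice the real policy lives in the scheduler’s configuration rather than in application code; the point is simply that the ceiling scales with what a group has purchased.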

For some background, the HiPerGator system in its original incarnation (announced last year) offered over 16,000 AMD “Abu Dhabi” cores with Dell underpinnings, a 2.88 petabyte Terascala-built Lustre storage system, and Mellanox InfiniBand throughout. The team has since folded in a round of cores from pre-existing systems (both Intel and AMD), bringing the HPC core count to over 21,000. A set of nodes provides a total of 80 GPUs, with more planned for the future, along with the possibility of Xeon Phi coprocessors, as they work toward completing the build-out by this time next year. “There are always exceptions, but most of our users don’t care what processor generation they’re running on. They just want to get their work done.” All the while, the team keeps careful track of what users are looking for in terms of new or existing hardware and uses that information to shape what they ask vendors for during each year’s hardware and software buying cycles.

To put this in context, when the original HiPerGator emerged, there were a total of 8 GPUs available to researchers, bought simply to support a semester-long class that required them for special projects. However, once researchers at the university knew they were available, they began experimenting with porting codes, including AMBER on the molecular dynamics front. Those development activities led the application teams to want full production runs, which required more GPUs. And so the unexpected influx of GPU nodes occurred organically. This is exactly the type of case that will feed how the next generation of the system develops: actual user interest means more “purchases” from researchers, but to preserve their main goal of providing solid resources without the wait times, the team will make sure to supply ample nodes with whatever the research community seems to desire.
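
For a sense of what those production runs look like in practice, here is a hedged sketch of generating and submitting a Torque job that launches pmemd.cuda, AMBER’s GPU-accelerated MD engine; the resource directives and file names are assumptions for illustration, so the site’s own documentation governs the real syntax.

```python
# Sketch: write a Torque batch script for an AMBER GPU run and submit it
# with qsub. Paths, resource syntax, and input file names are assumptions.
import subprocess
import textwrap

job = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N amber_gpu
    #PBS -l nodes=1:ppn=1:gpus=1
    #PBS -l walltime=24:00:00
    cd $PBS_O_WORKDIR
    # pmemd.cuda is AMBER's GPU-accelerated molecular dynamics engine
    pmemd.cuda -O -i prod.in -p system.prmtop -c system.inpcrd -o prod.out
    """)

with open("amber_gpu.pbs", "w") as f:
    f.write(job)

subprocess.run(["qsub", "amber_gpu.pbs"], check=True)  # Torque's submit command
```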

Deumens and team are taking those desires on the road in the coming months. They are currently looking for vendors to help them supply the needs of HiPerGator 2, which, again, is slated for this time next year. He gave us a sense of what works (and what doesn’t) when it comes to supporting research at a university that wants to become a top-tier research center on the strength of its HPC capabilities.

First, he says, there have been some successes in their approach to scheduling. It used to be a manual process, but it has been eased through their Moab and Torque engines. Further, he highlighted the increasing role of Galaxy, the open source scientific gateway project for creating, tracking, and sharing scientific workflows, which has taken off in the biosciences community. He also says that for a research center their size, the more cores they have available, the better. While some of their users can take advantage of the InfiniBand fabric and run MPI or SMP jobs, in the end it’s all about getting up and running.
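
For a sense of how researchers script against a gateway like Galaxy, here is a minimal sketch using BioBlend, the Python client library for Galaxy’s REST API; the server URL and API key are placeholders, and the listing call is just one small corner of what the gateway exposes.

```python
# Minimal sketch: list a user's Galaxy workflows via BioBlend.
# The URL and API key below are placeholders, not a real endpoint.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.edu", key="YOUR_API_KEY")

# Each workflow is returned as a dict of metadata
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])
```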

The other element that has worked for research teams at the University of Florida is having a stable, strong storage system like their Terascala solution, which is capable of handling massive data flows, an increasing problem for all scientific computing sites as data demands race to keep pace with available computing capacity.

What’s missing from their system is something that will be difficult for any of the vendors who supply the next iteration of the machine next year, and it’s something we’ve heard from much larger centers. There is a dramatic need for a “super app” of sorts that turns a researcher’s desktop machine into a direct link to the supercomputing site, handling scheduling, data movement, and output in a seamless, portable interface. While this might seem easy in an era of web-based interfaces for everything, it’s what’s really missing for centers designed around simply serving scientific users, and something that he and his team will continue to look toward in the coming years.
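
To make the idea concrete, here is a rough sketch of one slice of such a “super app”: staging input from the desktop, submitting a job, and pulling results back over SSH with the paramiko library. The hostname, paths, and job script are hypothetical, and real tooling would add authentication, job polling, and error handling.

```python
# Hypothetical sketch of a desktop-to-cluster round trip: stage input,
# submit a batch job, and (later) retrieve output. Host, username, and
# file paths are placeholders; polling and error handling are omitted.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hipergator.example.edu", username="researcher")

sftp = client.open_sftp()
sftp.put("input.dat", "run/input.dat")  # stage input data to the cluster

stdin, stdout, stderr = client.exec_command("cd run && qsub job.pbs")
print("submitted:", stdout.read().decode().strip())

# ... later, once the job has completed ...
sftp.get("run/output.dat", "output.dat")  # retrieve results to the desktop
client.close()
```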

It was interesting to hear how concerns about power, performance, and ease of access differ at a much smaller HPC site than at the top-ten systems whose managers we so often talk to. Power is always a concern, of course, but at smaller scale, where exascale is something for the DOE and other government labs internationally to worry about, the problems of real-world daily operations boil down to one simple factor: make a supercomputer easy to use, quick to load into, and predictable in its time to result. A humbling reminder after so many conversations about eking performance out of the hottest processors, largest systems, and biggest power footprints on the planet.
