Consolidating HPC’s Gains

By Gary Johnson

August 13, 2013

Despite phenomenal progress in HPC over a sustained period of decades, a few issues limiting its effectiveness and acceptance remain.  Prominent among these are the repeatability, transportability, and openness of HPC applications.  As we prepare to move HPC to the exascale level, we should take the time and effort to consolidate HPC’s gains and deal with these residual issues from the early days of computational science.  Only then will we be ready to reap the benefits of more powerful HPC tools.

HPC Tools

Nearly fifty years ago, in 1964, the first computer generally acknowledged as a supercomputer – the CDC 6600 – was introduced.  At that time, there was no Linpack Benchmark or Top500 List but, by the measures in use then, it was able to sustain a performance level of about 500 Kiloflops.

In 1970, ARPAnet, the progenitor of the Internet, came along.  A few years later, in 1973, Ethernet was invented.  In 1985, NSFnet was created, and in the early 1990s it morphed into the Internet.  In 1990, the World Wide Web was born, and in 1993 it was made visual by the release of the Mosaic web browser.  Also in 1993, the Top500 List was introduced; its top computer was a Thinking Machines CM-5, clocked at just under 60 Gigaflops.

In summary, HPC has existed for at least half a century and, in terms of HPC tools, we’ve had fairly capable supercomputers and networking for about 20 years.

HPC Applications

The concept of computational science came to public light no later than 1989, when our late friend and colleague, Ken Wilson, published his well-known Grand Challenges to Computational Science paper (unfortunately, it’s locked away behind a paywall).  So, both the HPC tools and the computational science concept for HPC applications gelled into something pretty close to their contemporary form a couple of decades ago. 

Originally, computational science was met with a fair amount of skepticism.  It was seen by some as just a collection of stunts, producing little more than pretty pictures – not the real stuff of science.  It was seen as lacking the rigor necessary to be on par with theory and experiment.  Computational science results were often criticized as one-off demos of unproven concepts. 

So, how effectively and convincingly have we been using HPC?

Repeatability, Transportability, Openness

Both theory and experiment share a few key attributes:

Repeatability (Recomputability)

A result obtained once can be repeated arbitrarily many times, given the same assumptions (for a theory) or conditions (for an experiment).

Transportability (Reuse)

Results do not depend on any particular theorist, experimentalist, or piece of apparatus.  They are transportable to other people and places – transcending any particular instance.

Openness

Results are open.  Theorists publish their theories and the corresponding proofs (if possible) or conjectures.  Experimentalists describe the conditions of their experiments and the details of their equipment and procedures.  These steps are taken to ensure the credibility of results by enabling their repeatability and transportability. 

HPC applications, as science, should also share these attributes – in order to rise above the early criticisms of computational science, and to be effective and convincing.

Current Status

Twenty years into the “modern era” of HPC applications, how are we doing?  Clearly, we’ve made our applications bigger and more complex.  Through improvements in the speed of both algorithms and hardware, our applications execute faster.  The concepts of Verification and Validation (V&V) and Uncertainty Quantification (UQ) for scientific codes have taken root – but perhaps not yet fully blossomed in general HPC practice. 

However, despite the laudable efforts of many of our HPC colleagues to solidify the standing of our field, significant issues with repeatability, transportability, and openness remain.  Here are a few recent developments:

Repeatability (Recomputability)

Ian Gent, Professor of Computer Science at the University of St Andrews, has recently published something he calls The Recomputation Manifesto.  It is described in a post of his at the Software Sustainability Institute.  The Manifesto contains six points (emphasis mine):

  1. Computational experiments should be recomputable for all time
  2. Recomputation of recomputable experiments should be very easy
  3. It should be easier to make experiments recomputable than not to
  4. Tools and repositories can help recomputation become standard
  5. The only way to ensure recomputability is to provide virtual machines
  6. Runtime performance is a secondary issue

The Manifesto is based on Gent’s views that:

The current state of experimental reproducibility in computer science is lamentable. The result is inevitable: experimental results enter the literature which are just wrong. I don’t mean that the results don’t generalise. I mean that an algorithm which was claimed to do something just does not do that thing: for example, if the original implementation was bugged and was in fact a different algorithm. I suspect this problem is common, and I know for certain that it has happened. Here’s an example from my own research area, discovered by my friend and tenacious pursuer of replication Patrick Prosser.

The full text of the Manifesto is available on arXiv.  Suffice it to say that Professor Gent’s concerns are well founded and extend beyond computer science to include HPC applications. 
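Point 4 of the Manifesto argues that tools and repositories can help recomputation become standard.  As a concrete illustration (my own sketch, not part of Gent's proposal, and assuming a Python-based workflow), here is the kind of environment capture such a tool might automate: recording the interpreter, the platform, installed package versions, and hashes of the input data alongside the results.

```python
# environment_manifest.py -- a minimal sketch (not from Gent's Manifesto) of the
# kind of environment capture a recomputation tool might automate for a
# Python-based workflow.
import hashlib
import json
import platform
import sys
from importlib import metadata

def file_sha256(path):
    """Hash an input file so the exact data used in a run can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(input_files):
    """Record the software environment and input data alongside the results."""
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
        "inputs": {p: file_sha256(p) for p in input_files},
    }

if __name__ == "__main__":
    # Usage: python environment_manifest.py input1.dat input2.dat > manifest.json
    json.dump(build_manifest(sys.argv[1:]), sys.stdout, indent=2, sort_keys=True)
```

A manifest like this captures only part of the picture; it does not freeze compilers, system libraries, or hardware behavior, which is why the Manifesto argues (in point 5) that virtual machines are the only way to ensure recomputability.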

Transportability (Reuse)

A group of investigators from Korea and the US have recently published a paper entitled An Evaluation of the Software System Dependency of a Global Atmospheric Model.  The abstract reads as follows (emphasis mine):

This study presents the dependency of the simulation results from a global atmospheric numerical model on machines with different hardware and software systems. The global model program (GMP) of the Global/Regional Integrated Model system (GRIMs) is tested on 10 different computer systems having different central processing unit (CPU) architectures or compilers. There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems. The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.

The full paper is behind an American Meteorological Society paywall.  Based on my interpretation of the abstract, transportability (or reuse) is a non-trivial issue for this HPC application.  My guess is that this is not an isolated case.
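The abstract attributes the divergence primarily to how different compilers, parallel libraries, and optimization levels treat rounding errors.  The underlying mechanism is easy to demonstrate: floating-point addition is not associative, so anything that changes the order of operations (a different domain decomposition, a different reduction tree, or a different optimization level) changes the low-order bits of the answer, and in a time-stepped, chaotic model those differences grow.  The toy Python sketch below (my own illustration, not taken from the paper) shows the effect on a simple sum.

```python
# rounding_order.py -- a toy illustration (not the GRIMs model) of how the
# treatment of rounding errors can make results order- and system-dependent.
import random

random.seed(42)
# Values spanning many orders of magnitude, as physical fields often do.
values = [random.uniform(-1.0, 1.0) * 10.0**random.randint(-8, 8) for _ in range(100000)]

serial_sum = sum(values)           # one summation order
shuffled = list(values)
random.shuffle(shuffled)           # a different order, as a different parallel
reordered_sum = sum(shuffled)      # decomposition or optimization level might produce

print(f"serial    : {serial_sum:.17g}")
print(f"reordered : {reordered_sum:.17g}")
print(f"difference: {abs(serial_sum - reordered_sum):.3e}")
```

On a typical system the two sums agree only to a limited number of digits; small as that discrepancy is, it is enough to seed the kind of system-dependent ensemble spread the paper reports.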

Openness

A group of nine astrophysicists recently published a paper on arXiv entitled Practices in source code sharing in astrophysics.  In it, they write (emphasis mine):

While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make the computational astronomy methods more available to other researchers who wish to apply them to their data.

So, openness clearly also remains an issue for HPC applications. 

Note further that it’s not just the codes and their related parameters that should be publicly available – but also the scientific publications reporting on them.  If you’ve been keeping track, you’ve noted that two papers mentioned in this article are behind paywalls – Ken Wilson’s seminal paper on Grand Challenges to Computational Science (24 years later!) and the recent one on the Global Atmospheric Model (despite its obvious public policy implications).  The good news is that places like arXiv exist and the other publications mentioned here are out in the open.

Consolidating HPC’s Gains

HPC has come a long way.  Our tools have improved greatly.  For example, today’s fastest machine, China’s Tianhe-2, has been clocked at just under 34 Petaflops.  So roughly speaking, HPC performance has improved by a factor of about 600,000 in the past 20 years (and 68 billion in the past 50 years).  Current plans are to have exascale computers in place by the beginning of the next decade.
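For readers who want to check the arithmetic, the factors follow directly from the figures quoted above (the CDC 6600 at about 500 Kiloflops in 1964, the CM-5 at just under 60 Gigaflops in 1993, and Tianhe-2 at just under 34 Petaflops today):

```python
# Rough performance ratios implied by the figures quoted in the article.
cdc_6600 = 500e3   # ~500 Kiloflops, 1964
cm_5     = 60e9    # ~60 Gigaflops, 1993
tianhe_2 = 34e15   # ~34 Petaflops, 2013

print(f"20-year factor: {tianhe_2 / cm_5:,.0f}")      # roughly 570,000, i.e. "about 600,000"
print(f"50-year factor: {tianhe_2 / cdc_6600:.1e}")   # 6.8e+10, i.e. "68 billion"
```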

The rapid pace of improvement in HPC tools and their increasingly broader adoption by industry puts a lot of pressure on HPC applications – and on the financial resources available to support the whole HPC enterprise.  Certainly, HPC applications have grown in scale and become more complex and inclusive of more physical phenomena.  However, arguably, most petascale applications are still done in the old “hero mode” from the early days of computational science.  Most practitioners compute at the terascale – not the petascale – and only limited resources have been made available to help them catch up before the bar is raised to exascale.

So, while we’re working toward exascale HPC tools, perhaps we should consolidate the HPC applications gains we’ve made thus far – so that we’ll be ready to embrace exascale and exploit it fully.  Even if financial resources are scarce, this should be a high priority. 

In addition to bringing more HPC applications – and people – up to the petascale level, we should address the lingering issues of repeatability, transportability, and openness discussed above.  If forced to pick one of these three to focus on, openness is probably the key.

If we publish openly and release the related source codes, repeatability and transportability should be solvable problems.  The venues for open publication already exist and are being used by some communities.  To complete this part of openness, just don’t allow your publications to be placed behind paywalls.  There is no good reason that scientific work (probably funded by public money) should be behind paywalls.  Once that bullet has been bitten, source codes must inevitably follow.
