ISC Workshop Tackles the Co-development Challenge

By John Russell

July 12, 2016

The long-percolating discussion over ‘co-development’ and how best it should be undertaken has gained new urgency in the race toward exascale computing. At a workshop held at ISC 2016 last month – Form Follows Function: Do algorithms and applications challenge or drag behind the hardware evolution? – several distinguished panelists offered varying viewpoints. Yesterday, session organizer Tobias Weinzierl posted a synopsis of the workshop discussion on arXiv.org.

Weinzierl (Durham University) and co-organizer Michael Bader (Technische Universität München) are active participants in the ExaHyPE project (An Exascale Hyperbolic PDE (partial differential equation) Engine [1], funded by the EU’s Horizon 2020 program). ExaHyPE focuses on the development of new mathematical and algorithmic approaches for exascale systems – initially for simulations in geophysics and astrophysics. During the four-year project, researchers from institutions in Germany, Italy, the United Kingdom and Russia will develop novel software for performing simulations on exascale supercomputers.
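
ExaHyPE’s target equations are hyperbolic conservation laws. As a rough illustration of the kind of computation such an engine performs, here is a minimal Python sketch that solves the simplest member of that family, the 1D linear advection equation, with a first-order upwind scheme. Every parameter here is an illustrative choice, not taken from ExaHyPE, whose actual engine is of course far more elaborate.

```python
import numpy as np

# Minimal sketch of a hyperbolic PDE solve: 1D linear advection
# u_t + a*u_x = 0 with a first-order upwind finite-volume scheme.
# All parameters are illustrative, not drawn from ExaHyPE itself.
a = 1.0                    # constant advection speed
nx, nt = 200, 100          # number of cells, number of time steps
dx = 1.0 / nx
dt = 0.5 * dx / a          # CFL number 0.5 keeps the upwind scheme stable

x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.3) ** 2)   # Gaussian pulse as initial data

for _ in range(nt):
    # For a > 0 information moves left to right, so difference against
    # the left neighbour; np.roll gives periodic boundary conditions.
    u -= a * dt / dx * (u - np.roll(u, 1))
```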

Seven European supercomputing projects were invited to the workshop to “share their views on the interplay of hardware and software evolution,” giving the workshop a distinctly European flavor. Among the speakers were Jack Dongarra, Mark Parsons (NEXTGenIO project) and Peter Messmer (Escape project).

Weinzierl wrote that technology roadmaps are dominated by predictions on hardware. “At the same time, hardware-software co-design is a frequently cited phrase. It suggests that software development can have an impact on the hardware evolution. It can actively shape. The workshop members clarified in their talks to which degree this assumption holds in the context of their projects, what the interaction of hardware and software development looks like, and whether the interplay is positive and should be fostered or manipulative and slows down scientific progress.”

He also noted pointedly, “As the workshop invited European projects, this document has a strong European flavour. This is important to keep in mind given that we discuss aspects of co-design—in a business that is dominated by US vendors. Furthermore, almost all invited projects emphasize aspects of simulation software development and integration into classic simulation workflows. We do not really discuss co-design in a co-design setting: all statements on co-design are made from a scientific computing’s software point of view. Last but not least, some statements are on purpose pointed.”

Here’s an excerpt from Weinzierl’s summary report (the report itself is brief and best read in full; link below):

Running in circles: Does co-design happen (outside co-design projects)?

  • “Any discussion on hardware-software/software-hardware influence has to start from a clarification of whether such a cycle does exist and what it looks like. The workshop opened with a presentation by Jack Dongarra, who sketched such a cycle. LINPACK [3] with its emphasis on vectors fits a particular type of machine. It was written at a time when it had been important to tackle the thorny fact that floating point operations are expensive. LAPACK [4] anticipates the advent of caches, where keeping the floating point units busy gains importance. ScaLAPACK’s [5] design was kicked off by multi-node machines with MPI, while the dusk of BSP triggers the development of Magma [6] and Plasma [7]. The latter are the subject of study in the NLAFET project [8]. Mark Parsons gave another example as he outlined how the availability of 3D XPoint non-volatile memory [9] laid the foundations of the NEXTGenIO project [10], studying how to use additional memory layers between main memory and hard disk.”
  • “While it is easy to follow how hardware development triggers new algorithmic work—our own ExaHyPE [1] project hypothesising that hardware will suffer from severe performance fluctuations is an example of this, too—Jack pointed out that the (Top 500) benchmarks in turn grew into a directing role for the hardware evolution, as they make vendors tune their machines towards these benchmarks; though this has never been the intention behind them in the first place, as he emphasised. Other examples of the influence in the reverse direction are the increasing IO demands of today’s software, as sketched before, or GPGPU modifications, as Peter Messmer illustrated with the Escape project [11]: atomics and double precision would not have made it into GPUs that fast if there had not been a demand for these features from the scientific computing side. After all, machines are procured because of scientific software needs. So while we see software written from scratch around every ten years because of transformative hardware developments, in-between software continuously influences the hardware evolution; mainly by acting as benchmarks or by escalating bottlenecks.”
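
The library lineage in Dongarra’s cycle maps onto a concrete algorithmic shift: LINPACK-era code streams through vectors (Level-1 BLAS), while LAPACK reorganizes the same factorizations around cache-sized blocks (Level-3 BLAS). Here is a minimal Python sketch of that blocking idea; the block size, matrix size and pure-Python loop nest are illustrative stand-ins (production libraries do this in compiled kernels), but the tiling structure is the point.

```python
import numpy as np

def blocked_matmul(A, B, b=64):
    """Tile-by-tile matrix multiply; assumes n is divisible by b."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # Each b-by-b tile is small enough to sit in cache and is
                # reused across the whole inner product -- the data reuse
                # that streaming BLAS-1 vector sweeps cannot exploit.
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

n = 256
A, B = np.random.rand(n, n), np.random.rand(n, n)
assert np.allclose(blocked_matmul(A, B), A @ B)
```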

Weinzierl wrote, “Most workshop participants were skeptical whether the cycle of influence is a good one the way we experience it right now: It orbits around weaknesses and demands. It is backward looking. Mark articulated that he is worried that the evolution does not even take the well-known Amdahl numbers into account [13]: ‘I believe strongly in co-design but it happens extremely rarely.’”
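
The “Amdahl numbers” Parsons invokes are rules of thumb about machine balance, roughly how much memory or IO bandwidth a system should provide per unit of compute [13]. A back-of-envelope, roofline-style sketch of such a balance check follows; every figure in it is an assumed round number, not a measurement of any real system.

```python
# Back-of-envelope machine balance ("Amdahl numbers"), with hypothetical
# round figures rather than any real system's specification.
peak_flops = 1.0e15        # assumed peak compute: 1 Pflop/s
mem_bandwidth = 1.0e12     # assumed memory bandwidth: 1 TB/s

balance = mem_bandwidth / peak_flops      # bytes of traffic per flop
print(f"machine balance: {balance:.3f} bytes/flop")

# Roofline-style bound: a kernel needing one flop per byte of traffic
# (arithmetic intensity 1) can use only a fraction of the peak here.
intensity = 1.0                           # flops per byte, assumed kernel
attainable = min(peak_flops, intensity * mem_bandwidth)
print(f"attainable: {attainable / 1e12:.0f} Tflop/s "
      f"of {peak_flops / 1e12:.0f} Tflop/s peak")
```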

As noted earlier, Weinzierl’s summary report is short and best read in full. Here’s a link to the report: http://arxiv.org/pdf/1607.02835v1.pdf

References

[1] www.exahype.eu
[2] www.exascale.org/mediawiki/images/2/20/IESP-roadmap.pdf
[3] www.netlib.org/linpack
[4] www.netlib.org/lapack
[5] www.netlib.org/scalapack
[6] icl.cs.utk.edu/magma
[7] icl.cs.utk.edu/plasma
[8] www.nlafet.eu
[9] www.micron.com/about/emerging-technologies/3d-xpoint-technology
[10] www.nextgenio.eu
[11] www.hpc-escape.eu
[12] www.isc-hpc.com
[13] www.microsoft.com/en-us/research/publication/rules-of-thumb-in-data-engineering
[14] exaflow-project.eu
