Code Modernization: Bringing Codes Into the Parallel Age

By Doug Black

June 8, 2017

The ways that advanced computing performance depends on more – much more – than the processor take many forms. Regardless of the validity of Moore’s Law, it’s indisputable that the rest of the computing ecosystem must keep pace with processor development if the system is to deliver the results everyone’s after.

One of those aspects is application code, some of which dates back to the late 1950s, when parallel computing was a futuristic computer science vision. Still in use today, those codes have been goosed, tickled and jolted for better performance, but they remain at their core what they’ve always been: serial applications.

We recently caught up with Joe Curley, senior director of Intel’s code modernization organization, who shared observations about Intel’s effort to optimize, or parallelize, widely used public codes for the latest generations of highly parallel x86 CPUs.

This includes work around applications used by manufacturers in product design, such as OpenFOAM for computational fluid dynamics (CFD); advanced MRI diagnostics programs used in the medical industry; seismic codes for the oil and gas industry; and applications used by banks and other financial services organizations.

Obviously, it’s in Intel’s self-interest to extend the life of the 40-year-old x86 architecture by maintaining an up-to-date code library. But organizations all over the world are hampered by the old code that, unoptimized, drags down the throughput of high performance clusters and impedes the work they do.

Intel’s Joe Curley

A recent development in code modernization, Curley said, has been the incorporation of AI and machine learning techniques, which – when done right – can boost performance well beyond what conventional, processor-focused code modernization work achieves.

Much of Intel’s code modernization work comes out of its global network of Intel Parallel Computing Centers (IPCCs). Begun four years ago with six centers, the program has expanded to 72 and has worked on 120 codes in more than 21 domains.

The following are excerpts from our interview with Curley, some of which have been re-ordered for clarity.

Definition and Need

Code modernization can mean many things, from using a modern language to optimizing performance. We use code modernization in the literal sense: to become modern, using the newest information methods with technology.

The typical impact of a code modernization project is giving someone the ability to take on a problem that was just too big to get at before. We’re trying to extract the maximum performance from an application and take full advantage of modern hardware. Other words have been used: optimization, parallelization and some others. But you can be parallel without being optimal, and optimal without being parallel. So we chose a slightly different term. It’s imperfect, but it gets the idea across.

Modern, general-purpose server processors have 18-22 processing cores, each with two threads and a vector unit built in. They’re massively parallel processors. But by and large the applications we run on them have been derived from code that was generated in a sequential processing era. The fundamental problem that we work with is that many of the codes used in industry or in the enterprise today are derived from algorithms written anywhere from the 1950s to the 2000s. And the microprocessors used at the time were primarily single-core machines, so you have a very serial application.

In order to use a modern processor you could just take that serial application, create many copies of it and try to run it in parallel. And that’s been done for years. But the real power performance breakthroughs happen when someone steps back and asks: How can I start using all of these cores together computationally and in parallel?
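That distinction – many independent serial copies versus cores cooperating on one problem – can be sketched in a few lines. This is a hypothetical illustration, not code from Intel’s program: the function names and chunking scheme are invented for the example, which splits a single reduction across worker processes so every core contributes to the same result.

```python
# Minimal sketch: cores cooperating on one problem instead of
# each running an unrelated serial copy of the whole workload.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    # Each worker computes one chunk of the same global reduction.
    return sum(i * i for i in range(lo, hi))

def serial_sum_squares(n):
    # The original "serial application": one core does everything.
    return sum(i * i for i in range(n))

def parallel_sum_squares(n, workers=4):
    # Split [0, n) into one contiguous chunk per worker.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    assert parallel_sum_squares(n) == serial_sum_squares(n)
```

The same idea scales from a toy reduction to a CFD solver: the work is decomposed so the cores communicate partial results, rather than duplicating a serial job per core.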

What’s encompassed by code modernization

Our group does everything from training and academic engagement to building sample codes and working with ISVs and communities, both internally and externally. We focus efforts on open source communities and open source codes. The reason is that we’re not only trying to improve science, we’re also trying to improve the understanding of how to program in parallel and how to solve problems, so having the teaching example, or the example that a developer can look at, that’s incredibly important.

We’ve taken the output from the IPCCs, we’ve written it down, we’ve created case studies, we’ve created open source examples, graphs, charts – teaching examples – and then put it out through a series of textbooks. But importantly, all of the (output) can be used either by a software developer or an academic to teach people the state of the art.

For the IPCCs, the idea was to find really good problems that would most benefit from using the modern machine if only we could unlock the performance of the code. Our work ranges from practical academics to communities that generate community codes. In some cases they’re industrial and academic partnerships; some are in the oil and gas industry, working on refinement of core codes that will then go back in for use in seismic imaging. The idea is for these to be real hands-on workshops between domain scientists, computer scientists, and Intel that have actual practical use within the life of our products.

So not only are we getting the first-order benefit if, say, an auto manufacturer was using OpenFOAM and got a result faster. That’s great, we’ve made it more efficient. But we’re also creating a pool of programmers and developers who’ll be building code for the next 20 years, making them more efficient as well.

Example: Medical/Life Sciences

One of our IPCCs was with Princeton University, where researchers were trying to get a better understanding of what was happening inside the human brain while a patient was in the medical imaging apparatus. It’s a form of MRI called fMRI. The science on that is pretty well established. They knew how to take the data that was coming from the MRI, and they could compute on it and create a model of what’s going on inside the brain. But in 2012, when we started the project, they estimated the calculation would take 44 years on their cluster. It wasn’t a practical problem to solve.

So instead of the serial method they had been using, they could start computing in parallel on more energy-efficient, modern equipment. They came up with a couple of things. One: they parallelized their code and saw huge increases in performance. But they also looked at it algorithmically; they began to look at the practicality of machine learning and AI, and how you could use that for science. Since these researchers happened to be from neural medicine centers, they understood how the brain works. They were trying to apply the same kind of cognition, or inference, that you have inside your brain algorithmically to the data coming from the medical imaging instrument.

They changed the algorithm, they parallelized their code, they put it all together and ended up with a 10,000X increase in performance. More practically, they were able to take something that would have taken 44 years down to a couple of minutes. They went from something requiring a supercomputing project at a national lab to something that could be done clinically inside a hospital.

That really captures what you can try to do inside a code modernization project. If you can challenge your algorithms, you can look at the best ways to compute, you can look at the parallelization, you can look at energy efficiency, and you can achieve massive increases in performance.

So now, how that hospital treats the neurology of the brain is different because of the advances offered by code modernization. Of course the application of that goes out into the medical community, and you can start looking at fMRI in more clinical environments.

Example: Industrial Design

One of the community applications, OpenFOAM, is used heavily in automobile manufacturing. We’ve worked with a number of fellow researchers to deliver breakthroughs in power and performance of 2 or 3x, which, across an application of the size and magnitude of OpenFOAM, is really substantial.

It also creates a lighthouse example for commercial ISVs of what can be done. This clearly showed that for computational fluid dynamics at scale, entirely new methods can be applied to the problem. We’ve had a lot of interest and pick-up from commercial ISVs on some of the work being done using some of the community codes.

Here’s the thing we want to get at: What’s the real value in computing a model faster? Most people tend to think of code modernization simply as making a simulation run faster. But one of the things we’ve done is develop software that can help you better visualize your physical design.

Audi, for example, has worked with Autodesk as an ISV partner; they’ve developed a modern ray tracer (rendering engine), an example of the things we work on inside our code modernization group. We have another group that works on visualization and how to take your images and make them look lifelike. Autodesk has come up with clever ways of doing that and building that into their product line, allowing Audi to remove physical prototypes, both for assembly as well as for interior and exterior design, from their process.

Think of someone building a clay model of a car and taking it to a wind tunnel, or building a fit-and-finish model of a car, to see how the interior design will look and to see if it’s pleasing to the customer. They’ve removed all that modeling. It’s all being done digitally, not only the digital design and simulation but also the digital prototyping, and then visualizing it through modern software on a departmental-sized computer.

The impact of that, according to Audi when they spoke at ISC, is that it removed seven months from their process for the fit-and-finish prototypes and six for the physical prototypes. If you can shave that much time out of your process you can gain major competitive advantage from HPC.

It’s all made possible by new highly parallel codes and, interestingly, all the visualization is done entirely on general-purpose CPUs.

Example: Financial Services

For financial services companies, code modernization offers the opportunity to use the same cluster that you’d use for the rest of your bank’s operations for the most high-performance tasks. Whether it’s options valuation or risk management or some of the other tasks you use HPC for, we can do that on general-purpose Xeon CPUs.
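As a hedged illustration of the kind of embarrassingly parallel workload meant here – not STAC-A2 itself, and not any bank’s code – consider a toy Monte Carlo valuation of a European call option split across worker processes. All names and parameters below are invented for the sketch.

```python
# Toy Monte Carlo pricing of a European call option, parallelized
# across CPU cores. Illustrative only; real risk engines are far
# more elaborate.
import math
import random
from multiprocessing import Pool

def mc_chunk(args):
    """One worker's share of the simulation: sum of payoffs."""
    n_paths, s0, k, r, sigma, t, seed = args
    rng = random.Random(seed)  # independent stream per worker
    payoff = 0.0
    for _ in range(n_paths):
        # Terminal price under geometric Brownian motion.
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t
                           + sigma * math.sqrt(t) * z)
        payoff += max(st - k, 0.0)
    return payoff

def price_call(n_paths=200_000, workers=4,
               s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0):
    per = n_paths // workers
    jobs = [(per, s0, k, r, sigma, t, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(mc_chunk, jobs))
    # Discount the average payoff back to today.
    return math.exp(-r * t) * total / (per * workers)

if __name__ == "__main__":
    print(round(price_call(), 2))  # near the Black-Scholes value
```

Because each path is independent, the paths divide cleanly among cores, which is why this class of workload maps so well onto general-purpose multicore CPUs.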

In banking, one of the problems is that most of those codes are the crown jewels of the banks. So we can’t talk about them. In many cases we don’t even see them. But we can work on the STAC-A2 benchmark, built by a consortium of banks: a suite of benchmarks for a variety of problems that operate sufficiently like what the banks do to give an idea of how fast they can run their software, and the STAC-A2 results get published.

On both our general-purpose Xeon and Xeon Phi CPUs through code modernization we’ve set world records for the STAC-A2 repeatedly. It’s an arms race. But we’ve done it multiple times with general purpose code.

That allows the bank to take that code as an exemplar, and apply it to their own special algorithms and their own financial science, and get the most performance out of their general-purpose infrastructure.
