Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

May 8, 2014

Engineering Codes to Meet the Exascale Era

Nicole Hemsoth

As we look to future applications for exascale-class computers, the grand science challenges are often first on the list. From modeling the climate to probing the underpinnings of the universe, the problems most often associated with exascale computing are epic in scope. However, the complex, multidisciplinary engineering problems that can be solved with multiphysics codes are also countless, although this is one area where scalability has had limitations.

We recently reported on progress toward scaling engineering codes when we spoke with a researcher whose team was able to scale the commercial explicit finite element code, LS-DYNA, to 15,000 cores on Blue Waters. That was a remarkable achievement for a commercial code, although as one might imagine, most companies deploying LS-DYNA are unlikely to have that many cores in-house. One side effect of the scalability effort is that efficiencies proven in highly parallelized code at the 15,000-core level will mean big things as more compute finds its way into everyday systems; those efficiencies translate just as well for a company making the first leap from 100 cores to 1,000, for instance.

The problem with commercial engineering codes, especially those that combine more than one simulation element, as complex multiphysics codes do, is that many began their lives a couple of decades ago (or more) as sequential code that has since been hammered at length to run efficiently in parallel. This is according to Mariano Vazquez of the Barcelona Supercomputing Center, one of the two architects of a multiphysics code developed at BSC called Alya, which just scaled to 100,000 cores on Blue Waters, a rather groundbreaking achievement.

The Alya multiphysics code was built from the ground up to run efficiently in parallel, solve many different problems, and remain easy to program. Vazquez explained that scalability and parallel efficiency on massive machines come down to having code built with this purpose in mind. He contrasts this with commercial engineering codes, pointing to Ansys as a good example, where the codes were sequential to begin with and were then extended as other codes were acquired for different physics problems, creating the need for major investments in melding and parallelizing them. This limits the codes' ability to scale to massive core counts, although he notes that this is not a major barrier for Ansys or other engineering simulation vendors, since most shops don't even run on 1,000 cores, let alone 100,000.

“Our goal is not to compete with these companies,” Vazquez explained, “we’re a supercomputing center and we are solving different, more complex problems…those that there are not even physical models for or that are so involved that a commercial code couldn’t work with.” Further, he adds that the real value of what they’ve demonstrated with their scaling feat is that if it can run efficiently in parallel on so many cores on Blue Waters, it will run very efficiently on a typical smaller cluster, as one might find at universities or commercial companies.
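The claim that efficiency proven at high core counts carries over to smaller clusters can be made concrete with the standard strong-scaling metrics. A minimal sketch in Python; the function names and all timing numbers here are invented for illustration and are not measurements from Alya or LS-DYNA:

```python
def speedup(t_ref, t_p):
    """Speedup of a run taking t_p seconds relative to a reference run taking t_ref seconds."""
    return t_ref / t_p

def parallel_efficiency(t_ref, p_ref, t_p, p):
    """Fraction of ideal strong scaling retained when going from p_ref to p cores."""
    return (t_ref / t_p) * (p_ref / p)

# Hypothetical wall-clock times (seconds) for the same fixed-size problem.
timings = {1_000: 100.0, 10_000: 11.0, 100_000: 1.4}
p_ref, t_ref = 1_000, timings[1_000]

for p, t in sorted(timings.items()):
    eff = parallel_efficiency(t_ref, p_ref, t, p)
    print(f"{p:>7} cores: speedup {t_ref / t:6.1f}x, efficiency {eff:5.1%}")
```

A code that holds, say, 70 percent efficiency across two orders of magnitude in core count is, by the same arithmetic, comfortably efficient for a site scaling from 100 to 1,000 cores.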

Another side benefit, of course, is that it shows that engineering codes for mechanical problems have a definite place on the exascale application roadmap. Vazquez and fellow researchers were able to demonstrate success with multiphysics problems in incompressible fluid mechanics, combustion and thermal flow, solid mechanics, and other commonly used subsets of engineering simulation, running with meshes of billions of elements.

We’ll stay tuned for further developments as the BSC team refines its approach to bringing engineering codes into the pre-exascale era and beyond. For now, they’ll be focusing on key areas that represent scalability hurdles, including post-processing, scalability for a new class of even larger problems, and the use of accelerators (they’ve been testing GPUs and Xeon Phi). The team plans to report on its progress in these areas in the near future.
