Avoiding Application Porting Pitfalls

By Tim Leite

September 28, 2007

Historically, compute clusters emerged as a less expensive, more practical way to harness high performance computing power using systems that were already available in-house. In highly technical settings, such as academia and national laboratories, many researchers and IT managers could not afford to purchase supercomputers, so they networked systems together to creatively solve complex computational problems.

The original Beowulf cluster, built at NASA in the mid-1990s, is the epitome of this paradigm shift. Over the last decade, with the evolution of programming standards, refinements in packaging, and improvements in interconnect technology, compute clusters have become increasingly attractive to commercial companies. Commercial organizations are choosing cluster environments not just for financial reasons but also for their computational scalability. When companies need more power and reach, they simply add another server to their cluster. As a result, clusters have become more appealing to certain high-growth commercial sectors, such as financial services, as a viable alternative for high performance computing.

While many companies are interested in taking advantage of clusters, many have not yet made the leap. One of the primary reasons is that these organizations have legacy applications that run well on a traditional server, and the cost of migrating those applications to a cluster is judged too high. To further complicate the situation, the application may be a mission-critical asset, and attempting to migrate it to a new system is considered too risky.

Many programs that continue to run on VMS-based platforms fall into this category: the system is reliable and the application executes properly. So even though many consider the technology outdated, organizations relying on VMS-based programs often have no short-term migration plan.

However, there are factors driving change. From a performance perspective, compute clusters are becoming more powerful, so organizations are sacrificing performance by staying with outdated platforms. From a personnel perspective, keeping applications on older platforms is becoming riskier, as there are fewer and fewer trained experts in these older technology areas.

When applications are initially developed, there are steps that can be taken to make a future migration less painful and risky. For existing applications that require migration or porting, such as ones being moved to a cluster environment, there are a number of potential porting issues to address along the way. The following provides a brief overview of those issues and how companies can head them off before a port begins.

Preserving Computational Accuracy While Porting Proprietary Applications

While using standard software solutions in a compute cluster helps companies ensure application compatibility with minimum conversion issues, the truth is that many companies have a number of custom, proprietary applications that need to be ported. When porting proprietary applications, the real challenge is to ensure that computational accuracy stays intact when the process is complete.

One of the most reliable sources of computational integrity is commercial numerical libraries. Commercial libraries utilize the numerical representation of the architecture for computational consistency. For example, the convergence criteria for a nonlinear least squares optimization algorithm may be based on a system-specific parameter, such as the largest relative floating-point spacing, rather than on a hard-coded value. The hard-coded value may work fine on the original development system, but when that algorithm is ported to a system with a different floating-point representation, there is a high likelihood that it will not perform as expected. Relying on the commercial version of the algorithm avoids these potential problems and significantly reduces the amount of debugging time needed when porting applications to a new environment.
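To make this concrete, below is a minimal sketch in C, not drawn from any particular library, contrasting a hard-coded stopping tolerance with one tied to the host machine's floating-point spacing via the standard DBL_EPSILON constant:

    /* Minimal sketch: a convergence test scaled by the machine's
     * relative floating-point spacing (DBL_EPSILON) instead of a
     * hard-coded tolerance. The Newton iteration is a stand-in for
     * any iterative numerical algorithm. */
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x_old = 0.0;
        double x_new = 2.0;

        /* Portable: scale the stopping tolerance by the spacing of
         * the host architecture. A hard-coded value such as 1.0e-20
         * could never be satisfied in double precision, and the loop
         * would never terminate. */
        const double tol = 100.0 * DBL_EPSILON;

        while (fabs(x_new - x_old) > tol * fabs(x_new)) {
            x_old = x_new;
            x_new = 0.5 * (x_old + 3.0 / x_old);  /* Newton step for sqrt(3) */
        }
        printf("converged to %.17g\n", x_new);
        return 0;
    }

On a system with coarser floating-point spacing, a tolerance hard-coded below that spacing can never be met; scaling by the machine parameter sidesteps the problem entirely.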

If proprietary applications are already developed, companies can retrofit them with commercial libraries before porting to a new platform. If the proprietary application is “home-grown,” an organization may consider substituting algorithms from a commercial library for algorithms that were developed in-house or obtained as open source. There are a variety of reasons why the home-grown application may not perform reliably on a new platform. The algorithm from a commercial library is designed to execute consistently across all supported platforms.

Optimizing Performance and Scalability without Sacrificing Portability

Hardware vendors offer companies more alternatives than ever before for setting up cluster environments. Besides the hardware, they also recommend applicable software and services. The hardware vendors have spent considerable effort to help customers optimize applications on their particular platforms, and these optimization efforts are themselves a result of the evolution of high performance computing.

In its early days, the mechanisms that made high performance computing work were in the public domain. A good example is the Message Passing Interface (MPI), for many years the de facto standard for communication between processes in a compute cluster environment. As MPI has evolved, the major hardware vendors with compute cluster offerings have developed optimized message passing implementations for their own platforms.
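For reference, the sketch below shows the sort of minimal, standards-conforming MPI code this portability rests on; it is illustrative rather than drawn from any vendor, and compiles unchanged (for example, with mpicc) against either a public-domain or a vendor-optimized MPI implementation:

    /* Minimal sketch of standard MPI point-to-point communication.
     * Only the standard MPI interface is used, which is what makes
     * the code portable across MPI implementations. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, dest;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            double work = 42.0;
            /* Send one value to each worker process. */
            for (dest = 1; dest < size; dest++)
                MPI_Send(&work, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
        } else {
            double work;
            MPI_Recv(&work, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank %d received %g\n", rank, work);
        }

        MPI_Finalize();
        return 0;
    }

Because the program touches only the standard interface, switching to a vendor's optimized MPI is a recompile and relink rather than a rewrite.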

While these vendor-optimized mechanisms are recommended for high performance applications, using them adds complexity. When porting applications between hardware platforms, home-grown algorithms need to be thoroughly tested to ensure they perform reliably. Again, using algorithms from a commercial library designed to execute consistently across multiple supported platforms can reduce porting risks.

Porting Proprietary Applications in Different Languages

Porting can be problematic if an organization has developed applications in a variety of languages, such as Fortran, C, or Java. Porting programs of this nature to a new environment introduces more obstacles than porting a single-language program. Generally, an organization with programs in multiple languages intends to standardize on one particular language and convert as many of the programs as possible to it.

However, dealing with multiple porting components, a platform migration, a performance impact, and a language migration can quickly become very complex. As stated above, utilizing commercial libraries can ease this transition significantly. Some commercial libraries offer the same computational algorithms in multiple languages.

For example, if a company's Fortran application uses an interpolation function from such a library and the company chooses to migrate to C, it can reference the C version of that function without compromising the accuracy of the calculation. When factoring a cluster environment into this scenario, application performance also comes into question: will the language-converted application perform well on the new system?

Again, third-party technology can help in this area. If the application was written in-house or uses embedded public domain software, all of that code would need to be rewritten in the new language and optimized for performance. Relying on optimized versions of the software from a vendor reduces the amount of recoding, and leveraging the performance of the new system may be automatic.
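To make the interpolation scenario above concrete, here is a C sketch. The routine name lib_interp is invented for illustration, with a simple linear interpolation standing in for the vendor's algorithm so the example runs on its own; a commercial library would supply matching Fortran and C entry points to the same underlying code:

    /* Hypothetical sketch: "lib_interp" stands in for a commercial
     * library's interpolation routine. The name is invented, and a
     * simple linear interpolation serves as the body so the example
     * is self-contained. The point is that matching Fortran and C
     * entry points yield the same numbers after a language migration. */
    #include <stdio.h>

    static double lib_interp(int n, const double x[], const double y[],
                             double xi)
    {
        int i;
        for (i = 0; i < n - 1; i++) {
            if (xi >= x[i] && xi <= x[i + 1]) {
                double t = (xi - x[i]) / (x[i + 1] - x[i]);
                return y[i] + t * (y[i + 1] - y[i]);
            }
        }
        return y[n - 1];  /* xi beyond the table: clamp to last point */
    }

    int main(void)
    {
        const double x[] = {0.0, 1.0, 2.0, 3.0};
        const double y[] = {0.0, 0.8, 0.9, 0.1};

        /* A Fortran caller would invoke the library's Fortran entry
         * point with the same table and get the same result. */
        printf("value at 1.5: %g\n", lib_interp(4, x, y, 1.5));
        return 0;
    }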

Libraries Are Gaining More Power in Parallel Processing

As compute clusters become more prevalent and powerful, computational libraries continue to evolve to assist the developer in leveraging the cluster technology. Building parallel processing-enabled applications can be difficult, and commercial libraries can help programmers avoid some of the issues associated with optimizing code for a cluster. In fact, some libraries have introduced techniques that assist not only the sophisticated programmer but also the novice distributed computing developer. Examples of such features include:

  • Functions to initialize the MPI environment and perform computations with minimal exposure to the intricacies of MPI. (In general, the maturing of the MPI component of cluster-based solutions has resulted in fewer porting issues for developers.)
  • Functions to simplify the movement and formatting of data for use in an MPI environment.
  • Error checking techniques that not only provide descriptive error messages but also track the location of the error in a parallel environment (a minimal sketch of this idea follows the list).
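The sketch below, which is not any particular vendor's API, shows the flavor of such error checking: a small wrapper that turns a bare MPI error code into a descriptive message tagged with the rank on which the failure occurred:

    /* Minimal sketch of parallel-aware error checking: a wrapper
     * that reports a descriptive message plus the failing rank,
     * the kind of convenience a commercial library layers on MPI. */
    #include <mpi.h>
    #include <stdio.h>

    static void mpi_check(int code, const char *where)
    {
        if (code != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len, rank;
            MPI_Error_string(code, msg, &len);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            fprintf(stderr, "rank %d: %s failed: %s\n", rank, where, msg);
            MPI_Abort(MPI_COMM_WORLD, code);
        }
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        /* Errors must be returned, not fatal, for the check to run. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        double local = 1.0, total = 0.0;
        mpi_check(MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                                MPI_COMM_WORLD), "MPI_Allreduce");

        MPI_Finalize();
        return 0;
    }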

In addition to the capabilities and techniques described above, another benefit of commercial libraries is simply risk reduction. The commercial library vendor will grow the capabilities of their library while continuing to address the computational accuracy, portability and language issues discussed earlier in this article.

Considerations for Clustering

To recap, before organizations take advantage of a compute cluster, they need to consider what kinds of proprietary applications they already have, and what level of developer expertise is available to properly recompile, test, debug, and possibly convert those applications to a new language before porting them to cluster systems.

Companies should also choose native libraries that operate across a range of computing environments, avoiding the pitfalls of porting while still taking advantage of all the benefits of cluster systems. Knowing these pitfalls and complexities of porting to cluster systems will, in the long run, help companies save time and money.

-----

About the Author

Tim Leite is the Director of Corporate Development and Educational Programs for Visual Numerics, Inc. Tim is responsible for many of the product-related corporate partnerships. In his education role, he is responsible for establishing partnerships with academic institutions and facilitating the computational requirements of researchers and instructors within the academic community.

Tim has been with Visual Numerics for 21 years in various roles. He started as a mathematical programmer working with algorithms in the areas of linear algebra, transforms, nonlinear systems of equations, and numerical optimization. He was also responsible for optimizing algorithm performance for high performance computing systems. Other roles at Visual Numerics included Technical Support Manager, Product Manager, and Software Development Director.
