Compilers and More: Industrial Strength Interprocedural Analysis

By Michael Wolfe

March 16, 2007

Standard compiler optimization is no longer sufficient for competitive high performance computing. Here we discuss interprocedural analysis (IPA), also known as whole program analysis: its costs and benefits, and how it affects programmers.

Performance-sensitive programmers are accustomed to building their applications with compiler optimizations enabled. In the past, this might have been as simple as setting the -O option on the command line. Decades of research and experience in compiler analysis and code improvement have produced mature, reliable techniques, the vast majority of which focus on optimizing a single procedure at a time, using redundancy elimination, loop restructuring, register allocation, instruction scheduling, and so on. But that is no longer enough.

Current highly optimizing compilers all use some form of interprocedural or whole program analysis for best performance. At compile time, the compiler summarizes each procedure in the program; when all procedures are available, the compiler invokes an interprocedural analysis module to collect all the procedure summaries and propagate information from caller to callee and back. While this seems to forfeit the advantages of separate compilation, the interprocedural step runs at link time, when the whole program first becomes visible. The procedures are then reoptimized using the new interprocedural information. Early implementations used programming environments or special build programs to manage the procedure summaries, which made it hard to migrate from traditional tools such as make. Current methods are almost invisible, except for the extra time spent at the link step to generate better code using the extra information.

The importance of interprocedural analysis is evident from the SPEC CPU results page (http://www.spec.org/); the base flags for the various compilers all include IPA:

    IBM -O5 (implies -qipa)
    Intel -fast (implies -ipo)
    Pathscale -Ofast (implies -ipa)
    PGI -fast -Mipa=fast,inline
    SGI -Ofast=ip35 (implies -IPA)
    Sun -fast -xcrossfile

We ran the SPEC CPU2000 test suite using the PGI compiler with and without IPA. Speedups on individual benchmarks ranged as high as 130 percent, with a 7 percent improvement in the overall geometric mean, demonstrating that IPA is broadly useful and critical to the performance of some applications.

One of the most useful and common benefits of IPA is automatic inlining of procedures, even across source files. Since the compiler has the whole program at link time, it can take a procedure from one object and inline it at a call site in another procedure. This eliminates the overhead of the procedure call and allows the inlined code to be better optimized, since the calling context is explicit.
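
As a minimal sketch (file and function names here are hypothetical), consider a small function defined in one source file and called from another; without IPA the compiler of the caller sees only a declaration and must emit a call, while IPA at link time can inline the body across the file boundary:

    /* norm.c -- compiled separately into norm.o */
    double norm2(double x, double y) {
        return x * x + y * y;
    }

    /* main.c -- the compiler sees only this declaration here; with
       IPA at link time, the body of norm2 from norm.o can be inlined
       at the call site below, removing the call overhead and exposing
       the arithmetic to the loop optimizer. */
    extern double norm2(double x, double y);

    double sum_norms(const double *x, const double *y, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += norm2(x[i], y[i]);  /* candidate for cross-file inlining */
        return s;
    }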

This can also be used to inline or generate special code for calls to system or math libraries. Until link time, it isn’t always known what library a particular procedure will come from. Once it is known that fmax is resolved from libm.a, for instance, the compiler can replace the procedure call by fast inline code.
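
For instance (a sketch; the actual replacement is entirely up to the compiler), once the linker resolves fmax from libm, a call like the one below can become a single maxsd instruction on x86-64 rather than a library call:

    #include <math.h>

    /* Until link time the compiler cannot be sure which library
       provides fmax; once it is known to resolve from libm.a, the
       call can be replaced by fast inline code. */
    double clamp_floor(double x, double lo) {
        return fmax(x, lo);
    }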

A less common technique is to create two or more versions of a procedure, each optimized for a particular calling context. For instance, IPA may generate one version, or clone, optimized for the case when two C pointer arguments are known to be distinct, allowing more vectorization, and another version for the more general case. The compiler can then redirect calls to the optimized clone where the calling context permits.
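
A hand-written illustration of what such a clone amounts to (the compiler generates these internally; the 'restrict' qualifiers below stand in for what IPA proves about the call site):

    /* General version: a and b may alias, so the compiler must be
       conservative about vectorizing the loop. */
    void scale_add(float *a, const float *b, float s, int n) {
        for (int i = 0; i < n; ++i)
            a[i] += s * b[i];
    }

    /* Roughly what the IPA-generated clone is equivalent to: when
       analysis proves the two pointers are distinct at a call site,
       the clone behaves as if both were restrict-qualified, and the
       loop vectorizes freely. */
    void scale_add_noalias(float *restrict a, const float *restrict b,
                           float s, int n) {
        for (int i = 0; i < n; ++i)
            a[i] += s * b[i];
    }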

IPA can help optimize around procedure calls even when the call isn’t inlined. IPA may be able to determine that the function is “pure,” meaning that it does no I/O and doesn’t read or write global variables. Code around calls to such functions can be moved above or below the call, since the call won’t interfere with any other code in the caller. This gives the compiler more freedom when scheduling instructions or allocating registers.
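
A small example of the freedom this buys (a sketch; GCC's __attribute__((const)) is the closest programmer-visible assertion of this property, though IPA can often deduce it with no annotation at all):

    /* A pure function in the sense above: no I/O, no reads or
       writes of global variables. */
    __attribute__((const))
    static double weight(double x) {
        return 0.5 * x * x;
    }

    double apply(const double *v, int n, double x) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += v[i] * weight(x);  /* because weight is pure and its
                                       argument is loop-invariant, the
                                       call can be hoisted out of the
                                       loop and executed once */
        return s;
    }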

Another very common and simple benefit of IPA is recognizing when a procedure argument always has the same value, and replacing the argument by that value to optimize the procedure. The value may be a constant integer, used as a loop limit or in a conditional expression, allowing more aggressive loop optimization or removal of the condition. For C pointer arguments, the value may be an array; using the array directly allows more precise alias analysis, with many of the same benefits as using the C99 ‘restrict’ qualifier. If the constant value is propagated into the procedure, the caller doesn’t even need to pass that argument, making the procedure call slightly less expensive as well. Even when a procedure argument is not a single constant, it can be useful for the compiler to know that the value of an argument lies in a certain range, or that a pointer only aliases with a limited number of user arrays or variables.
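
For example (a sketch with hypothetical names), suppose IPA observes that every call site passes n == 4:

    /* General version, as written by the programmer. */
    void smooth(double *a, int n) {
        for (int i = 0; i < n; ++i)
            a[i] *= 0.25;
    }

    /* Roughly what the specialized version amounts to after the
       constant is propagated: the loop limit is known at compile
       time, the loop can be fully unrolled, and callers no longer
       need to pass the argument at all. */
    void smooth_4(double *a) {
        a[0] *= 0.25;
        a[1] *= 0.25;
        a[2] *= 0.25;
        a[3] *= 0.25;
    }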

For array arguments (or C pointers to arrays), it is useful on today’s machines to know the alignment of the argument. For instance, with the packed (SSE) instructions on the x86 and x86-64 architectures, aligned loads can be used only if the data is known to be 16-byte aligned. Knowing the argument alignment allows better code generation for vectorized loops. For dynamically allocated arrays, this means knowing the return alignment of the memory allocation routines.
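
A sketch of the situation (using the C11 aligned_alloc routine for concreteness; any allocator with a known return alignment serves the same purpose):

    #include <stdlib.h>

    /* If IPA can prove that every array passed to axpy came from a
       16-byte-aligned allocation, the vectorizer can use aligned
       packed loads and stores with no runtime alignment check. */
    void axpy(float *y, const float *x, float a, int n) {
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    int main(void) {
        int n = 1024;
        /* aligned_alloc guarantees the requested alignment; the size
           must be a multiple of the alignment. */
        float *x = aligned_alloc(16, n * sizeof(float));
        float *y = aligned_alloc(16, n * sizeof(float));
        if (!x || !y) return 1;
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        axpy(y, x, 0.5f, n);
        free(x); free(y);
        return 0;
    }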

Modern Fortran includes multidimensional assumed-shape array arguments, which require so-called “dope vectors” to describe the bounds and strides for each dimension. In the general case, the compiler must read these dope vectors for each dimension for each access to the array. IPA can be used to propagate array shapes, replacing dope vector accesses by constant array bounds. This eliminates the dope vector memory accesses, and allows more constant folding at compile time.
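
A hypothetical C rendering of such a dope vector (actual layouts are compiler-specific; this only shows what the general case must read on each access):

    /* One descriptor entry per array dimension. */
    typedef struct {
        long lower_bound;  /* lower bound of this dimension          */
        long extent;       /* number of elements in this dimension   */
        long stride;       /* distance between elements, in element
                              units                                  */
    } dope_dim;

    typedef struct {
        void     *base;    /* address of the first element           */
        int       rank;    /* number of dimensions                   */
        dope_dim  dim[7];  /* Fortran allows up to seven dimensions  */
    } dope_vector;

    /* Addressing a(i,j) in the general case loads dim[0] and dim[1]
       on every access; once IPA propagates a constant shape, those
       loads fold into compile-time constants. */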

Since IPA has a view of the whole program, it can be used to reorganize data in the program as well. One simple example is to reorder the members of a Fortran COMMON block to control the data alignment. This is only safe with IPA, since only then does the compiler know that all instances of the COMMON will be reordered the same way.

Each implementation of IPA is somewhat different. Some compilers depend on interprocedural optimization for performance, while others use IPA mostly to control function inlining. The cost of IPA is paid at program build time, in particular at the link step. Some implementations defer essentially all optimization and code generation to link time, which makes the compiler seem quite fast and the link very slow.

To be accepted by users, the interface to IPA must preserve the edit-compile-link-test development cycle currently in place. Users are unlikely to adopt a new program build mechanism, as proposed in various research projects. IPA must be able to analyze routines in subroutine libraries, which most vendors do not yet fully support. The compiler also needs to know the behavior of routines in the system library, such as ‘malloc’, I/O, and math routines.

Processor architectures have become quite aggressive, with deeply pipelined function units, superscalar instruction issue, out-of-order execution, and integrated vector/multimedia processing units, features that would never have been dreamed of as mainstream during the RISC revolution. The future offers multicore processors, with heterogeneous cores and function unit customization. Successful performance delivery depends on deep compiler analysis and optimization, which will become more dependent on interprocedural analysis. Bleeding-edge programmers need to understand its benefits, potential, and limitations. As has always been true, the best performance is produced when the programmer and the compiler enter into a dialogue, which I will address in a future column.

SPEC (R) is a registered trademark of the Standard Performance Evaluation Corporation (http://www.spec.org/).

—–

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
