IBM: Software Ecosystem for OpenPOWER is Ready for Prime Time

By John Russell

April 16, 2018

With key pieces of the IBM/OpenPOWER versus Intel/x86 gambit settling into place – e.g., the arrival of Power9 chips and Power9-based systems, hyperscaler support including a vote of confidence from Google, and the imminent firing up of the Summit supercomputer at Oak Ridge Leadership Computing Facility – Big Blue turned much of its attention to software portability and availability at the OpenPOWER Summit 2018, held last month in Las Vegas.

Chris Sullivan, assistant director for biocomputing at the Center for Genome Research and Biocomputing (CGRB), Oregon State University, delivered the message in his keynote, “Porting from x86 to OpenPOWER made easy.” CGRB, though life sciences centric, serves the broader Oregon State research community and already had 4,000 tools and applications on its standard x86 research cluster before taking the Power plunge. “As we brought Power on we realized we need to do the same thing, so we began this process with an undergraduate who I paid $10/hr. This is how easy it is to get this stuff to work. He sat for a month or two compiling the tools and he came up with about 2,000 programs in about two months,” said Sullivan with a bit of dramatic flair.

Readying the software ecosystem is an important step for IBM/OpenPOWER. The big change, of course, was IBM’s decision to expand support for Linux and the little endian format, first on Power8 and then on Power9. IBM had clung to the big endian format even as Linux and little endian became the preferred approach in scientific computing. Sullivan said pointedly, “We really were not interested in talking about Power because of the fact that so many of the software packages were written in the context of little endian. [Support for little endian] is the fundamental reason why everybody would start moving to the Power platform.”

The wrangling over ‘endianness’ has an interesting history. By way of background, this 2015 post[i] by Ron Gordon, a longtime IBMer now with consultancy Mainline Information Systems, provides a snapshot of IBM’s thinking back then on little endian support and on targeting Intel.

“Big Endian and Little Endian are data formats that define data in binary, with the most significant bits in the high order (Big Endian) or low order (Little Endian). Big Endian was the only data format for many years, supported by all systems and architectures. Then, x86 was “invented.” For some reason, they reversed the data bit order, and then we had Little Endian. As it turns out, only x86 is Little Endian but since x86 has the predominate market share, it is the most pervasive, at this time…

“Endianness only pertains to data and not instructions. Compilers of code reflect the Endianness of the application with LE (Little Endian) being the default for x86 compiles, and all others defaulting to BE (Big Endian). Power8 is an exception, in that compilers like XLC, GCC can accept a “compile to” definition of PPC or PPCLE. This would set the Endianness to BE or LE respectively. Now, when you boot a Linux distribution, the OS has to be LE to run LE compiled applications or BE to run BE compiled applications. In Power8, everything actually runs in BE mode, and when data is loaded or stored to memory, an LE application has its data bit structure “flipped” in the registers…so you are treating LE data correctly and transparently. Therefore, Power8 is bi-Endian. Power7 can only run in BE mode.”
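
In practical terms, endianness determines the order in which the bytes of a multi-byte value are laid out in memory. The short C sketch below (an illustration added here, not drawn from the article or Gordon’s post) shows how the same 32-bit value appears byte-by-byte on a little endian host such as x86 or ppc64le versus a big endian host such as traditional ppc64:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The same 32-bit value; only its in-memory byte order differs by host. */
    uint32_t value = 0x0A0B0C0D;
    const unsigned char *bytes = (const unsigned char *)&value;

    if (bytes[0] == 0x0D)
        printf("Little endian host: bytes in memory are 0D 0C 0B 0A\n");
    else
        printf("Big endian host: bytes in memory are 0A 0B 0C 0D\n");

    return 0;
}
```

Code that serializes data, maps binary file formats, or hand-packs network messages bakes this ordering in, which is why running Power in little endian mode spares most x86-origin code from refactoring.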

IBM has since been working steadily and successfully to attract Linux distributors’ support.

Last November Red Hat announced Red Hat Enterprise Linux 7.4 support for little endian on Power9: “…In recent months, we have seen interest from customers for solutions based on hardware designs that use IBM Power Little Endian (ppc64le) architecture. Several interesting designs focused on artificial intelligence, machine learning, and advanced analytics are being developed by OpenPOWER members using advanced system interconnect technologies and graphics processing unit (GPU)-aided computing. Because this architecture and the associated ecosystem is still evolving, we plan to continue our work with IBM and the OpenPOWER ecosystem to enable new and refreshed hardware.”

One early adopter of RHEL 7.4 for Power is the Summit supercomputer being installed at Oak Ridge; it’s expected to run five to 10 times faster than its predecessor (Titan). CGRB is a “big CentOS shop” according to Sullivan and also runs Ubuntu.

The end goal, of course, is to attract users such as Sullivan who want easy access to the sea of Linux applications and who also want to take advantage of Power8/9’s high performance, particularly its high-speed interconnects (NVLink, CAPI/OpenCAPI, PCIe 4.0). There are still a few rough spots in Power-Linux compatibility, but they are exceptions, said Sullivan, who pointed a finger at Intel (an intermittent target throughout the OpenPOWER Summit):

  • “There are some problems. We noticed some of the x86 stuff had Intel[-specific] SSE, SSE2 memory [instructions] inserted in the IDEs, and the end users and developers had no idea that they were actually putting dependencies that were Intel specific into their code. We’ve been able to communicate to some of those groups and show them the impact, because they won’t be able to take advantage of new technologies, and they are going through recoding it and actually bringing their code into compliance with working across multiple architectures.”
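
To illustrate the kind of hidden dependency Sullivan describes, here is a hedged sketch (the function and data are invented for this example, not taken from CGRB code) of a loop that uses x86 SSE2 intrinsics only when the compiler is actually targeting x86, with a plain C path that builds unchanged on Power or any other architecture:

```c
#include <stddef.h>

#if defined(__SSE2__)
#include <emmintrin.h>  /* x86-only header providing SSE/SSE2 intrinsics */
#endif

/* add_arrays is a hypothetical helper used only for this illustration. */
void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
#if defined(__SSE2__)
    /* Fast path: compiled only when the target advertises SSE2 (x86). */
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#endif
    /* Portable path: handles the whole loop on non-x86 targets and the
     * remainder elements on x86. */
    for (; i < n; ++i)
        dst[i] = a[i] + b[i];
}
```

Code that calls such intrinsics unconditionally simply fails to compile on Power; guarding them, or rewriting in portable C and letting the compiler auto-vectorize, keeps one code base working across architectures.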

Aaron Gardner, director of technology for research computing consultancy BioTeam, agreed that IBM’s embrace of little endian has been an important step for Power.

“These days the vast majority of Linux on Power is little endian. The reason for this is [that] the impact of not having to refactor code for big endian, especially en masse, makes porting fairly straightforward. For example, Google is famous for saying before Power8 they were ‘struggling’ to get their tools going on Power but with the little endian support everything was working within days,” said Gardner. “The thing to note around optimization is that Intel CPUs and compilers have had a heavy influence and presence in recent years. This has produced compiler optimizations and sometimes hand-coded assembly routines in programs for memory access that are designed around little endian byte ordering—running Power little endian makes using this code tenable.”

“Regarding general portability, the path between Intel and AMD is fairly frictionless due to shared AMD64 instructions. I agree gcc and clang/llvm are common baselines now across Power, Intel, and AMD—and for most things it should not be difficult to get [them] working, especially when autoconf, etc. are employed. For deeper optimizations there are always the Intel compilers as well as the IBM XL compilers. AMD’s free AOCC compiler is based on clang/llvm and until recently has offered little benefit over gcc or upstream clang—though it may offer more significant benefits in the future. IBM XL compilers use the same options as gcc, have improved their overall gcc compatibility, and [are] fronted by clang as well. This means in many cases these optimized compilers can be used to good effect with minimal rework. I would note that some moves, for example an Intel Fortran compiler optimized program being ported to Power and compiled with IBM’s XL Fortran compiler, will still be costly, but in general over the last 3-5 years the ecosystem has begun to play together much more nicely.”
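
As a small, hedged sketch of that convergence (added here for illustration; the macro names are commonly used predefined identifiers, and __ibmxl__ in particular is an assumption rather than something confirmed by the article), a single C source file can report which compiler and architecture it was built for, the kind of check portable build systems lean on:

```c
#include <stdio.h>

int main(void) {
    /* Compiler identification; check __ibmxl__ and __clang__ before
     * __GNUC__, since both typically define __GNUC__ for compatibility. */
#if defined(__ibmxl__)
    printf("Compiler: IBM XL (clang-fronted)\n");
#elif defined(__clang__)
    printf("Compiler: clang/LLVM\n");
#elif defined(__GNUC__)
    printf("Compiler: gcc\n");
#else
    printf("Compiler: other\n");
#endif

    /* Architecture identification. */
#if defined(__powerpc64__)
    printf("Target: 64-bit Power\n");
#elif defined(__x86_64__)
    printf("Target: x86-64\n");
#else
    printf("Target: other\n");
#endif
    return 0;
}
```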

Interestingly, said Gardner, the challenge moving forward is that many have moved away from compiling things themselves and rely on third-party or crowdsourced repositories. As examples of this trend, Gardner noted supercomputing centers moving to deploy modularized HPC applications using community packages through Conda, Spack, EasyBuild, etc., as opposed to building and optimizing everything themselves. “Indeed efforts to bring Power alongside Intel and AMD architectures in these community repositories [are] the next step to close the portability gap that remains,” said Gardner.

CGRB is an interesting proof point for IBM. Cost and performance are both drivers, according to Sullivan. CGRB is a large heterogeneous environment that runs roughly 20,000 jobs a day, has nearly 5,000 processors and more than four petabytes of usable redundant storage, and generates 4-9 terabytes of data per day from different groups. Data mining and data processing are among CGRB’s priorities.

“We have lots of machines with greater than a terabyte of RAM because that helps change the scope [of what we can do]. We have six Power8 systems and we are continuing to buy them because they’ve allowed us to increase the scope of data we include in analysis, both in terms of the number of threads and in terms of moving data across the bus,” said Sullivan. “The bus speeds are really what changes and transforms our ability to work. I have groups that go out and mine data from the oceans and generate 80 TB of data a week [and] I have a quarter petabyte of data or so coming from owl sounds in the forest. We have to try to reduce processing times from months to weeks otherwise. We also need to run multiple tools at the same time.”

Sullivan didn’t identify the interface researchers use to submit jobs but said the system has been architected so that “all the software is able to identify the architecture” and provide the correct environment variables. Users “can blindly submit jobs,” said Sullivan, adding that higher throughput is what drives lower cost and that it has also started researchers thinking about how to better take advantage of the platform. A link to Sullivan’s keynote is below.

Link to Sullivan video: https://youtu.be/-hq8utGE-oU

[i] https://www.mainline.com/linux-on-power-to-be-or-not-to-be-why-should-i-care/
