IBM: Software Ecosystem for OpenPOWER is Ready for Prime Time

By John Russell

April 16, 2018

With key pieces of the IBM/OpenPOWER versus Intel/x86 gambit settling into place – e.g., the arrival of Power9 chips and Power9-based systems, hyperscaler support including a vote of confidence from Google, and the imminent firing up of the Summit supercomputer at Oak Ridge Leadership Computing Facility – Big Blue turned much of its attention to software portability and availability at the OpenPOWER Summit 2018, held last month in Las Vegas.

Chris Sullivan, assistant director for biocomputing at the Center for Genome Research and Biocomputing (CGRB), Oregon State University, delivered the message in his keynote, “Porting from x86 to OpenPOWER made easy.” CGRB, though life sciences-centric, serves the broader Oregon State research community and already had 4,000 tools and applications on its standard x86 research cluster before taking the Power plunge. “As we brought Power on we realized we need to do the same thing, so we began this process with an undergraduate who I paid $10/hr. This is how easy it is to get this stuff to work. He sat for a month or two compiling the tools and he came up with about 2,000 programs in about two months,” said Sullivan with a bit of dramatic flair.

Readying the software ecosystem is an important step for IBM/OpenPOWER. The big change, of course, was IBM’s decision to expand support for Linux and the little endian format, first on Power8 and then on Power9. IBM had clung to the big endian format even as Linux and little endian became the preferred approach in scientific computing. Sullivan said pointedly, “We really were not interested in talking about Power because of the fact that so many of the software packages were written in the context of little endian. [Support for little endian] is the fundamental reason why everybody would start moving to the Power platform.”

The wrangling over ‘endianness’ has an interesting history. By way of background, this 2015 post[i] by Ron Gordon, a longtime IBMer now with consultancy Mainline Information Systems, provides a snapshot of IBM’s thinking back then on little endian support and on targeting Intel.

“Big Endian and Little Endian are data formats that define data in binary, with the most significant bits in the high order (Big Endian) or low order (Little Endian). Big Endian was the only data format for many years, supported by all systems and architectures. Then, x86 was “invented.” For some reason, they reversed the data bit order, and then we had Little Endian. As it turns out, only x86 is Little Endian but since x86 has the predominate market share, it is the most pervasive, at this time…

“Endianness only pertains to data and not instructions. Compilers of code reflect the Endianness of the application with LE (Little Endian) being the default for x86 compiles, and all others defaulting to BE (Big Endian). Power8 is an exception, in that compilers like XLC, GCC can accept a “compile to” definition of PPC or PPCLE. This would set the Endianness to BE or LE respectively. Now, when you boot a Linux distribution, the OS has to be LE to run LE compiled applications or BE to run BE compiled applications. In Power8, everything actually runs in BE mode, and when data is loaded or stored to memory, an LE application has its data bit structure “flipped” in the registers…so you are treating LE data correctly and transparently. Therefore, Power8 is bi-Endian. Power7 can only run in BE mode.”
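To make the byte-ordering issue concrete, here is a minimal C sketch (not from Gordon’s post) that prints how a 32-bit value is laid out in memory. The same reinterpretation trick is exactly what breaks when code written against one byte order is run unmodified on the other.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal illustration of endianness: the same 32-bit value is stored
 * with its least significant byte first on a little endian machine
 * (x86, ppc64le) and last on a big endian one (ppc64). */
int main(void)
{
    uint32_t value = 0x0A0B0C0D;
    const unsigned char *bytes = (const unsigned char *)&value;

    printf("in-memory byte order: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    if (bytes[0] == 0x0D)
        printf("this machine is little endian\n");
    else
        printf("this machine is big endian\n");

    return 0;
}
```

Code that serializes structs to disk or to the network, or that indexes into raw byte buffers, bakes in one of these two layouts; running Power in little endian mode lets such x86-era assumptions carry over unchanged.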

IBM has since been working steadily and successfully to attract Linux distributors’ support.

Last November Red Hat announced Red Hat Enterprise Linux 7.4 support for little endian on Power9: “…In recent months, we have seen interest from customers for solutions based on hardware designs that use IBM Power Little Endian (ppc64le) architecture. Several interesting designs focused on artificial intelligence, machine learning, and advanced analytics are being developed by OpenPOWER members using advanced system interconnect technologies and graphics processing unit (GPU)-aided computing. Because this architecture and the associated ecosystem is still evolving, we plan to continue our work with IBM and the OpenPOWER ecosystem to enable new and refreshed hardware.”

One early adopter of RHEL 7.4 for Power is the Summit supercomputer being installed at Oak Ridge; it’s expected to run five to 10 times faster than its predecessor (Titan). CGRB is a “big CentOS shop” according to Sullivan and also runs Ubuntu.

The end goal, of course, is to attract users such as Sullivan who want easy access to the sea of Linux applications and who also want to take advantage of Power8/9’s high performance, particularly its high-speed interconnects (NVLink, CAPI/OpenCAPI, PCIe 4.0). There are still a few rough spots in Power-Linux compatibility, but they are exceptions, said Sullivan, who pointed a finger at Intel (an intermittent target throughout the OpenPOWER Summit):

  • “There are some problems. We noticed some of the x86 stuff had Intel inserted in the IDEs (SSE, SSE2 memory stuff) and the end users and developers had no idea that they were actually putting dependencies that were Intel-specific into their code. We’ve been able to communicate to some of those groups and show them the impact, because they won’t be able to take advantage of new technologies, and they are going through recoding it and actually bringing their code into compliance with working across multiple architectures.”
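The fix Sullivan describes – making code work across architectures rather than assuming x86 – typically comes down to guarding the Intel-specific paths. Below is a minimal C sketch along those lines; the function name and loop are purely illustrative. The SSE/SSE2 intrinsics compile only where the x86 predefined macros exist, and every other architecture, ppc64le included, gets the plain C fallback.

```c
#include <stddef.h>

#if defined(__SSE2__)
#include <xmmintrin.h>   /* SSE intrinsics, present only on x86 builds  */
#include <emmintrin.h>   /* SSE2 intrinsics                              */
#endif

/* Add two float arrays. The vectorized path is compiled only on x86;
 * other architectures fall back to scalar C, which modern compilers
 * can usually auto-vectorize anyway. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
#if defined(__SSE2__)
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#endif
    for (; i < n; i++)
        dst[i] = a[i] + b[i];
}
```

The compile-time guard keeps a single source tree portable; a runtime-dispatch design is another option, but the preprocessor check is usually enough to stop Intel-only dependencies from leaking into code that must also build on Power.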

Aaron Gardner, director of technology at research computing consultancy BioTeam, agreed that IBM’s embrace of little endian has been an important step for Power.

“These days the vast majority of Linux on Power is little endian. The reason for this is that not having to refactor code for big endian, especially en masse, makes porting fairly straightforward. For example, Google is famous for saying that before Power8 they were ‘struggling’ to get their tools going on Power, but with the little endian support everything was working within days,” said Gardner. “The thing to note around optimization is that Intel CPUs and compilers have had a heavy influence and presence in recent years. This has produced compiler optimizations and sometimes hand-coded assembly routines in programs for memory access that are designed around little endian byte ordering – running Power little endian makes using this code tenable.”

“Regarding general portability, the path between Intel and AMD is fairly frictionless due to shared AMD64 instructions. I agree gcc and clang/llvm are common baselines now across Power, Intel, and AMD – and for most things it should not be difficult to get [them] working, especially when autoconf, etc. are employed. For deeper optimizations there are always the Intel compilers as well as the IBM XL compilers. AMD’s free AOCC compiler is based on clang/llvm and until recently has offered little benefit over gcc or upstream clang – though it may offer more significant benefits in the future. IBM XL compilers use the same options as gcc, have improved their overall gcc compatibility, and [are] fronted by clang as well. This means in many cases these optimized compilers can be used to good effect with minimal rework. I would note that some moves, for example an Intel Fortran compiler-optimized program being ported to Power and compiled with IBM’s XL Fortran compiler, will still be costly, but in general over the last 3-5 years the ecosystem has begun to play together much more nicely.”
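As a rough illustration of Gardner’s point about gcc-style options, the compile lines in the comment below are assumptions drawn from common compiler documentation rather than from the article; the source file itself is just a trivial loop used as a stand-in for real application code.

```c
/* saxpy.c - trivial loop used only to illustrate compile lines.
 * The flags below are illustrative assumptions, not verified against
 * every compiler version:
 *
 *   gcc on x86:          gcc -O3 -march=native -c saxpy.c
 *   gcc on ppc64le:      gcc -O3 -mcpu=power9  -c saxpy.c
 *   IBM XL (classic):    xlc -O3 -qarch=pwr9 -qtune=pwr9 -c saxpy.c
 *   IBM XL (clang front end): xlclang -O3 -mcpu=power9 -c saxpy.c
 */
#include <stddef.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```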

Interestingly, said Gardner, the challenge moving forward is that many have moved away from compiling things themselves and rely instead on third-party or crowdsourced repositories. As examples of this trend, Gardner noted supercomputing centers moving to deploy modularized HPC applications using community packages through Conda, Spack, EasyBuild, etc., as opposed to building and optimizing everything themselves. “Indeed, efforts to bring Power alongside Intel and AMD architectures in these community repositories [are] the next step to close the portability gap that remains,” said Gardner.

CGRB is an interesting proof point for IBM. Cost and performance are both drivers, according to Sullivan. CGRB is a large heterogeneous environment that runs roughly 20,000 jobs a day, has nearly 5,000 processors, more than four petabytes of usable redundant storage, and generates 4-9 terabytes of data per day from different groups. Data mining and data processing are among CGRB’s priorities.

“We have lots of machines with greater than a terabyte of RAM because that helps change the scope [of what we can do]. We have six Power8 systems and we are continuing to buy them because they’ve allowed us to increase the scope of data we include in analysis, both in terms of the number of threads and in terms of moving data across the bus,” said Sullivan. “The bus speeds are really what changes and transforms our ability to work. I have groups that go out and mine data from the oceans and generate 80 TB of data a week [and] I have a quarter petabyte of data or so coming from owl sounds in the forest. We have to try to reduce processing times from months to weeks otherwise. We also need to run multiple tools at the same time.”

Sullivan didn’t identify the interface researchers use to submit jobs but said the system has been architected so that “all the software is able to identify the architecture” and provide the correct environment variables. Users “can blindly submit jobs,” said Sullivan, adding that higher throughput is what drives lower cost and that it has also started researchers thinking about how to better take advantage of the platform. A link to Sullivan’s keynote is below.
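The article does not describe CGRB’s wrapper machinery, but a hypothetical sketch of the idea might look like the following: detect the architecture of the node a job lands on, export an (illustrative) environment variable pointing at the matching software tree, then hand off to the requested tool. Paths and variable names here are invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/utsname.h>
#include <unistd.h>

/* Hypothetical architecture-aware job wrapper: choose the software tree
 * that matches the current node, export it, then exec the real tool.
 * The directory layout and variable name are illustrative only. */
int main(int argc, char **argv)
{
    struct utsname un;
    const char *root;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <tool> [args...]\n", argv[0]);
        return 1;
    }
    if (uname(&un) != 0) {
        perror("uname");
        return 1;
    }

    if (strcmp(un.machine, "ppc64le") == 0)
        root = "/local/cluster/power9";   /* hypothetical Power tree */
    else
        root = "/local/cluster/x86_64";   /* hypothetical x86 tree   */

    setenv("CLUSTER_SW_ROOT", root, 1);   /* illustrative variable   */
    execvp(argv[1], argv + 1);
    perror("execvp");
    return 1;
}
```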

Link to Sullivan video: https://youtu.be/-hq8utGE-oU

[i] https://www.mainline.com/linux-on-power-to-be-or-not-to-be-why-should-i-care/
