IBM: Software Ecosystem for OpenPOWER is Ready for Prime Time

By John Russell

April 16, 2018

With key pieces of the IBM/OpenPOWER versus Intel/x86 gambit settling into place – e.g., the arrival of Power9 chips and Power9-based systems, hyperscaler support including a vote of confidence from Google, and the imminent firing up of the Summit supercomputer at Oak Ridge Leadership Computing Facility – Big Blue turned much of its attention to software portability and availability at the OpenPOWER Summit 2018, held last month in Las Vegas.

Chris Sullivan, assistant director for biocomputing at the Center for Genome Research and Biocomputing (CGRB), Oregon State University, delivered the message in his keynote, “Porting from x86 to OpenPOWER Made Easy.” CGRB, though life-sciences centric, serves the broader Oregon State research community and already had 4,000 tools and applications on its standard x86 research cluster before taking the Power plunge. “As we brought Power on we realized we needed to do the same thing, so we began this process with an undergraduate who I paid $10/hr. This is how easy it is to get this stuff to work. He sat for a month or two compiling the tools and he came up with about 2,000 programs in about two months,” said Sullivan with a bit of dramatic flair.

Readying the software ecosystem is an important step for IBM/OpenPOWER. The big change, of course, was IBM’s decision to expand support for Linux and the little endian format, first on Power8 and then on Power9. IBM had clung to the big endian format even as Linux and little endian became the preferred approach in scientific computing. Sullivan said pointedly, “We really were not interested in talking about Power because of the fact that so many of the software packages were written in the context of little endian. [Support for little endian] is the fundamental reason why everybody would start moving to the Power platform.”

The wrangling over ‘endianness’ has an interesting history. By way of background, this 2015 post[i] by Ron Gordon, a longtime IBMer now with the consultancy Mainline Information Systems, provides a snapshot of IBM’s thinking back then on little endian support and on targeting of Intel.

“Big Endian and Little Endian are data formats that define data in binary, with the most significant bits in the high order (Big Endian) or low order (Little Endian). Big Endian was the only data format for many years, supported by all systems and architectures. Then, x86 was “invented.” For some reason, they reversed the data bit order, and then we had Little Endian. As it turns out, only x86 is Little Endian but since x86 has the predominant market share, it is the most pervasive, at this time…

“Endianness only pertains to data and not instructions. Compilers of code reflect the Endianness of the application with LE (Little Endian) being the default for x86 compiles, and all others defaulting to BE (Big Endian). Power8 is an exception, in that compilers like XLC, GCC can accept a “compile to” definition of PPC or PPCLE. This would set the Endianness to BE or LE respectively. Now, when you boot a Linux distribution, the OS has to be LE to run LE compiled applications or BE to run BE compiled applications. In Power8, everything actually runs in BE mode, and when data is loaded or stored to memory, an LE application has its data bit structure “flipped” in the registers…so you are treating LE data correctly and transparently. Therefore, Power8 is bi-Endian. Power7 can only run in BE mode.”
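Gordon’s description compresses a detail worth keeping straight: strictly speaking, endianness governs the order of bytes, not individual bits, within a multi-byte value in memory. A minimal C sketch (ours, not from Gordon’s post) makes the difference visible; compiled for x86_64 or ppc64le it reports little endian, compiled for big endian ppc64 it reports big endian.

    /* Minimal endianness probe: inspect the in-memory byte order
     * of a 32-bit value. Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    int main(void) {
        uint32_t value = 0x0A0B0C0D;
        unsigned char *bytes = (unsigned char *)&value;

        /* On a little-endian machine (x86_64, ppc64le) this prints
         * "0d 0c 0b 0a"; on a big-endian machine (ppc64) it prints
         * "0a 0b 0c 0d". */
        for (size_t i = 0; i < sizeof(value); i++)
            printf("%02x ", bytes[i]);
        printf("\n");

        printf("host is %s-endian\n", bytes[0] == 0x0D ? "little" : "big");
        return 0;
    }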

IBM has since been working steadily and successfully to attract Linux distributors’ support.

Last November Red Hat announced Red Hat Enterprise Linux 7.4 support for little endian on Power9: “…In recent months, we have seen interest from customers for solutions based on hardware designs that use IBM Power Little Endian (ppc64le) architecture. Several interesting designs focused on artificial intelligence, machine learning, and advanced analytics are being developed by OpenPOWER members using advanced system interconnect technologies and graphics processing unit (GPU)-aided computing. Because this architecture and the associated ecosystem is still evolving, we plan to continue our work with IBM and the OpenPOWER ecosystem to enable new and refreshed hardware.”

One early adopter of RHEL 7.4 for Power is the Summit supercomputer being installed at Oak Ridge; it’s expected to run five to 10 times faster than its predecessor (Titan). CGRB is a “big CentOS shop” according to Sullivan and also runs Ubuntu.

The end goal, of course, is to attract users such as Sullivan who want easy access to the sea of Linux applications and who also want to take advantage of Power8/9’s high performance, particularly its high-speed interconnect (NVLink, CAPI/OpenCAPI, PCIe 4.0). There are still a few rough spots in Power-Linux compatibility, but they are exceptions, said Sullivan, who pointed a finger at Intel (an intermittent target throughout the OpenPOWER Summit):

  • “There are some problems. We noticed some of the x86 stuff had Intel [dependencies] inserted in the IDEs (SSE, SSE2 memory stuff) and the end users and developers had no idea that they were actually putting Intel-specific dependencies into their code. We’ve been able to communicate with some of those groups and show them the impact, because they won’t be able to take advantage of new technologies, and they are recoding it and actually bringing their code into compliance with working across multiple architectures.”
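The remedy Sullivan describes usually amounts to fencing x86-specific intrinsics behind feature-test macros so the same source builds everywhere. A hedged C sketch of that pattern (the function and fallback are illustrative, not taken from any of the codes Sullivan mentioned):

    #include <stddef.h>

    #if defined(__SSE2__)
    #include <emmintrin.h>  /* x86 SSE intrinsics; only pulled in on x86 */
    #endif

    /* Sum an array of floats. The SSE path compiles only on x86;
     * every other architecture, including ppc64le, takes the plain-C
     * fallback. */
    float sum_floats(const float *x, size_t n) {
    #if defined(__SSE2__)
        __m128 acc = _mm_setzero_ps();
        size_t i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(x + i));
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; i++)          /* remainder elements */
            sum += x[i];
        return sum;
    #else
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += x[i];
        return sum;
    #endif
    }

On Power the fallback loop remains eligible for auto-vectorization (e.g., to VSX), so portability need not mean forgoing SIMD.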

Aaron Gardner, director of technology at BioTeam, a research computing consultancy, agreed that IBM’s embrace of little endian has been an important step for Power.

“These days the vast majority of Linux on Power is little endian. The reason for this is that not having to refactor code for big endian, especially en masse, makes porting fairly straightforward. For example, Google is famous for saying that before Power8 they were ‘struggling’ to get their tools going on Power, but with the little endian support everything was working within days,” said Gardner. “The thing to note around optimization is that Intel CPUs and compilers have had a heavy influence and presence in recent years. This has produced compiler optimizations, and sometimes hand-coded assembly routines, for memory access that are designed around little endian byte ordering; running Power little endian makes using this code tenable.”

“Regarding general portability, the path between Intel and AMD is fairly frictionless due to shared AMD64 instructions. I agree gcc and clang/llvm are common baselines now across Power, Intel, and AMD, and for most things it should not be difficult to get [them] working, especially when autoconf, etc. are employed. For deeper optimizations there are always the Intel compilers as well as the IBM XL compilers. AMD’s free AOCC compiler is based on clang/llvm and until recently has offered little benefit over gcc or upstream clang, though it may offer more significant benefits in the future. IBM XL compilers use the same options as gcc, have improved their overall gcc compatibility, and are fronted by clang as well. This means in many cases these optimized compilers can be used to good effect with minimal rework. I would note that some moves, for example an Intel Fortran compiler optimized program being ported to Power and compiled with IBM’s XL Fortran compiler, will still be costly, but in general over the last 3-5 years the ecosystem has begun to play together much more nicely.”
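As a rough illustration of how the toolchains line up, a single portable source file can be tuned per platform purely through compiler flags. The invocations in the comment block below are commonly documented options, not a prescription, and exact spellings vary by compiler version:

    /* One portable C source, tuned at compile time. Illustrative
     * invocations (versions and defaults vary):
     *
     *   gcc (any arch):  gcc -O3 -fopenmp dot.c -o dot
     *   gcc on Power9:   gcc -O3 -mcpu=power9 -fopenmp dot.c -o dot
     *   IBM XL (Power):  xlc_r -O3 -qarch=pwr9 -qsmp=omp dot.c -o dot
     *   Intel (x86):     icc -O3 -xHost -qopenmp dot.c -o dot
     */
    #include <stdio.h>

    int main(void) {
        double a[1024], b[1024], dot = 0.0;
        for (int i = 0; i < 1024; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* The OpenMP SIMD directive is understood by gcc, clang,
         * IBM XL, and the Intel compilers alike; without OpenMP
         * enabled it is harmlessly ignored. */
        #pragma omp simd reduction(+:dot)
        for (int i = 0; i < 1024; i++)
            dot += a[i] * b[i];

        printf("%f\n", dot);
        return 0;
    }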

Interestingly, said Gardner, the challenge moving forward is that many have moved away from compiling things themselves and rely on third-party or crowdsourced repositories. As examples of the trend, Gardner pointed to supercomputing centers deploying modularized HPC applications using community packages through Conda, Spack, EasyBuild, etc., as opposed to building and optimizing everything themselves. “Indeed, efforts to bring Power alongside Intel and AMD architectures in these community repositories are the next step to close the portability gap that remains,” said Gardner.

CGRB is an interesting proof point for IBM. Cost and performance are both drivers, according to Sullivan. CGRB is a large heterogeneous environment that runs roughly 20,000 jobs a day, has nearly 5,000 processors and more than four petabytes of usable redundant storage, and generates 4-9 terabytes of data per day from different groups. Data mining and data processing are among CGRB’s priorities.

“We have lots of machines with greater than a terabyte of RAM because that helps change the scope [of what we can do]. We have six Power8 systems and we are continuing to buy them because they’ve allowed us to increase the scope of data we include in analysis, both in terms of the number of threads and in terms of moving data across the bus,” said Sullivan. “The bus speeds are really what changes and transforms our ability to work. I have groups that go out and mine data from the oceans and generate 80 TB of data a week [and] I have a quarter petabyte of data or so coming from owl sounds in the forest. We have to try to reduce processing times from months to weeks. We also need to run multiple tools at the same time.”

Sullivan didn’t identify the interface researchers use to submit jobs but said the system has been architected so that “all the software is able to identify the architecture” and provide the correct environment variables. Users “can blindly submit jobs,” said Sullivan, adding that higher throughput is what drives lower cost and that it has also started researchers thinking about how to better take advantage of the platform.
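Sullivan didn’t describe the mechanism, but the usual pattern is to detect the machine type at job launch and point the environment at a matching per-architecture software tree. A hypothetical C sketch of that idea (the /local/cluster paths and layout are invented for illustration, not CGRB’s actual setup):

    /* Hypothetical per-architecture environment selection at job
     * launch. Paths are invented for illustration. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }

        /* u.machine is "ppc64le" on little endian Power, "x86_64" on x86. */
        const char *arch_root =
            strcmp(u.machine, "ppc64le") == 0 ? "/local/cluster/ppc64le"
                                              : "/local/cluster/x86_64";

        const char *old_path = getenv("PATH");
        if (old_path == NULL)
            old_path = "";

        char path[4096];
        snprintf(path, sizeof(path), "%s/bin:%s", arch_root, old_path);
        setenv("PATH", path, 1);  /* jobs now resolve matching binaries */

        printf("arch=%s root=%s\n", u.machine, arch_root);
        return 0;
    }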

Link to Sullivan video: https://youtu.be/-hq8utGE-oU

[i] https://www.mainline.com/linux-on-power-to-be-or-not-to-be-why-should-i-care/
