IBM: Software Ecosystem for OpenPOWER is Ready for Prime Time

By John Russell

April 16, 2018

With key pieces of the IBM/OpenPOWER versus Intel/x86 gambit settling into place – e.g., the arrival of Power9 chips and Power9-based systems, hyperscaler support including a vote of confidence from Google, and the imminent firing up of the Summit supercomputer at Oak Ridge Leadership Computing Facility – Big Blue turned much of its attention to software portability and availability at the OpenPOWER Summit 2018, held last month in Las Vegas.

Chris Sullivan, assistant director for biocomputing at the Center for Genome Research and Biocomputing (CGRB), Oregon State University, delivered the message in his keynote, “Porting from x86 to OpenPOWER made easy.” CGRB, though life sciences-centric, serves the broader Oregon State research community and already had 4,000 tools and applications on its standard research x86 cluster before taking the Power plunge. “As we brought Power on we realized we need to do the same thing so we began this process with an undergraduate who I paid $10/hr. This is how easy it is to get this stuff to work. He sat for a month or two compiling the tools and he came up with about 2,000 programs in about two months,” said Sullivan with a bit of dramatic flair.

Readying the software ecosystem is an important step for IBM/OpenPOWER. The big change, of course, was IBM’s decision to expand support for Linux and the little endian format, first on Power8 and then on Power9. IBM had clung to the big endian format even as Linux and little endian became the preferred approach in scientific computing. Sullivan said pointedly, “We really were not interested in talking about Power because of the fact that so many of the software packages were written in the context of little endian. [Support for little endian] is the fundamental reason why everybody would start moving to the Power platform.”

The wrangling over ‘endianness’ has an interesting history. By way of background, this 2015 post[i] by Ron Gordon, a longtime IBMer now with the consultancy Mainline Information Systems, provides a snapshot of IBM’s thinking back then on little endian support and on targeting Intel.

“Big Endian and Little Endian are data formats that define data in binary, with the most significant bits in the high order (Big Endian) or low order (Little Endian). Big Endian was the only data format for many years, supported by all systems and architectures. Then, x86 was ‘invented.’ For some reason, they reversed the data bit order, and then we had Little Endian. As it turns out, only x86 is Little Endian but since x86 has the predominate market share, it is the most pervasive, at this time…

“Endianness only pertains to data and not instructions. Compilers of code reflect the Endianness of the application with LE (Little Endian) being the default for x86 compiles, and all others defaulting to BE (Big Endian). Power8 is an exception, in that compilers like XLC, GCC can accept a ‘compile to’ definition of PPC or PPCLE. This would set the Endianness to BE or LE respectively. Now, when you boot a Linux distribution, the OS has to be LE to run LE compiled applications or BE to run BE compiled applications. In Power8, everything actually runs in BE mode, and when data is loaded or stored to memory, an LE application has its data bit structure ‘flipped’ in the registers…so you are treating LE data correctly and transparently. Therefore, Power8 is bi-Endian. Power7 can only run in BE mode.”
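
To make Gordon’s distinction concrete, here is a minimal C sketch (not from his post) that prints the byte-by-byte memory layout of a 32-bit value and reports the byte order it finds: little endian on x86_64 and ppc64le, big endian on ppc64. On bi-endian Power8/9 hardware the mode is fixed by the distribution and toolchain target (e.g., gcc’s powerpc64le vs. powerpc64 targets), not by the program itself.

    /* endian_check.c - print the in-memory byte order of a 32-bit value */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t value = 0x01020304;
        const unsigned char *bytes = (const unsigned char *)&value;

        printf("in-memory layout: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);

        if (bytes[0] == 0x04)
            printf("little endian (e.g., x86_64 or ppc64le)\n");
        else
            printf("big endian (e.g., ppc64)\n");
        return 0;
    }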

IBM has since been working steadily and successfully to attract Linux distributors’ support.

Last November Red Hat announced Red Hat Enterprise Linux 7.4 support for little endian on Power9: “…In recent months, we have seen interest from customers for solutions based on hardware designs that use IBM Power Little Endian (ppc64le) architecture. Several interesting designs focused on artificial intelligence, machine learning, and advanced analytics are being developed by OpenPOWER members using advanced system interconnect technologies and graphics processing unit (GPU)-aided computing. Because this architecture and the associated ecosystem is still evolving, we plan to continue our work with IBM and the OpenPOWER ecosystem to enable new and refreshed hardware.”

One early adopter of RHEL 7.4 for Power is the Summit supercomputer being installed at Oak Ridge; it’s expected to run five to 10 times faster than its predecessor (Titan). CGRB is a “big CentOS shop” according to Sullivan and also runs Ubuntu.

The end goal, of course, is to attract users such as Sullivan who want easy access to the sea of Linux applications and who also want to take advantage of Power8/9’s high performance, particularly its high-speed interconnects (NVLink, CAPI/OpenCAPI, PCIe 4.0). There are still a few rough spots in Power-Linux compatibility, but they are the exception, said Sullivan, who pointed a finger at Intel (an intermittent target throughout the OpenPOWER Summit):

“There are some problems. We noticed some of the x86 stuff had Intel inserted in the IDEs sse, sse2 memory stuff and the end users and developers had no idea that they were actually putting dependencies that were Intel specific into their code. We’ve been able to communicate to some of those groups and show them the impact because they won’t be able to take advantage of new technologies and they are going through recoding it and actually bringing their code in compliance with working across multiple architectures.”
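
The hidden dependency Sullivan describes often looks like the hypothetical sketch below: a hot loop written directly against x86 SSE intrinsics, which fails to build on any other architecture. Guarding the intrinsics and keeping a plain-C fallback (which compilers can usually auto-vectorize, e.g., to VSX on Power9) is the common fix; the function and its arrays are invented for illustration.

    #include <stddef.h>

    #if defined(__SSE2__)          /* defined only on x86 targets */
    #include <emmintrin.h>
    #endif

    /* Hypothetical hot loop: element-wise add of two float arrays
     * (n assumed divisible by 4 for the SSE path). */
    void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    #if defined(__SSE2__)
        /* x86 fast path: four floats per instruction. */
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&dst[i], _mm_add_ps(va, vb));
        }
    #else
        /* Portable fallback: builds on Power, Arm, etc. */
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    #endif
    }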

Aaron Gardner, director of technology for the research computing consultancy BioTeam, agreed that IBM’s embrace of little endian has been an important step for Power.

“These days the vast majority of Linux on Power is little endian. The reason for this is the impact of not having to refactor code for big endian, especially en masse, makes porting fairly straightforward. For example Google is famous for saying before Power8 they were ‘struggling’ to get their tools going on Power but with the little endian support everything was working within days,” said Gardner. “The thing to note around optimization is that Intel CPUs and compilers have had a heavy influence and presence in recent years. This has produced compiler optimizations and sometimes hand coded assembly routines in programs for memory access that are designed around little endian byte ordering—running Power little endian makes using this code tenable.”

“Regarding general portability, the path between Intel and AMD is fairly frictionless due to shared AMD64 instructions. I agree gcc and clang/llvm are common baselines now across Power, Intel, and AMD—and for most things it should not be difficult to get [them] working especially when autoconf, etc. are employed. For deeper optimizations there are always the Intel compilers as well as the IBM XL compilers. AMD’s free AOCC compiler is based on clang/llvm and until recently has offered little benefit over gcc or upstream clang—though it may offer more significant benefits in the future. IBM XL compilers use the same options as gcc, have improved their overall gcc compatibility, and [are] fronted by clang as well. This means in many cases these optimized compilers can be used to good effect with minimal rework. I would note that some moves, for example an Intel Fortran compiler optimized program being ported to Power and compiled with IBM’s XL Fortran compiler, will still be costly, but in general over the last 3-5 years the ecosystem has begun to play together much more nicely.”
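
In practice the compatibility Gardner describes means the same source file can often be built with whichever compiler is native to the platform, changing only the target-tuning flag. A minimal sketch, with illustrative invocations in the comment (flag spellings are the commonly documented ones and may vary by compiler version):

    /*
     * One portable source file, three toolchains (illustrative):
     *
     *   x86_64, GNU:     gcc -O3 -march=native -o app app.c
     *   ppc64le, GNU:    gcc -O3 -mcpu=power9  -o app app.c
     *   ppc64le, IBM XL: xlc -O3 -qarch=pwr9   -o app app.c
     *
     * Nothing below is architecture-specific, so each invocation
     * should produce a working binary.
     */
    #include <stdio.h>

    int main(void) {
        printf("built portably across x86_64 and ppc64le\n");
        return 0;
    }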

Interestingly, said Gardner, the challenge moving forward is that many users have moved away from compiling things themselves and instead rely on third-party or crowdsourced repositories. As examples of this trend, Gardner noted supercomputing centers moving to deploy modularized HPC applications using community packages through Conda, Spack, EasyBuild, etc., as opposed to building and optimizing everything themselves. “Indeed efforts to bring Power alongside Intel and AMD architectures in these community repositories [are] the next step to close the portability gap that remains,” said Gardner.

CGRB is an interesting proof point for IBM. Cost and performance are both drivers, according to Sullivan. CGRB is a large heterogeneous environment that runs roughly 20,000 jobs a day, has nearly 5,000 processors and more than four petabytes of usable redundant storage, and generates four to nine terabytes of data per day from different groups. Data mining and data processing are among CGRB’s priorities.

“We have lots of machines with greater than a terabyte of RAM because that helps change the scope [of what we can do]. We have six Power8 systems and we are continuing to buy them because they’ve allowed us to increase the scope of data we include in analysis, both in terms of the number of threads and in terms of moving data across the bus,” said Sullivan. “The bus speeds are really what changes and transforms our ability to work. I have groups that go out and mine data from the oceans and generate 80 TB of data a week [and] I have a quarter petabyte of data or so coming from owl sounds in the forest. We have to try to reduce processing times from months to weeks otherwise. We also need to run multiple tools at the same time.”

Sullivan didn’t identify the interface researchers use to submit jobs but said the system has been architected so that “all the software is able to identify the architecture” and provide the correct environment variables. Users “can blindly submit jobs,” said Sullivan, adding that higher throughput is what drives lower cost, and that it has also started researchers thinking about how to better take advantage of the platform. A link to Sullivan’s keynote is below.
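
Sullivan didn’t describe the mechanism, but architecture-aware environments of the kind he mentions are commonly built by branching on the machine type at job startup. A minimal C sketch using uname(2); the /local/<arch> directory convention here is hypothetical:

    /* arch_env.c - pick a per-architecture software tree, as an
     * architecture-aware cluster environment might. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        /* u.machine is e.g. "x86_64" on Intel/AMD, "ppc64le" on Power8/9 */
        printf("detected architecture: %s\n", u.machine);
        printf("software tree: /local/%s/bin\n", u.machine);
        return 0;
    }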

Link to Sullivan video: https://youtu.be/-hq8utGE-oU

[i]https://www.mainline.com/linux-on-power-to-be-or-not-to-be-why-should-i-care/
