Mellanox, ORNL to Deliver UCX Progress Report at SC15

By John Russell

November 16, 2015

At ISC 2015, Mellanox introduced a new open-source network communication framework – Unified Communication X (UCX) – for high-performance and data-centric applications.

At the time Gilad Shainer of Mellanox said, “By providing our advancements in shared memory, MPI and underlying network transport technologies, we can continue to advance open standards-based networking and programming models. UCX will provide optimizations for lower software overhead in communication paths that will allow cross platform near native-level interconnect performance. The framework interface will expose semantics that target not only HPC programming models, but data-centric applications as well. It will also enable vendor independent development of the library.”

These are big goals. Promoting co-design methodology is at the heart of the effort. UCX alliance members hope the effort will not only provide a vehicle for production-quality software, but also a low-level research infrastructure for more flexible and portable support for exascale-ready programming models. Other UCX founding members present at the launch included DOE’s Oak Ridge National Laboratory, IBM, NVIDIA, and the University of Tennessee.

The key UCX components include:

  • UC-S for Services. Basic infrastructure for component-based programming, memory management, and useful system utilities. Functionality: platform abstractions and data structures.
  • UC-T for Transport. Low-level API that exposes basic network operations supported by the underlying hardware. Functionality: work request setup and instantiation of operations.
  • UC-P for Protocols. High-level API that uses the UCT framework to construct protocols commonly found in applications. Functionality: multi-rail, device selection, pending queue, rendezvous, tag-matching, software atomics, etc. (A minimal initialization sketch follows the list.)
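
To make the layering concrete, below is a minimal sketch of bringing up the UC-P (protocols) layer. It is based on the publicly documented UCP C API from later open-source UCX releases; exact type, field, and feature names may have differed in the 2015 code base, so treat it as illustrative rather than definitive.

```c
/* Minimal UCP bring-up: read configuration, create a context that
 * requests tag-matching support, and create a worker (progress engine). */
#include <stdio.h>
#include <ucp/api/ucp.h>

int main(void)
{
    ucp_config_t  *config;
    ucp_context_h  context;
    ucp_worker_h   worker;
    ucs_status_t   status;

    /* Read UCX configuration from the environment (UCX_* variables). */
    status = ucp_config_read(NULL, NULL, &config);
    if (status != UCS_OK) return 1;

    /* Ask the UC-P layer for tag-matching semantics. */
    ucp_params_t params = {
        .field_mask = UCP_PARAM_FIELD_FEATURES,
        .features   = UCP_FEATURE_TAG
    };
    status = ucp_init(&params, config, &context);
    ucp_config_release(config);
    if (status != UCS_OK) return 1;

    /* A worker is the progress engine on which endpoints are later created. */
    ucp_worker_params_t wparams = {
        .field_mask  = UCP_WORKER_PARAM_FIELD_THREAD_MODE,
        .thread_mode = UCS_THREAD_MODE_SINGLE
    };
    status = ucp_worker_create(context, &wparams, &worker);
    if (status != UCS_OK) {
        ucp_cleanup(context);
        return 1;
    }

    printf("UCP context and worker initialized\n");

    ucp_worker_destroy(worker);
    ucp_cleanup(context);
    return 0;
}
```

In a full application, endpoints would next be created from remote workers’ addresses and communication issued through the tag-matching (or RMA/atomic) calls, while the UC-T layer underneath selects and drives the actual transport, such as InfiniBand verbs or shared memory.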

Shainer insists UCX is the framework for future systems. At SC15 this week, he and Pavel Shamis (ORNL) will provide a UCX update at a BoF on Tuesday. As a prelude, HPCwire asked Shainer to review the purpose of UCX and its early activities and progress. Here is that interview.

HPCwire: I was at the ISC introduction of UCX, and several of the gathered attendees were impressed by the founding members but confused about the goals and the problem it was attacking. Perhaps it would be worthwhile to review how UCX originated and what it seeks to accomplish.

Gilad Shainer: Today there are multiple HPC libraries (MPI, SHMEM, PGAS languages) and emerging HPC programming models (libraries and languages) that face a substantial challenge: they require a re-implementation (or maintenance) of complex network code within their code bases.

This often leads to code duplication and long-term maintenance issues. As a side effect, these duplicated efforts frequently result in performance issues, because developers don’t have the time, or in some cases the vendor-level expertise, required to optimize the network code. In addition, emerging hardware technologies are now focusing primarily on a limited range of HPC libraries or programming models (mostly MPI) due to time and resource constraints. At the end of the day, resources are desperately needed to optimize both software and hardware with emerging HPC programming models.

By providing a unified, standardized, performance-portable, and hardware-agnostic interface, these issues are resolved. HPC libraries and programming models can now target a single API that enables optimal execution of the libraries on a broad variety of hardware architectures. At the same time, hardware vendors can focus on the development of a single layer, which enables functionality across multiple programming models.

Another important aspect of the challenge is that the exascale programming environment has yet to be defined and is a topic of ongoing HPC research. In order to address this challenge, UCX was designed as a framework – a collection of building blocks that enables fast and flexible access to various utilities and communication directives. This approach provides fine-grain flexibility that allows HPC researchers to customize and adjust UCX for their unique and specific needs. This is exactly the part where the co-design component of the effort is critical. Through the UCX framework, researchers can (and already do) influence hardware architecture through the offloading of some of the capabilities onto the hardware. Simultaneously, researchers are able to learn about new features and capabilities of the hardware, enabling them for exascale programming models.

This is truly the kind of project where researchers and industry work together on co-design and transition from the bleeding edge of research to the production environment.

HPCwire: You’ve described ambitious goals. How do you practically make that happen? Who are the primary supporters UCX has and needs, and what are the key technical hurdles confronting progress?

Shainer: UCX is truly an open-source, community-driven effort. The base code of UCX was contributed by industry, academic, and government labs, and today the organizations involved are Oak Ridge National Laboratory, Mellanox, IBM, NVIDIA, Lawrence Livermore National Laboratory, Argonne National Laboratory, the University of Tennessee, the University of Houston, and PathScale.

In terms of technical hurdles confronting progress, UCX is vastly different from other frameworks. The base of UCX required years of development from its members, and now the effort is being unified. Users, labs, academic institutions, and commercial vendors are all working together to create synergies between the software and the hardware. The intention is to deliver the most advanced high-performance software framework that will be used on standard solutions, such as InfiniBand and Ethernet, as well as on custom-made products.

HPCwire: Does the UCX effort fit into OpenPOWER?

Shainer: No, UCX is not part of OpenPOWER. UCX supports any compute platform including Power, GPUs and x86. We believe many future systems, including the CORAL system (but not limited to CORAL), will use UCX as the software framework between the infrastructure and the applications.

HPCwire: What are some of the milestones you’ll look for that indicate UCX is gaining traction in the HPC community?

Shainer: The number of contributors and developers on UCX continues to grow, and we are seeing more and more organizations looking to incorporate UCX into their HPC platforms. UCX has already been integrated upstream with the Open MPI project and with OpenSHMEM, and the upcoming Open MPI 2.0 release will have full support for UCX. The coming year will also reveal more software solutions using UCX, including the highly popular MPICH MPI developed by researchers at Argonne National Laboratory, as well as increased support for emerging exascale runtimes like PaRSEC, developed at the University of Tennessee, Knoxville. Clearly there is a need for a co-designed, open, ecosystem-driven framework, and UCX is filling this need.

HPCwire: The original announcement noted UCX will incorporate elements of MXM (Mellanox), UCCS (ORNL), and PAMI (IBM) technology. This seems like a powerful combination. How will this work be done, and what are the strengths of each that UCX wishes to capture?

Shainer: Yes, this is a powerful combination. With UCX we consolidate decades of experience in the development of HPC software by a variety of industry and academic organizations into an open-source framework. It unites elements that were designed for the fastest networks, large infrastructures, accelerators, MPI, SHMEM/PGAS, and UPC. We have successfully created synergies between the software and the hardware, and it is fully open source. We’ll do several demonstrations at SC15, and will host a BoF session on Tuesday, November 17, 3:30PM – 5:00PM, Room 15. All are invited to join us there and learn more about UCX, its mission, the current development status, and our future plans.
