Interconnect technology specialist Mellanox announced at ISC today a collaboration to develop a new open-source network communication framework – Unified Communication X (UCX) – for high-performance and data-centric applications.
The effort is intended to provide platform abstractions supporting various communication technologies for “high-performance compute and data platforms” and will help pave the way toward exascale computing. UCX founding members include Mellanox, DOE’s Oak Ridge National Laboratory, NVIDIA, IBM, and the University of Tennessee.
UCX organizers say the initiative will spur collaboration between industry, laboratories, and academia; create an open-source, production-grade communication framework for data-centric and HPC applications; and enable the highest performance through co-design of software-hardware interfaces. The key UCX components are listed below, followed by a brief usage sketch:
- UC-S for Services. Basic infrastructure for component-based programming, memory management, and useful system utilities. Functionality: platform abstractions and data structures.
- UC-T for Transport. Low-level API that exposes basic network operations supported by the underlying hardware. Functionality: work request setup and instantiation of operations.
- UC-P for Protocols. High-level API that uses the UCT framework to construct protocols commonly found in applications. Functionality: multi-rail, device selection, pending queue, rendezvous, tag-matching, software atomics, etc.
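To make the layering concrete, the sketch below shows how an application might initialize the UCP (protocols) layer, which in turn selects and drives UCT transports. It follows the function and constant names in the project's public ucp.h header (ucp_config_read, ucp_init, ucp_worker_create); the exact fields and signatures were still evolving at announcement time, so treat this as an assumption-laden illustration rather than the project's canonical example.

```c
/* Minimal UCP initialization sketch; names follow the public UCX
 * headers, but exact signatures may differ between releases. */
#include <ucp/api/ucp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ucp_config_t        *config;
    ucp_context_h        context;
    ucp_worker_h         worker;
    ucp_params_t         ctx_params = {0};
    ucp_worker_params_t  wrk_params = {0};

    /* Read runtime configuration (UCX_* environment variables). */
    if (ucp_config_read(NULL, NULL, &config) != UCS_OK) {
        return EXIT_FAILURE;
    }

    /* Ask the UCP layer for tag-matching support; UCP selects the
     * underlying UCT transports on the application's behalf. */
    ctx_params.field_mask = UCP_PARAM_FIELD_FEATURES;
    ctx_params.features   = UCP_FEATURE_TAG;
    if (ucp_init(&ctx_params, config, &context) != UCS_OK) {
        ucp_config_release(config);
        return EXIT_FAILURE;
    }
    ucp_config_release(config);

    /* A worker encapsulates the communication progress engine. */
    wrk_params.field_mask  = UCP_WORKER_PARAM_FIELD_THREAD_MODE;
    wrk_params.thread_mode = UCS_THREAD_MODE_SINGLE;
    if (ucp_worker_create(context, &wrk_params, &worker) != UCS_OK) {
        ucp_cleanup(context);
        return EXIT_FAILURE;
    }

    printf("UCP context and worker initialized\n");

    ucp_worker_destroy(worker);
    ucp_cleanup(context);
    return EXIT_SUCCESS;
}
```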
The co-design methodology at the heart of the effort is intended to help chart a path to exascale computing. UCX alliance members hope the effort will provide not only a vehicle for production-quality software, but also a low-level research infrastructure for more flexible and portable support of exascale-ready programming models. See the diagram below for an overview.
“By providing our advancements in shared memory, MPI and underlying network transport technologies, we can continue to advance open standards-based networking and programming models,” said Gilad Shainer of Mellanox. “UCX will provide optimizations for lower software overhead in communication paths that will allow cross-platform, near native-level interconnect performance. The framework interface will expose semantics that target not only HPC programming models, but data-centric applications as well. It will also enable vendor-independent development of the library.”
UCX will serve as a high-performance, low-latency communication layer and, according to UCX organizers, will help provide application developers with productive, extreme-scale programming languages and libraries, including Partitioned Global Address Space (PGAS) APIs such as Fortran coarrays and OpenSHMEM, as well as OpenMP across multiple memory domains and on heterogeneous nodes.
What UCX is not, insists the group, is a device driver; it is instead a close-to-hardware API layer that provides access to the hardware’s capabilities and relies on drivers supplied by vendors. IBM indicated its intention to incorporate UCX work in its CORAL projects for DOE, suggesting that, among other things, the new effort is also an attempt to further extend the OpenPOWER ecosystem.
“UCX is clearly a strategic open-source communication framework for future high-performance systems,” said Jim Sexton, IBM Fellow and Director of Data Centric Systems, in the official press release accompanying the announcement. “We are eager to collaborate on UCX with our key OpenPOWER and university partners. In particular, IBM is contributing key innovations from our PAMI high-performance messaging software already in use in several Top 10 supercomputing systems.”
For NVIDIA, UCX could ease efforts to incorporate accelerators generally. “UCX is intended to make it faster and easier to add Tesla Accelerated Computing Platform technologies, including GPUDirect RDMA and the NVLink high-speed interconnect, to the HPC communications stack,” said Duncan Poole, director of Platform Alliances at NVIDIA. “We look forward to working with the UCX members to bring new levels of high performance computing solutions to HPC.” UCX would make it easier in general for system makers to incorporate NVIDIA GPUs.
The effort has been in the works for about a year, according to Pavel Shamis, a software developer at ORNL and spokesman at the press conference. He noted that ORNL’s work on OpenSHMEM – a communication library implementing a C and Fortran PGAS programming model with point-to-point and collective routines, synchronizations, and atomic operations – prompted ORNL to consider how best to expand the use of OpenSHMEM and build industry support around it.
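For readers unfamiliar with the programming model UCX is meant to underpin, a minimal OpenSHMEM one-sided transfer between two processing elements (PEs) looks roughly like the following. It uses the standard OpenSHMEM C API (shmem_init, shmem_malloc, shmem_long_put) and is offered only as an illustrative sketch, not as code from the UCX project itself.

```c
/* Minimal OpenSHMEM one-sided put: PE 0 writes into PE 1's memory. */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();                      /* join the PGAS job */
    int me   = shmem_my_pe();          /* this PE's rank */
    int npes = shmem_n_pes();          /* total number of PEs */

    /* Symmetric allocation: the same address is valid on every PE. */
    long *dst = shmem_malloc(sizeof(long));
    *dst = -1;
    shmem_barrier_all();

    /* One-sided put: no matching receive is posted on the target. */
    long src = 42;
    if (me == 0 && npes > 1) {
        shmem_long_put(dst, &src, 1, 1);
    }
    shmem_barrier_all();

    if (me == 1) {
        printf("PE 1 received %ld from PE 0\n", *dst);
    }

    shmem_free(dst);
    shmem_finalize();
    return 0;
}
```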
Traditionally there have been three popular mainstream communication frameworks to support various interconnect technologies and programming languages: MXM, developed by Mellanox Technologies; PAMI, developed by IBM; and UCCS, developed by ORNL, the University of Houston, and the University of Tennessee. UCX will unify the strengths and capabilities of each of these communication libraries and optimize them into one unified communication framework that delivers essential building blocks for the development of a high-performance communication ecosystem.
The UCX collaboration will be guided by a High-Performance Computing Leadership Team that includes: Dr. Arthur Bernard Maccabe, Division Director, Computer Science and Mathematics Division, Oak Ridge National Laboratory; Donald Becker, Tesla System Architect, NVIDIA; Dr. George Bosilca, Research Director at the Innovative Computing Laboratory, University of Tennessee; Richard Graham, Senior Solutions Architect, Mellanox Technologies; Dr. Sameer Kumar, Research Scientist, Deep Computing and High Performance Computing systems, IBM India Research Lab; Stephen Poole, CTO, Open Software System Solutions; Shainer from Mellanox Technologies; and Dr. Sameh Sharkawi, Team Lead, Parallel Environment MPI Middleware at IBM.
The UCX project at ORNL is funded by the United States Department of Defense and uses resources of the Extreme Scale Systems Center located at ORNL. This project is being developed using resources of the Oak Ridge Leadership Computing Facility at ORNL, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.