The oneAPI Center of Excellence will be led by the University of Utah’s Center for Extreme Data Management, Analysis and Visualization (CEDMAV) in cooperation with Lawrence Livermore National Laboratory’s Center for Applied Scientific Computing (CASC). It will accelerate the ZFP compression software using oneAPI’s open, standards-based programming model on multiple architectures to advance exascale computing.
Participants said the center’s efforts extend the two organizations’ long-standing collaboration on developing advanced data formats and layouts for efficient storage of, and access to, large-scale scientific data on high performance computing (HPC) architectures.
“The University of Utah’s CEDMAV, in collaboration with LLNL’s CASC, has been pioneering research in managing extreme data applications involving scientific simulations and experimental facilities,” said Manish Parashar, director of the Scientific Computing and Imaging Institute at the University of Utah. “This collaboration has a long track record of developing and deploying open-source scientific software that finds broad adoption in the communities of interest. This oneAPI Center of Excellence will strengthen this collaboration and help this academic research find practical adoption on multiarchitecture systems.”
Developed by LLNL, ZFP is state-of-the-art software for lossless and error-controlled lossy compression of floating-point data that is becoming a de facto standard in the HPC community, with numerous science and engineering applications and users. ZFP (de)compression is particularly amenable to data-parallel execution through its decomposition into small, independent data blocks, and parallel backends have been developed for OpenMP, CUDA, and HIP programming models, according to LLNL computer scientist Peter Lindstrom.
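The per-block independence Lindstrom describes is what makes data-parallel backends straightforward. The toy Python sketch below illustrates the idea only: it mirrors zfp’s partitioning into blocks of 4 values per dimension, but the `encode_block` codec is a hypothetical uniform quantizer standing in for zfp’s actual transform-based coding, and the function names are invented for this example.

```python
# Toy sketch of zfp-style block decomposition -- NOT the real zfp codec.
# zfp partitions a d-dimensional array into blocks of 4^d values; each block
# is (de)compressed independently, which is what makes data-parallel
# backends (OpenMP, CUDA, HIP, and a future SYCL port) straightforward.
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4  # zfp uses 4 values per dimension

def blocks_2d(data, nx, ny):
    """Yield (bx, by, values) for each 4x4 block of a row-major nx-by-ny array."""
    for by in range(0, ny, BLOCK):
        for bx in range(0, nx, BLOCK):
            vals = [data[(by + j) * nx + (bx + i)]
                    for j in range(BLOCK) for i in range(BLOCK)]
            yield bx, by, vals

def encode_block(vals, tolerance):
    """Hypothetical stand-in for the per-block codec: uniform quantization.
    Real zfp applies a decorrelating transform and embedded coding instead."""
    return [round(v / tolerance) for v in vals]

def decode_block(quantized, tolerance):
    return [q * tolerance for q in quantized]

def roundtrip(data, nx, ny, tolerance):
    """Encode every block concurrently, then decode and reassemble the field."""
    out = [0.0] * (nx * ny)
    with ThreadPoolExecutor() as pool:
        futures = [(bx, by, pool.submit(encode_block, vals, tolerance))
                   for bx, by, vals in blocks_2d(data, nx, ny)]
        for bx, by, fut in futures:
            vals = decode_block(fut.result(), tolerance)
            for j in range(BLOCK):
                for i in range(BLOCK):
                    out[(by + j) * nx + (bx + i)] = vals[j * BLOCK + i]
    return out

nx = ny = 8
field = [i / (nx * ny) for i in range(nx * ny)]
recon = roundtrip(field, nx, ny, tolerance=1e-3)
err = max(abs(a - b) for a, b in zip(field, recon))
assert err < 6e-4  # error bounded by half the quantization step (plus fp rounding)
```

Because no block reads another block’s data, each submitted task could equally be a GPU work-group, which is the mapping the existing OpenMP, CUDA, and HIP backends exploit and which a SYCL backend would presumably follow.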
“As lead of ZFP development, I am excited about this opportunity with our long-standing collaborators at the University of Utah to extend the capabilities of our ZFP compressor to run efficiently on next-generation supercomputers, including Argonne National Laboratory’s Aurora system, one of the world’s first exascale systems,” Lindstrom said. “The resulting compression software will allow large-scale scientific computing applications, among others, to effectively boost memory capacity and bandwidth while significantly reducing communication and I/O time and offline storage.”
With the ZFP development team at LLNL, the oneAPI Center of Excellence will develop a portable, scalable and performant SYCL-based ZFP backend that runs on accelerator architectures from multiple vendors, including Intel data center GPUs. As one of the software technologies selected by the Department of Energy’s (DOE) Exascale Computing Project (ECP), ZFP has been adopted by massively parallel simulations and technologies running on some of the world’s largest supercomputers, so the new backend will benefit several high-visibility scientific applications. Moreover, ZFP’s widespread adoption in industry and academia will help advance many large-scale data management technologies, including HDF5, ADIOS, OpenZGY, OpenVisus and Zarr.
The development of a high-performance SYCL port of ZFP on accelerator architectures supporting multiple vendors will benefit several high-visibility supercomputing applications and better showcase the power of an open, standards-based software ecosystem.
“University of Utah and Lawrence Livermore National Laboratory’s work developing a highly performant SYCL-based ZFP library aids the availability of large-scale scientific data for high performance computing architectures, enabling exascale applications to target multiple accelerator architectures,” said Scott Apeland, senior director of Intel Developer Ecosystem Programs. “This latest Center of Excellence will showcase how open, standards-based oneAPI development benefits the developer community.”
CEDMAV’s research approach stems from a systematic assessment of HPC application needs and how they lead to new investigation and innovation, followed by practical validation and deployment to broader communities. CEDMAV’s previous collaborations with LLNL include shared research projects, staff with dual appointments, student interns and postdocs.
“It is an honor for CEDMAV to establish this oneAPI Center of Excellence in collaboration with LLNL. This will give a great opportunity to solidify our collaboration and expand it with the support and collaboration of Intel engineers,” said Valerio Pascucci, CEDMAV founding director and former leader of the CASC Data Analysis group at LLNL. “It is exciting to see the emergence of the oneAPI programming model that we plan to fully embrace in this project. In particular, the SYCL cross-platform abstraction will tremendously increase the productivity of our teams in creating performant codes that run efficiently on modern, heterogeneous architectures. Diverse hardware-software architectures are becoming ubiquitous in high-performance systems, and the oneAPI technology will increase tremendously the impact of ZFP in a broad spectrum of applications.”
The University of Utah’s CEDMAV is internationally recognized for its activities involving theoretical and algorithmic research, systems development and tool deployment for dealing with extreme data. This research lies at the intersection of scientific visualization, big data management, HPC and data analytics.
The Center for Applied Scientific Computing serves as LLNL’s window to the broader computer science, computational physics, applied mathematics and data science research communities. With academic, industrial and other government laboratory partners, it conducts world-class scientific research and development on problems critical to national security.
oneAPI is an open, unified, cross-architecture programming model for CPUs and accelerator architectures (GPUs, FPGAs and others). Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code. With oneAPI, developers can choose the best architecture for the specific problem they are trying to solve without needing to rewrite software for the next architecture and platform.
Source: Jeremy Thomas, LLNL