Preparing for the Arrival of Aurora with CPU-based Interactive Visualization

By Rob Farber

October 30, 2018

In preparation for the arrival of Aurora, slated to be the first U.S. exascale supercomputer, Argonne National Laboratory is actively working to make techniques such as in situ and in transit visualization and analysis available to its user community and the HPC community at large. The result is a multi-institutional DOE effort, spanning Argonne, other national laboratories, and private companies, to leverage SENSEI, a portable framework for analysis and scalable interactive rendering that supports in situ, in transit, and traditional batch visualization workflows using either ray-tracing or triangle-based rendering back ends.

In situ visualization has been identified as a key technology to enable science at the exascale[i]. In situ visualization means that the visualization occurs on the same nodes that perform the computation. In transit visualization is less tightly coupled to the simulation: data moves to a separate set of nodes, which can help load balancing when supporting computationally expensive simulations like LAMMPS. Unlike in situ, in transit does incur some overhead when moving data across the communications fabric between nodes. Both methods keep the data in memory and avoid writing to storage.
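
To make the distinction concrete, here is a minimal sketch of the in situ pattern, assuming a hypothetical SimState structure and analysis hook rather than SENSEI's actual classes: the analysis runs on the simulation's own nodes and reads its live, in-memory buffers, so nothing is written to the file system.

```cpp
#include <functional>
#include <vector>

// Hypothetical simulation state: particle positions kept in memory.
struct SimState {
    std::vector<double> x, y, z;  // per-atom coordinates
    long timestep = 0;
};

// In situ: the analysis/visualization callback runs on the same nodes
// as the simulation, reading the live buffers with no copy to storage.
using AnalysisHook = std::function<void(const SimState&)>;

void run_simulation(SimState& state, long nsteps, long vis_interval,
                    const AnalysisHook& analyze) {
    for (long step = 0; step < nsteps; ++step) {
        // ... advance the simulation one timestep here ...
        state.timestep = step;
        if (step % vis_interval == 0) {
            analyze(state);  // in situ: zero-copy access to simulation data
        }
    }
}
```

An in transit variant of the same loop would instead ship the buffers across the fabric to a separate pool of analysis nodes at each visualization interval.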

Joseph Insley (visualization and analysis team lead at the Argonne Leadership Computing Facility) points out, “With SENSEI, users can utilize in situ and in transit techniques to address the widening gap between Flop/s and I/O capacity, which is making full-resolution, I/O-intensive post hoc analysis prohibitively expensive, if not impossible.” Silvio Rizzi (assistant computer scientist, Argonne) highlights portability when he states, “the idea behind SENSEI is to write once and use anywhere.”

The Argonne team, led by PI Nicola Ferrier, has adapted the popular LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code to demonstrate the benefits of the SENSEI framework. The integration of SENSEI made use of existing mechanisms in LAMMPS for coupling with other simulation codes.[ii]
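
For readers curious what such coupling looks like, the sketch below drives LAMMPS through its C library interface (library.h), one of the coupling mechanisms the LAMMPS documentation describes.[ii] The input script name is hypothetical, error handling is omitted, and exact function signatures vary between LAMMPS versions.

```cpp
#include <mpi.h>
#include "library.h"  // LAMMPS C library interface

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Create a LAMMPS instance that shares this communicator.
    char* lmp_args[] = {(char*)"liblammps", (char*)"-log", (char*)"none"};
    void* lmp = nullptr;
    lammps_open(3, lmp_args, MPI_COMM_WORLD, &lmp);

    // Set up the problem from an input script, then advance it in chunks.
    lammps_file(lmp, "in.silicene");  // hypothetical input script
    for (int chunk = 0; chunk < 100; ++chunk) {
        lammps_command(lmp, "run 50 pre no post no");

        // Borrow a pointer to this rank's per-atom coordinates; an in situ
        // analysis or an in transit send across the fabric would go here.
        double** x = static_cast<double**>(lammps_extract_atom(lmp, "x"));
        (void)x;  // placeholder: hand x off to the visualization layer
    }

    lammps_close(lmp);
    MPI_Finalize();
    return 0;
}
```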

Understanding the choice of LAMMPS as a SENSEI testbed

Paul Navrátil, director of Visualization at the Texas Advanced Computing Center (TACC), helps us understand the meaning and importance of in situ and in transit visualization to the general HPC community, as well as the ALCF team's choice of LAMMPS.

Just as Argonne will host the fastest U.S. supercomputer with Aurora, TACC will be home to Frontera, which will become the fastest academic supercomputer in the United States when it becomes operational in 2019.

Navrátil notes, “We expect in situ workflows to become increasingly necessary on Frontera and across all large-scale simulation science.” He believes that, “In transit analysis will also play an increasing role as simulations improve support for loosely-coupled in situ frameworks. With an in transit pathway, the simulation resources do not need to be shared for analysis tasks, which is favorable when analysis is compute-intensive, or when the simulation requires all available resources itself.”

LAMMPS is a compute-intensive application and a very popular simulation code, which makes it a natural testbed for SENSEI: it lets large numbers of users explore the benefits of in situ visualization as well as the load-balancing benefits of in transit visualization and analysis (a minimal sketch of the in transit pattern follows below). SENSEI is also being used in multiple science domains, including molecular dynamics and materials science.
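
The load-balancing idea behind in transit can be illustrated with plain MPI. The sketch below is generic (it is not libIS or SENSEI code): the global communicator is split into simulation ranks and a smaller pool of renderer ranks, and each simulation rank periodically ships its particle buffer across the fabric, which is exactly the overhead in transit trades for dedicated analysis resources. The 1:8 rank ratio and buffer size are illustrative assumptions.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Assumption: roughly one renderer rank per eight simulation ranks,
    // echoing the 128-to-16 ratio of the Theta run shown in Figure 2.
    const int n_render = (size / 9 > 0) ? size / 9 : 1;
    const bool is_renderer = (rank < n_render);

    // Give each side its own communicator for internal coordination.
    MPI_Comm side;
    MPI_Comm_split(MPI_COMM_WORLD, is_renderer ? 0 : 1, rank, &side);

    const int n_atoms = 1000;                  // stand-in problem size
    std::vector<float> xyz(3 * n_atoms, 0.f);  // packed x,y,z per atom

    if (!is_renderer) {
        // Simulation side: ship this rank's particle buffer across the
        // fabric to its assigned renderer (this copy is the in transit cost).
        MPI_Send(xyz.data(), (int)xyz.size(), MPI_FLOAT,
                 rank % n_render, 0, MPI_COMM_WORLD);
    } else {
        // Renderer side: receive one buffer from each assigned simulation
        // rank, then hand the data to the rendering back end (omitted).
        int expected = 0;
        for (int s = n_render; s < size; ++s)
            if (s % n_render == rank) ++expected;
        for (int i = 0; i < expected; ++i)
            MPI_Recv(xyz.data(), (int)xyz.size(), MPI_FLOAT,
                     MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Comm_free(&side);
    MPI_Finalize();
    return 0;
}
```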

An in transit workflow using SENSEI and OSPRay is shown below.

Figure 1: LAMMPS using SENSEI to execute an in transit visualization and analysis workflow. (Image courtesy of Usher, et al.[iii])

Choosing the right rendering back end

SENSEI is very flexible: it allows researchers to perform analysis and render with either OpenGL or photorealistic ray tracing. Jim Jeffers (senior director and senior principal engineer, Intel Visualization Solutions) notes that the interactive performance delivered by the Intel Rendering Framework and photorealistic rendering with the freely available OSPRay library and viewer “addresses the need and creates the want” for photorealistic rendering. Succinctly, interactive ray tracing with its inherent lighting capability lets scientists get more from their data. Jeffers is famous for stating, “a picture is worth an exabyte.”
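
To give a flavor of the ray-traced path, here is a minimal sketch against the OSPRay 2.x C API that renders a few atoms as spheres with the CPU-based scientific-visualization renderer. Note that the work described in this article predates that API revision, and parameter names differ between OSPRay versions, so treat this as illustrative rather than authoritative.

```cpp
#include <ospray/ospray.h>
#include <ospray/ospray_util.h>
#include <vector>

int main(int argc, char** argv) {
    ospInit(&argc, (const char**)argv);  // CPU-based: no GPU required

    // A few "atoms" as sphere centers (stand-in for simulation data).
    std::vector<float> centers = {0.f,0.f,0.f, 1.f,0.f,0.f, 0.f,1.f,0.f};
    OSPData pos = ospNewSharedData1D(centers.data(), OSP_VEC3F, 3);
    OSPGeometry spheres = ospNewGeometry("sphere");
    ospSetObject(spheres, "sphere.position", pos);
    ospSetFloat(spheres, "radius", 0.3f);
    ospCommit(spheres);

    // Wrap the geometry in a model, group, instance, and world.
    OSPGeometricModel model = ospNewGeometricModel(spheres);
    ospCommit(model);
    OSPGroup group = ospNewGroup();
    ospSetObjectAsData(group, "geometry", OSP_GEOMETRIC_MODEL, model);
    ospCommit(group);
    OSPInstance instance = ospNewInstance(group);
    ospCommit(instance);

    OSPWorld world = ospNewWorld();
    ospSetObjectAsData(world, "instance", OSP_INSTANCE, instance);
    OSPLight light = ospNewLight("ambient");
    ospCommit(light);
    ospSetObjectAsData(world, "light", OSP_LIGHT, light);
    ospCommit(world);

    OSPCamera camera = ospNewCamera("perspective");
    ospSetFloat(camera, "aspect", 1.f);
    ospSetVec3f(camera, "position", 0.f, 0.f, -4.f);
    ospSetVec3f(camera, "direction", 0.f, 0.f, 1.f);
    ospSetVec3f(camera, "up", 0.f, 1.f, 0.f);
    ospCommit(camera);

    OSPRenderer renderer = ospNewRenderer("scivis");  // ray-traced renderer
    ospCommit(renderer);

    OSPFrameBuffer fb = ospNewFrameBuffer(512, 512, OSP_FB_SRGBA, OSP_FB_COLOR);
    ospRenderFrameBlocking(fb, renderer, camera, world);  // render on the CPU

    // Pixels are now in memory, ready for display or writing to disk.
    const void* pixels = ospMapFrameBuffer(fb, OSP_FB_COLOR);
    ospUnmapFrameBuffer(pixels, fb);

    ospShutdown();
    return 0;
}
```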

The ALCF team provided the following figure to illustrate what is possible when instrumenting LAMMPS with SENSEI. They used the Intel OSPRay library, part of the Intel Rendering Framework, and libIS, a lightweight, flexible library, to create this in transit visualization. However, SENSEI was designed[iv] to work with other libraries in place of libIS, such as Catalyst (part of ParaView), ADIOS (from Oak Ridge National Laboratory), and Libsim (part of VisIt), as well as GPU-based software to perform in transit visualizations.

Figure 2: Interactive in transit visualization of a 172k atom simulation of silicene formation with 128 LAMMPS ranks sending to 16 renderer ranks, all executed on Theta. (Image from Usher, et al.[v])

SENSEI is not the first code to provide easy access to both OpenGL and ray-tracing back ends along with analytic capabilities. Both the popular VisIt[vi] and ParaView viewers make it simple to switch between or even combine triangle-based OpenGL rendering with Intel OpenSWR and photorealistic ray-traced rendering with Intel OSPRay.

Understanding Software Defined Visualization (SDVis)

The foundation of CPU-based in situ and in transit visualization is Software Defined Visualization. The core functionality is provided by the freely available, open-source Intel OSPRay, Embree, and OpenSWR libraries. These libraries have been incorporated into the Intel® Rendering Framework stack as shown below.

Figure 3: Scientific and professional rendering stacks using the Intel Rendering Framework. (Image courtesy of Intel)

Using CPUs for rendering has taken the HPC community by storm. Rizzi summarizes the interest at Argonne by noting, “We want to enable visualization on our supercomputers, which are CPU-based.” Navrátil highlights TACC's commitment by pointing out that “CPU-based SDVis will be our primary visual analysis mode on Frontera, leveraging the Intel Rendering Framework stack.”

Scaling and the ability to run efficiently are two key ideas behind the OSPRay ray-tracing and OpenSWR OpenGL SDVis renderers.

Kitware, for example, performed trillion-triangle OpenGL visualizations using the LANL Trinity supercomputer. David DeMarle (visualization luminary and engineer at Kitware) observes that, “CPU-based OpenGL performance does not trail off even when rendering meshes containing one trillion (10^12) triangles on the Trinity leadership class supercomputer. Further, we might see a 10-20 trillion triangle per second result as our current benchmark used only 1/19th of the machine.” The ability of the CPU to access large amounts of memory is key to realizing this trillion-triangle rendering capability.
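
A back-of-the-envelope calculation, using illustrative numbers rather than Kitware's benchmark data, shows why memory capacity rather than compute tends to gate trillion-triangle rendering.

```cpp
#include <cstdio>

int main() {
    // Assume an unindexed mesh: 3 vertices per triangle, 3 floats per vertex.
    const double bytes_per_triangle = 3 * 3 * sizeof(float);  // 36 bytes
    const double triangles = 1e12;                            // one trillion
    const double total_tb = bytes_per_triangle * triangles / 1e12;
    // ~36 TB of raw geometry: far beyond any single accelerator's memory,
    // but well within the aggregate DRAM of a leadership-class CPU machine.
    std::printf("geometry footprint: %.0f TB\n", total_tb);
    return 0;
}
```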

Meanwhile, OSPRay users have demonstrated the ability to render and visualize large, photorealistic images on everything from cosmological data sets to molecules and complex scenes. No special hardware is required for rendering, which can achieve interactive photorealism on as few as eight Intel Xeon Platinum 8180 processors or scale out across in situ nodes for high-quality rendering. [vii] [viii] [ix] [x]

Viewing the rendered images

The “visualize anywhere” nature of CPU-based SDVis means that visualization is possible locally or remotely on any device that can display an image from memory. This extraordinary display flexibility, free of device dependencies, makes “visualize anywhere” even better. HPC users appreciate that they can view results on their laptops and then switch to display walls or a CAVE.

SENSEI also supports existing batched save-to-storage workflows.

Summary

The HPC community has always been about pressing the limits of computation. For this reason, in situ and in transit visualization frameworks have been created to work with CPU-based rendering to eliminate costly data movement to storage. In this way, visualization can scale and keep pace with simulation as the HPC community runs on petascale systems and anticipates the next generation of exascale supercomputers.

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. Rob can be reached at [email protected].


[i] https://science.energy.gov/~/media/ascr/pdf/program-documents/docs/Exascale-ASCR-Analysis.pdf

[ii] https://lammps.sandia.gov/doc/Howto_couple.html

[iii] Will Usher, Silvio Rizzi, Ingo Wald, Jefferson Amstutz, Joseph Insley, Venkatram Vishwanath, Nicola Ferrier, Michael E. Papka, and Valerio Pascucci. 2018. libIS: A Lightweight Library for Flexible In Transit Visualization. In ISAV: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization (ISAV ’18), November 12, 2018, Dallas, TX, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3281464.3281466.

[v] https://doi.org/10.1145/3281464.3281466

[vi] https://tacc.github.io/visitOSPRay/

[vii] http://sdvis.org/

[viii] http://www.cgw.com/Press-Center/In-Focus/2018/Scalable-CPU-Based-SDVis-Enables-Interactive-Pho.aspx

[ix] https://www.ixpug.org/documents/1496440983IXPUG_insitu_S1_Jeffers.pdf

[x] http://www.techenablement.com/third-party-use-cases-illustrate-the-success-of-cpu-based-visualization/
