Chameleon’s HPC Testbed Sharpens Its Edge, Presses ‘Replay’

By Oliver Peckham

July 22, 2021

“One way of saying what I do for a living is to say that I develop scientific instruments,” said Kate Keahey, a senior fellow at the University of Chicago and a computer scientist at Argonne National Laboratory, as she opened her session at Supercomputing Frontiers Europe 2021 this week. Keahey was there to talk about one tool in particular: Chameleon, a testbed for computer science research run by the University of Chicago, the Texas Advanced Computing Center (TACC), UNC-Chapel Hill’s Renaissance Computing Institute (RENCI) and Northwestern University.

Computational camouflage

The name, Keahey explained, was no accident. “We developed an environment whose main property is the ability to change, and the way it changes is it adapts itself to your experimental requirements,” she said. “So in other words, you can reconfigure the resources on this environment completely, at a bare metal level. You can allocate bare metal nodes which you then, later on, reconfigure. You can boot them from a custom kernel, you can turn them on, turn them off, you can access the serial console. So this is a good platform for developing operating systems, virtualization solutions and so forth.”
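
In practice, that reserve-then-reconfigure workflow is scripted against Chameleon’s OpenStack services. Below is a minimal sketch of the flow using python-chi, the helper library the project documents for driving the testbed from scripts or notebooks; the exact helper names, the node type and the image name are assumptions for illustration rather than details from the talk.

```python
# Hedged sketch of the "reserve bare metal, then reconfigure it" workflow,
# assuming the python-chi helper library Chameleon documents for scripting
# its OpenStack services. Helper names, node type, image name and project ID
# below are illustrative assumptions, not taken from the talk.
import chi
from chi import lease, server

chi.use_site("CHI@TACC")               # one of Chameleon's two core sites
chi.set("project_name", "CHI-000000")  # hypothetical project/allocation ID

# Reserve one bare-metal node in advance.
reservations = []
lease.add_node_reservation(reservations, count=1, node_type="compute_skylake")
my_lease = lease.create_lease("bare-metal-demo", reservations=reservations)
lease.wait_for_active(my_lease["id"])

# Provision the reserved node with an image; once booted, the node can be
# re-imaged with a custom kernel, rebooted, or inspected over its serial console.
node = server.create_server(
    "demo-node",
    reservation_id=lease.get_node_reservation(my_lease["id"]),
    image_name="CC-Ubuntu20.04",       # could equally be a custom image
)
server.wait_for_active(node.id)
print("Bare-metal node ready:", node.id)
```

The same steps can also be driven through the standard OpenStack command-line clients, since Chameleon exposes its reservation and provisioning services through ordinary OpenStack APIs.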

This flexibility is backed by similarly scalable and diverse hardware, spread across two sites: one at the University of Chicago, one at TACC. Having begun as ten racks of Intel Haswell-based nodes and 3.5 petabytes of storage, Chameleon is now powered by over 15,000 cores (including Skylake and Cascade Lake nodes) and six petabytes of storage, encompassing larger homogeneous partitions as well as an array of different architectures, accelerators, networking hardware and much, much more.

Chameleon’s current hardware. Image courtesy of Kate Keahey.

Chameleon, which is built on the open-source cloud computing platform OpenStack, has been available to its users since 2015 and has had its resources extended through 2024. It supports more than 5,500 users, 700 projects and 100 institutions, and those users have produced more than 300 publications with it. Keahey highlighted research uses ranging from modeling of intrusion attacks to virtualization-containerization comparisons, all enabled by the testbed’s accessible and diverse hardware and software.

So: what’s new, and what’s next?

Sharpening Chameleon’s edge

To answer that question, Keahey turned to another use case: federated learning research by Zheng Chai and Yue Cheng from George Mason University. Those researchers, Keahey explained, had been using Chameleon for research involving edge devices, but since Chameleon had no edge devices, they were emulating them rather than experimenting on real hardware.

“That made us realize that what we needed to do was extend our cloud testbed to the edge,” Keahey said.

There was, of course, disagreement over what a true “edge testbed” would look like: some, Keahey explained, thought it should look a lot like a cloud system separated via containers; others thought it should look nothing like a cloud system at all, and that location and the ensuing limitations of location (such as access, networking and power management) were paramount to a genuine edge testbed experience.

In the end, the Chameleon team developed CHI@Edge (with “CHI” ostensibly standing in for “Chameleon infrastructure,” rather than Chicago), aiming to incorporate the best of both worlds. CHI@Edge applies a mixed-ownership model, wherein the Chameleon infrastructure loops in a variety of in-house edge devices, but users are also able to add their own edge devices to the testbed via an SDK and access those devices via a virtual site. Those devices can even be shared (though privacy is the default). Other than that, the end result – for now – has much in common with Chameleon’s prior offerings: both have advanced reservations; both have single-tenant isolation; both have isolated networking and public IP capabilities.
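
Working with CHI@Edge is meant to feel much like working with the core sites, except that workloads land on the reserved edge device as containers. The sketch below assumes the container helpers python-chi exposes for CHI@Edge; the device type, container image and project ID are illustrative assumptions rather than details from the talk.

```python
# Hedged sketch of reserving an edge device and running a container on it
# through CHI@Edge, assuming python-chi's device and container helpers.
# Device type, image and project ID are illustrative assumptions.
import chi
from chi import container, lease

chi.use_site("CHI@Edge")
chi.set("project_name", "CHI-000000")  # hypothetical allocation

# Reserve a single edge device (for example, a Raspberry Pi class node).
reservations = []
lease.add_device_reservation(reservations, count=1, machine_name="raspberrypi4-64")
edge_lease = lease.create_lease("edge-demo", reservations=reservations)
lease.wait_for_active(edge_lease["id"])

# Run a container on the reserved device; networking and any attached
# peripherals are exposed according to the device owner's sharing settings.
my_container = container.create_container(
    "edge-demo-container",
    image="python:3-slim",
    reservation_id=lease.get_device_reservation(edge_lease["id"]),
    command=["python", "-c", "print('hello from the edge')"],
)
container.wait_for_active(my_container.uuid)
print("Edge container running:", my_container.uuid)
```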

Image courtesy of Kate Keahey.

“We’re going from running in a datacenter, where everything is secured, to running in a wide area – to running on devices that people have constructed on their kitchen tables and that are also connected to various IoT devices,” Keahey said. This, she explained, brought with it familiar challenges: access, security, resource management and, in general, the attendant complications of any shared computational resource. But there were also unfamiliar challenges, such as incorporating remote locations beyond Chameleon’s two major sites, coping with power and networking constraints and meaningfully integrating peripheral devices. The researchers adapted OpenStack, which already supported containerization, to meet these challenges.

Pressing “replay” on experiments

As Chameleon moves into the future – and as both cloud computing and heterogeneity become status quo for HPC – Keahey is also looking at exploiting Chameleon’s advantages to offer services out of reach of most on-premises computing research.

“Can we make the digital representation of user experiments shareable?” Keahey asked. “And can we make them as shareable as papers are today?” She explained that she was able to read papers describing experiments, but that rerunning the experiments themselves was out of reach. This, she said, limited researchers’ ability not only to reproduce those experiments, but also to tinker with important hardware and software variables that might affect the outcomes.

If you’re working in a lab with local systems, making experiments shareable is a tall order – but for a public testbed like Chameleon, Keahey noted, the barrier to entry was much lower: users seeking to reproduce an experiment could access the same hardware as the researcher – or even the same specific node – if the experiment was run on Chameleon. And Chameleon, she said, had access to fine-grained hardware version logs accompanied by hundreds of thousands of system images and tens of thousands of orchestration templates.

So the team made it happen, developing Trovi, an experiment portal for Chameleon that allows users to create a packaged experiment out of any directory of files on a Jupyter server. Trovi, which Keahey said “functions a little bit like a Google Drive for experiments,” supports sharing, and any user with a Chameleon allocation can effectively “replay” the packaged experiments. Keahey explained that the team was even working on ways to uniformly reference these experiment packages – which would allow users to embed links to experiments in their papers – and that some of this functionality was in the works for SC21 in a few months.

By the end, Keahey had painted a picture of Chameleon as a tool living up to its name by adapting to a rapidly shifting scientific HPC landscape. “Building scientific instruments is difficult because they have to change with the science itself, right?” she said.

As if in response, the slide showed Chameleon’s motto: “We’re here to change. Come and change with us!”
