Cloud Computing Testbed Chameleon Renewed for Second Phase

September 5, 2017

Sept. 5 — Cloud computing lies behind many of today’s most popular technologies, from streaming video and music to e-mail and chat services to storing and sharing family photos. Since 2015, the Chameleon testbed has helped researchers push the potential of cloud computing even further, finding novel scientific applications and improving security and privacy.

A new grant from the National Science Foundation will extend Chameleon’s mission for another three years, allowing the project, led by the University of Chicago with partners at the Texas Advanced Computing Center (TACC), the Renaissance Computing Institute (RENCI), and Northwestern University, to enter its next phase of cloud computing innovation. Upgrades to hardware and services, as well as new features, will help scientists rigorously test new cloud computing platforms and networking protocols.

The $10 million renewal will be officially announced at the inaugural Chameleon User Meeting, taking place September 13-14 at Argonne National Laboratory.

“In phase one we built a testbed, but in phase two we’re going to transform this testbed into a scientific instrument,” said Kate Keahey, Argonne computer scientist, Computation Institute fellow, and Chameleon project PI. “We’re going to extend the capabilities that allow users to keep a record of their experiments in Chameleon and provide new services that allow them to build more repeatable experiments.”

The new features build upon the project’s original philosophies of flexibility and transparency, which provided users with a large-scale, ~600-node cloud infrastructure with bare metal reconfiguration privileges. This unique level of access allows researchers to go beyond limited development on existing commercial or scientific clouds, offering a customizable platform to create and test entirely new cloud computing architectures.
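For a concrete picture of what bare metal reconfiguration looks like in practice, consider the following minimal, hypothetical sketch. It assumes the open-source openstacksdk Python client, a clouds.yaml entry named "chameleon", and an advance reservation already created through OpenStack Blazar; the lease ID, image, flavor, and network names are placeholders, not documented Chameleon values.

```python
# Hypothetical sketch: launching a bare metal node on an OpenStack-based
# testbed. Assumes `pip install openstacksdk` and a clouds.yaml entry
# named "chameleon"; the lease ID and resource names are placeholders.
import openstack

conn = openstack.connect(cloud="chameleon")  # reads clouds.yaml credentials

LEASE_ID = "REPLACE-WITH-RESERVATION-ID"  # lease created in advance

# On a bare metal cloud the "flavor" maps to a whole physical node rather
# than a virtual machine size; the scheduler hint ties the instance to
# the reservation.
server = conn.create_server(
    name="my-experiment-node",
    image="CC-CentOS7",        # placeholder image name
    flavor="baremetal",        # placeholder flavor name
    network="sharednet1",      # placeholder network name
    scheduler_hints={"reservation": LEASE_ID},
    wait=True,                 # block until the node is ACTIVE
)
print(server.name, server.status)
```

Once the node is active, the researcher has full control of the physical machine, down to the operating system and firmware settings, which is what distinguishes this model from renting virtual machines on a commercial cloud.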

In its first phase, this powerful resource supported advanced work in computer science areas such as cybersecurity, OS design, and power management. With Chameleon, scientists could realistically simulate cyberattacks on cloud computing systems to improve their defenses, train students to search high-resolution telescope images for undiscovered exoplanets, and develop machine learning algorithms that automatically determine the most energy-efficient task assignment schemes for large data centers.

Many of these projects benefited from Chameleon features that allowed them to extract detailed, precise data about system performance and status while their experiments ran. To further support reproducible science, the Chameleon team will make it even easier for scientists to gather and use this information.

“Everything in the testbed is a recorded event, but right now the information about those events is in various different places,” Keahey said. “We’re going to make it very easy for users to have a record of everything that was happening on the testbed resources that they used, and we’ll also provide services to replay those experiments.”
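As a toy illustration of the idea Keahey describes (and emphatically not Chameleon's actual implementation), an experiment record can be thought of as a timestamped event log that a replay service walks back through in order:

```python
# Toy illustration of the record-and-replay idea; not Chameleon's actual
# implementation, just the concept in miniature.
import json
import time


def record_event(log, action, **details):
    """Append a timestamped event to an experiment log."""
    log.append({"time": time.time(), "action": action, "details": details})


def replay(log, handlers):
    """Re-run a recorded experiment by dispatching each event in order."""
    for event in sorted(log, key=lambda e: e["time"]):
        handlers[event["action"]](**event["details"])


log = []
record_event(log, "provision", node_type="compute", count=2)
record_event(log, "run", command="./benchmark --size large")

# Persist the record alongside the results so others can reproduce the run.
with open("experiment.json", "w") as f:
    json.dump(log, f, indent=2)

replay(log, handlers={
    "provision": lambda **d: print("provisioning:", d),
    "run": lambda **d: print("running:", d),
})
```

In the real testbed the events would come from the provisioning and monitoring systems rather than from user code, but the record-then-replay structure is the same.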

Additional phase two milestones include new hardware: additional racks at UChicago and TACC, an infusion of highly contested resources such as GPUs, and Corsa network switches. The new Corsa switches enable experimentation with software-defined networking (SDN) within a Chameleon site, and allow individual SDN experiments to extend across the wide area to include resources from either Chameleon site or even from other compatible testbeds, such as NSF GENI.
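To make the SDN capability concrete: experiments of this kind typically run a custom OpenFlow controller against the switches. The sketch below is a generic, minimal controller written with the open-source Ryu framework; it illustrates the class of experiment rather than anything Chameleon-specific.

```python
# Generic, minimal OpenFlow 1.3 controller using the open-source Ryu
# framework (pip install ryu); illustrative only, not Chameleon code.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SimpleHub(app_manager.RyuApp):
    """Floods every packet: the 'hello world' of SDN controllers."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Install a table-miss rule so the switch sends unmatched
        # packets to the controller instead of dropping them.
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(),
                                      instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Flood each packet the switch hands us out of every port.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        out = parser.OFPPacketOut(
            datapath=dp,
            buffer_id=msg.buffer_id,
            in_port=msg.match["in_port"],
            actions=[parser.OFPActionOutput(ofp.OFPP_FLOOD)],
            data=msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None,
        )
        dp.send_msg(out)
```

Run with `ryu-manager hub.py` and point an OpenFlow 1.3 switch at the controller; a real experiment would replace the flood-everything logic with the routing behavior under study.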

New hardware will be complemented by new capabilities allowing users to define entirely new classes of experiments. For the second phase, new team members from RENCI with significant expertise in developing such capabilities will join the existing Chameleon team based at UChicago, TACC, and Northwestern University.

On the software side, the Chameleon team will package CHI (CHameleon Infrastructure), the software that operates Chameleon, which is based primarily on the open-source OpenStack project to which the University of Chicago team has made substantial contributions. Packaging the Chameleon operational model will allow others to easily create their own experimental clouds.

“Whether somebody wants to provide a Chameleon resource or create their own experimental testbed, CHI will make it very easy for them,” Keahey said. “It is based on a widely used open-source system that is increasingly popular in scientific data centers and thus easy to adopt. Ultimately, we would like to make testbeds for computer science research cost-effective to operate.”

The project will also look to expand its community through outreach events, including workshops, online tutorials, and September’s user meeting. In addition to training scientists in the use of Chameleon and gathering feedback for future improvements, these events will also be an opportunity to define the future of cloud computing science, Keahey said.

“We would like to go beyond simply providing resources, and give the community the opportunity to focus on experimental methodology in computer science: how to improve it, how to control it, and how to make experimental computer science less about logistics and more about the science.”


Source: Computation Institute & The University of Chicago
