NSF Announces Two $10M Projects to Create Cloud Computing Testbeds

August 26, 2014

Aug. 26 — The National Science Foundation (NSF) has announced two $10 million projects to create cloud computing testbeds, to be called “Chameleon” and “CloudLab,” that will enable the academic research community to develop and experiment with novel cloud architectures and pursue new, architecturally enabled applications of cloud computing.

Cloud computing refers to the practice of using a network of remote servers to store, manage and process data, rather than a local server or a personal computer. In recent years, cloud computing has become the dominant method of providing computing infrastructure for Internet services.

While most of the original concepts for cloud computing came from the academic research community, as clouds grew in popularity, industry drove much of the design of their architecture. Today’s awards complement industry’s efforts and will enable academic researchers to experiment with and advance cloud computing architectures that can support a new generation of innovative applications, including real-time and safety-critical applications like those used in medical devices, power grids, and transportation systems.

These new projects, part of the NSF CISE Research Infrastructure: Mid-Scale Infrastructure – NSFCloud program, continue the agency’s legacy of supporting cutting-edge networking research infrastructure.

“Just as NSFNet laid some of the foundations for the current Internet, we expect that the NSFCloud program will revolutionize the science and engineering for cloud computing,” said Suzi Iacono, acting head of NSF’s Directorate for Computer and Information Science and Engineering (CISE). “We are proud to announce support for these two new projects, which build upon existing NSF investments in the Global Environment for Network Innovations (GENI) testbed and promise to provide unique and compelling research opportunities that would otherwise not be available to the academic community.”

Chameleon

The first of the NSFCloud projects will support the design, deployment and initial operation of “Chameleon,” a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin.

Chameleon will consist of 650 cloud nodes with 5 petabytes of storage. Researchers will be able to configure slices of Chameleon as custom clouds using pre-defined or custom software to test the efficiency and usability of different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction.

The testbed will support “bare-metal access,” an alternative to the virtualization technologies currently used to share cloud hardware, enabling experimentation with new virtualization technologies that could improve reliability, security and performance.
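To make the reconfiguration model concrete, the sketch below shows how a researcher might reserve bare-metal nodes and load a custom software stack through a testbed’s provisioning service. It is a minimal illustration only: the endpoint, token and JSON fields are hypothetical placeholders, not Chameleon’s published interface, which is not described in this announcement.

    import requests

    # Hypothetical provisioning endpoint; Chameleon's real API is not described here.
    API = "https://testbed.example.org/v1"
    AUTH = {"Authorization": "Bearer <your-token>"}

    # Reserve physical nodes and install a custom software stack on them.
    lease = requests.post(
        f"{API}/leases",
        headers=AUTH,
        json={
            "node_type": "bare_metal",     # no hypervisor between experiment and hardware
            "count": 4,                    # number of physical nodes in the slice
            "image": "custom-hypervisor",  # pre-defined or user-supplied disk image
            "hours": 24,                   # lease duration
        },
        timeout=30,
    )
    lease.raise_for_status()
    print("Reserved nodes:", lease.json().get("nodes"))

Because the lease hands over the raw machine rather than a virtual machine, an experimenter could boot a custom hypervisor image like the one requested above and measure its behavior directly against the hardware.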

“Like its namesake, the Chameleon testbed will be able to adapt itself to a wide range of experimental needs, from bare-metal reconfiguration to support for ready-made clouds,” said Kate Keahey, a scientist at the Computation Institute at the University of Chicago and principal investigator for Chameleon. “Furthermore, users will be able to run those experiments on a large scale, critical for big data and big compute research. But we also want to go beyond the facility and create a community where researchers will be able to discuss new ideas, share solutions that others can build on, or contribute traces and workloads representative of real-life cloud usage.”

One aspect that makes Chameleon unique is its support for heterogeneous computer architectures, including low-power processors, graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), as well as a variety of network interconnects and storage devices. Researchers can mix and match hardware, software and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, which integrates computation into physical infrastructure. The research team plans to add new capabilities in response to community demand or when innovative new products are released.

Other partners on the Chameleon project, with their primary areas of expertise, are The Ohio State University (high-performance interconnects), Northwestern University (networking) and the University of Texas at San Antonio (outreach).

CloudLab

The second NSFCloud project supports the development of “CloudLab,” a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds. Each site will have unique hardware, architecture and storage features, and will connect to the others via 100-gigabit-per-second links on Internet2’s advanced platform, supporting OpenFlow (an open standard that lets researchers run experimental protocols in campus networks) and other software-defined networking technologies.
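OpenFlow’s match-action model can be illustrated with a short controller program. The sketch below uses Ryu, an open-source Python OpenFlow framework; it is a generic example rather than anything CloudLab-specific, and the topology it assumes (traffic entering a switch on port 1 and leaving on port 2) is purely illustrative.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class PortForwarder(app_manager.RyuApp):
        """Install one static forwarding rule on every switch that connects."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            # Match packets arriving on physical port 1...
            match = parser.OFPMatch(in_port=1)
            # ...and forward them out physical port 2.
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(
                dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, priority=100, match=match, instructions=inst))

Such an application would be started with Ryu’s launcher (ryu-manager forwarder.py) and pointed at a switch speaking OpenFlow 1.3; the ability to rewrite forwarding behavior in a few lines of controller code is what makes software-defined networks attractive for experimental infrastructure.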

“Today’s clouds are designed with a specific set of technologies ‘baked in’, meaning some kinds of applications work well in the cloud, and some don’t,” said Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab. “CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility and scientific fidelity. CloudLab will help researchers develop clouds that enable new applications with direct benefit to the public in areas of national priority such as real-time disaster response or the security of private data like medical records.”

In total, CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage across its three data centers. Each center will house different hardware, facilitating additional experimentation. To that end, the team is partnering with three vendors (HP, Cisco and Dell) to provide diverse, cutting-edge platforms for research. Like Chameleon, CloudLab will feature bare-metal access. CloudLab is expected to run dozens of virtual experiments simultaneously and, over its lifetime, to support thousands of researchers.

Other partners on CloudLab include Raytheon BBN Technologies, the University of Massachusetts Amherst and US Ignite, Inc.

Each team is led by researchers with extensive experience deploying experimental cloud computing systems. Ricci and the CloudLab team have successfully operated Emulab since 2000, providing a network testbed where researchers can develop, debug and evaluate their systems in a wide range of environments. The Chameleon team includes several members of FutureGrid, an NSF-supported testbed that lets researchers experiment with the use and security of grids and clouds.

Ultimately, the goal of the NSFCloud program and the two new projects is to advance the field of cloud computing broadly. The awards announced today are the first step toward that goal: the teams will develop new concepts, methods and technologies for infrastructure design, carry out ramp-up activities and demonstrate readiness for full-fledged execution. In the second phase of the program, each cloud resource will become fully staffed and operational, fulfilling its proposed mission of serving as a testbed used extensively by the research community.

Source: National Science Foundation
