Sept. 27 — In the world of advanced computing, computer scientists routinely rely on supercomputers to explore new technologies. Without them, the field would struggle to develop better, more efficient algorithms, methods, and tools. However, supercomputing environments often run locked-down systems and software that users cannot modify to build new, customized environments.
As cloud computing has rapidly emerged as a flexible computing paradigm, the academic research community has needed a system of its own for developing and experimenting with novel cloud architectures and for pursuing new applications of cloud computing in customized environments. Chameleon, launched in 2015, was designed to do just that.
In conjunction with the University of Chicago and the Texas Advanced Computing Center (TACC), the National Science Foundation (NSF) funded Chameleon, TACC’s first system focused on cloud computing for computer science research. The $10 million system is an experimental testbed for cloud architecture and applications, built specifically for the computer science domain.
“Cloud computing infrastructure provides great flexibility in being able to dynamically reconfigure all or parts of a computing system so that it can best suit the needs of the applications and users,” said Derek Simmel, a 15-year veteran of the Advanced Systems Group at the Pittsburgh Supercomputing Center (PSC). “With this flexibility, however, comes considerable complexity in monitoring and managing the resources, and in determining how best to provision them. This is where having an experimental facility like Chameleon really helps.”
Simmel is also an XSEDE (Extreme Science and Engineering Discovery Environment) expert who works on PSC’s Bridges, an NSF-funded XSEDE resource for empowering new research communities and bringing together high performance computing (HPC) and Big Data. Bridges operates in part as a cluster but also has the ability to provide cloud resources including virtual machines (VMs) and other dynamically configurable computational resources.
According to Simmel, Bridges posed new challenges because it is a non-traditional system deployed using OpenStack.
“The cloud infrastructure software itself (OpenStack) is also evolving rapidly, as computer scientists work to improve and expand its capabilities,” Simmel said. “Keeping up with new developments and changes in the way one operates all the component cloud services is a considerable burden to cloud system operators — the learning curve remains fairly steep, and all the expertise required for a traditional computing facility needs to be available for cloud-provisioned systems as well.”
Managing cloud computing infrastructure is as complex as managing an entire supercomputing machine room — all the software and services required for computing, networking, scheduling, monitoring, security and software management are represented in a layer of cloud services that operates between the physical hardware and the virtual systems accessed by users.
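One of the core decisions that services layer makes is where to place each requested virtual machine on the physical hardware. As a toy illustration only — real schedulers such as OpenStack’s Nova weigh many more factors (memory, networking, affinity, failure domains), and all names and capacities below are hypothetical — here is a minimal first-fit placement sketch:

```python
# Toy model of VM placement in a cloud services layer.
# First-fit strategy: assign each VM to the first physical host
# with enough free CPU cores. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    total_cores: int
    used_cores: int = 0
    vms: list = field(default_factory=list)

    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

def place_vm(hosts: list, vm_name: str, cores: int):
    """Place a VM on the first host with enough free cores.

    Returns the chosen host's name, or None if no host has capacity
    (a real system would queue the request or reject it).
    """
    for host in hosts:
        if host.free_cores() >= cores:
            host.used_cores += cores
            host.vms.append(vm_name)
            return host.name
    return None

# Two hypothetical 16-core nodes.
hosts = [Host("node01", 16), Host("node02", 16)]
print(place_vm(hosts, "vm-a", 12))  # node01
print(place_vm(hosts, "vm-b", 8))   # node02 (node01 has only 4 cores free)
print(place_vm(hosts, "vm-c", 10))  # None — neither host has 10 cores free
```

Even this tiny model shows why Simmel calls monitoring and provisioning "considerable complexity": the layer must track live resource state across every host for every placement decision.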
Source: Faith Singer-Villalobos, TACC