SCALE 2011 Winner: Supercomputing-as-a-Service using CometCloud
A multi-institutional team consisting of the Center for Autonomic Computing (Rutgers University), IBM T.J. Watson Research Center and the Center for Subsurface Modeling (The University of Texas at Austin) was awarded first place in the IEEE SCALE 2011 Challenge for their demonstration titled “Scalable Ensemble-based Oil-Reservoir Simulations using Blue Gene/P-as-a-Service”. The demonstration provides supercomputing-as-a-service by connecting two IBM Blue Gene/P systems on two different continents to form a large HPC cloud using the CometCloud framework.
Emerging cloud services represent a new paradigm for computing based on an easy-to-use as-a-service abstraction, on-demand access to computing utilities, on-demand scale-up/down/out, and a usage-based payment model where users essentially “rent” virtual resources and pay for what they use. Underlying these cloud services are consolidated and virtualized data centers that provide virtual machine (VM) containers hosting applications from large numbers of distributed users. The cloud paradigm has the potential to significantly impact price/performance behaviors and trade-offs for a wide range of applications and IT services, and as a result, there has been a proliferation of cloud offerings spanning different levels, including infrastructure-as-a-service, platform-as-a-service, software-as-a-service and applications-as-a-service.
However, existing cloud services have been largely ineffective for many HPC applications, which are becoming increasingly important for understanding complex processes in many domains, including aerospace, automobile, entertainment, finance, manufacturing, oil & gas, and pharmaceuticals. Reasons for this include the limited capabilities and power of the typical underlying hardware and its non-homogeneity, the lack of high-speed interconnects to support the data exchanges required by many HPC applications, as well as the physical distance between machines.
While the requirements of this class of HPC applications are well served by high-end supercomputing systems that provide the necessary scales and compute/communication capabilities, these systems require relatively low-level user involvement and expert knowledge, and as a result, only a few “hero” users are able to use these cutting-edge systems effectively. Furthermore, these high-end resources do not typically support elasticity and dynamic scalability. Clearly, HPC applications running on these supercomputing systems could significantly benefit from the cloud abstraction, in particular from the perspectives of ease-of-use, on-demand access, elasticity and dynamic allocation of resources, as well as the integration of multiple high-end systems.
CometCloud: Federated Multi-Clouds On-Demand!
CometCloud (www.cometcloud.org) is an autonomic cloud-computing engine that enables the dynamic and on-demand federation of heterogeneous clouds, the extension of the cloud abstraction to HPC-grids and clusters, and the deployment and execution of applications on dynamically federated multi-clouds (i.e., hybrid infrastructure integrating (public & private) clouds, data-centers and enterprise Grids). A schematic overview of the CometCloud architecture is presented in Figure 1.
CometCloud provides (1) infrastructure services for synthesizing robust and secure virtual clouds through dynamic federation and coordination to enable on-demand scale-up, scale-down and scale-out, (2) programming support for enabling cloud deployments of applications using popular programming models (e.g., MapReduce, Master/Worker) and application workflows, and (3) services for autonomic monitoring and management of infrastructure and applications. CometCloud is currently being used for cloud deployments of science, engineering and business application workflows.
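To make the Master/Worker model concrete, the sketch below shows the general pattern of independent tasks flowing through a shared pool, in the spirit of CometCloud's coordination space. This is a minimal illustrative sketch only: the names (task_pool, master, worker) are hypothetical stand-ins, not the actual CometCloud API, and an in-process queue stands in for the distributed tuple space.

```python
import queue
import threading

# Illustrative sketch: a minimal master/worker pattern over a shared task
# pool, mirroring how tasks flow through CometCloud's coordination space.
# All names here are hypothetical, not the real CometCloud API.

task_pool = queue.Queue()   # stands in for the distributed shared task space
results = queue.Queue()

def master(num_tasks):
    # The master inserts independent tasks into the shared pool.
    for i in range(num_tasks):
        task_pool.put({"task_id": i, "payload": i * i})

def worker():
    # Workers repeatedly pull tasks from the pool and publish results.
    while True:
        task = task_pool.get()
        if task is None:            # sentinel: no more work
            break
        results.put((task["task_id"], task["payload"] + 1))

master(4)
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for _ in threads:
    task_pool.put(None)             # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results.queue))        # [(0, 1), (1, 2), (2, 5), (3, 10)]
```

Because tasks carry no dependencies on each other, workers can join or leave the pool at any time, which is what makes the model a natural fit for elastic, federated resources.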
Scalable Ensemble-based Oil-Reservoir Simulations using Blue Gene/P as-a-Service – Winner of the IEEE International SCALE 2011 Challenge
It is clear that the cloud model can alleviate some of the problems of HPC applications described above. The overarching goal of our IEEE SCALE 2011 demonstration was to illustrate this by showing how a cloud abstraction can be effectively used to provide a simple interface for current HPC resources and support real-world HPC applications. Specifically, we used CometCloud to essentially transform Blue Gene/P supercomputer systems into a federated elastic cloud, supporting dynamic provisioning and efficient utilization while maximizing ease-of-use through an as-a-service abstraction.
The overall configuration of the federated HPC cloud used in the IEEE SCALE 2011 demonstration is illustrated in Figure 2. As the figure shows, CometCloud was responsible for orchestrating the execution of the overall workflow; note that the application components were used as-is, without modification. Deep Cloud, a reservation-based system developed by IBM T.J. Watson Research Center, was responsible for the physical allocation of the resources required to execute these tasks. The Blue Gene agent monitored the number of tasks in the CometCloud task pool and communicated with Deep Cloud to obtain information about the currently available resources. Using this information, the agent requested the appropriate allocation of Blue Gene/P resources and integrated them into the federated multi-cloud; resources that were no longer required were deallocated.
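The core decision the agent makes can be sketched as a simple sizing rule: compare the partitions implied by the pending task count against what is already allocated, then acquire or release the difference. This is an illustrative sketch only; TASKS_PER_PARTITION, the ceiling rule and the function names are assumptions, and the real agent negotiates allocations with Deep Cloud rather than computing them locally.

```python
import math

# Illustrative sketch of the Blue Gene agent's scale-up/scale-down decision,
# driven by the number of pending tasks in the CometCloud task pool.
# TASKS_PER_PARTITION is an assumed per-partition capacity, not a real value.

TASKS_PER_PARTITION = 2

def partitions_needed(pending_tasks, allocated, max_partitions):
    """Return (acquire, release): partitions to request or deallocate."""
    needed = min(math.ceil(pending_tasks / TASKS_PER_PARTITION),
                 max_partitions)
    if needed > allocated:
        return needed - allocated, 0      # scale up
    return 0, allocated - needed          # scale down: free idle partitions

print(partitions_needed(10, 3, 128))  # (2, 0): acquire 2 more partitions
print(partitions_needed(1, 3, 128))   # (0, 2): release 2 idle partitions
```

Capping the request at max_partitions is what triggers scale-out in the demonstration: once a single machine cannot satisfy the demand, the remaining tasks are scheduled onto a second, federated system.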
The demonstration used a real-world ensemble application. Ensemble applications represent a significant class of HPC applications that require effective utilization of high-end Petascale and, eventually, Exascale systems. These applications explore large parameter spaces in order to simulate multi-scale and multiphase models and minimize uncertainty. Running ensemble applications requires a large and dynamic pool of HPC resources and fast interconnects between the processing nodes.
The overall application scenario used in the demonstration is presented in Figure 3. The workflow consisted of multiple stages, each comprising multiple, simultaneously running instances of IPARS (Implicit Parallel Accurate Reservoir Simulator), a black-box, compute-intensive oil-reservoir history-matching application. The results of each stage were filtered through an Ensemble Kalman Filter (EnKF). Each IPARS instance (or ensemble member) required a varying number of processors and fast communication among these processors. Furthermore, the number of stages and the number of ensemble members per stage were dynamic and depended on the specific problem and the desired level of accuracy. CometCloud was responsible for orchestrating the execution of the overall workflow, i.e., running the IPARS instances and integrating their results with the EnKF. Once the set of ensemble members associated with a stage had completed execution, the CometCloud workflow engine ran the EnKF step to process the results produced by these instances and to generate the set of ensemble members for the next stage. The Blue Gene agent then dynamically adjusted resources (scaling up, down or out) to accommodate the new set of ensemble members. The entire process was repeated until the application objective, i.e., the desired level of accuracy, was achieved; all resources were then released and the final results returned to the user.
Figure 3: Application scenario demonstrated at IEEE SCALE 2011
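The stage-by-stage control flow described above can be sketched as a simple loop: run all ensemble members, filter the results through the EnKF, and repeat until the accuracy target is met. In this illustrative sketch, run_ipars and enkf_update are hypothetical stand-ins for the actual IPARS simulations and the EnKF step; the toy "uncertainty" is just the ensemble spread.

```python
# Illustrative sketch of the EnKF workflow loop orchestrated by CometCloud.
# run_ipars and enkf_update are stand-ins, not the real application code.

def run_ipars(member):
    # Stand-in for one IPARS ensemble-member simulation (toy: halve the value).
    return member * 0.5

def enkf_update(results):
    # Stand-in for the EnKF step: build the next ensemble and report a
    # scalar "uncertainty" (here, simply the spread of the results).
    spread = max(results) - min(results)
    return results, spread

def workflow(ensemble, tolerance):
    stage = 0
    while True:
        results = [run_ipars(m) for m in ensemble]    # run stage members
        ensemble, uncertainty = enkf_update(results)  # filter, next ensemble
        stage += 1
        if uncertainty <= tolerance:                  # accuracy reached
            return stage, ensemble

stages, final_ensemble = workflow([1.0, 2.0, 4.0], tolerance=0.5)
print(stages)   # 3
```

In the real workflow, each iteration of this loop is also the point at which the Blue Gene agent re-sizes the resource allocation to match the new set of ensemble members.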
The demonstration at IEEE SCALE 2011 started by running a workflow stage with 10 initial ensemble members, where each ensemble member required between 32 and 128 processors. To run this, 5 partitions (32 nodes each, 640 processors in total) were provisioned on the IBM Blue Gene/P at Yorktown Heights, NY. The user then requested a faster time to completion, which resulted in an increase in the number of provisioned partitions to 10 (32 nodes each, 1,280 processors in total). This phase of the demonstration illustrated the ease of use as well as the dynamic scale-up enabled by CometCloud.
In the next phase of the demonstration, the application increased the desired level of accuracy, which raised the number of ensemble members to 150. Maintaining the desired time to completion required a dynamic scale-up in resources, and the number of partitions that needed to be provisioned exceeded those available on the IBM Blue Gene/P at Yorktown Heights, NY (i.e., 128 partitions of 32 nodes each, for a total of 16,384 processors). This caused CometCloud to scale out, dynamically federating the Blue Gene/P at KAUST in Saudi Arabia and provisioning 22 partitions (64 nodes each, 5,632 processors in total) on that system. The ensemble members were dynamically scheduled on the federated multi-cloud composed of the two geographically distributed HPC systems, an aggregate of 22,016 processors.
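The processor counts quoted above follow directly from the fact that each Blue Gene/P node has four compute cores:

```python
# Blue Gene/P nodes each contain four PowerPC 450 cores, so a partition's
# processor count is partitions * nodes-per-partition * 4.
CORES_PER_NODE = 4

def processors(partitions, nodes_per_partition):
    return partitions * nodes_per_partition * CORES_PER_NODE

watson = processors(128, 32)   # full Watson machine: 16,384 processors
kaust = processors(22, 64)     # KAUST allocation: 5,632 processors
print(watson + kaust)          # 22,016 aggregate processors
```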
The project team consisted of Manish Parashar, Moustafa AbdelBaky, and Hyunjoo Kim (CAC, Rutgers Univ.), Kirk Jordan, Hani Jamjoom, Vipin Sachdeva, Zon-Yin Shae and James Sexton (IBM T.J. Watson Research Center), and Gergina Pencheva, Reza Tavakoli, and Mary F. Wheeler (CSM, UT Austin).
Moustafa AbdelBaky is a Ph.D. Student at Rutgers University. Hyunjoo Kim is a Postdoctoral Associate at Rutgers University. Manish Parashar is a Professor at Rutgers University. Kirk E. Jordan is the Emerging Solutions Executive and Associate Program Director in the Computational Science Center at IBM T.J. Watson Research Center. Hani Jamjoom is a Research Manager at IBM T.J. Watson Research Center. Vipin Sachdeva is a Researcher in the Computation Science Center at IBM T.J. Watson Research Center. Zon-Yin Shae is a Researcher at IBM T.J. Watson Research Center. James Sexton is Program Director in the Computational Science Center at IBM T.J. Watson Research Center. Gergina Pencheva is a Research Associate at the Center for Subsurface Modeling at The University of Texas at Austin. Reza Tavakoli is a Postdoctoral Fellow at the Center for Subsurface Modeling at The University of Texas at Austin. Mary F. Wheeler is Ernest and Virginia Cockrell Chair in Engineering at The University of Texas at Austin.
More information can be found at http://nsfcac.rutgers.edu/icode/scale