Blue Gene Sniffs for Black Gold in the Cloud

By Manish Parashar

June 27, 2011

SCALE 2011 Winner: Supercomputing-as-a-Service Using CometCloud

A multi-institutional team consisting of the Center for Autonomic Computing (Rutgers University), IBM T.J. Watson Research Center and the Center for Subsurface Modeling (The University of Texas at Austin) was awarded first place in the IEEE SCALE 2011 Challenge for its demonstration titled “Scalable Ensemble-based Oil-Reservoir Simulations using Blue Gene/P as-a-Service”. The demonstration provides supercomputing-as-a-service by connecting two IBM Blue Gene/P systems on two different continents to form a large HPC cloud using the CometCloud framework.

Emerging cloud services represent a new paradigm for computing based on an easy-to-use as-a-service abstraction, on-demand access to computing utilities, on-demand scale-up/down/out, and a usage-based payment model in which users essentially “rent” virtual resources and pay for what they use. Underlying these cloud services are consolidated and virtualized data centers that provide virtual machine (VM) containers hosting applications from large numbers of distributed users. The cloud paradigm has the potential to significantly change price/performance behaviors and trade-offs for a wide range of applications and IT services, and as a result cloud offerings have proliferated at multiple levels, including infrastructure-as-a-service, platform-as-a-service, software-as-a-service and applications-as-a-service.

However, existing cloud services have been largely ineffective for many HPC applications, which are becoming increasingly important for understanding complex processes in domains such as aerospace, automotive, entertainment, finance, manufacturing, oil and gas, and pharmaceuticals. Reasons include the limited capability and non-homogeneity of the typical underlying hardware, the lack of the high-speed interconnects needed for the data exchanges many HPC applications require, and the physical distance between machines.

While the requirements of this class of HPC application are well served by high-end supercomputing systems that provide the necessary scale and compute/communication capabilities, these systems require relatively low-level user involvement and expert knowledge, and as a result only a few “hero” users are able to use these cutting-edge systems effectively. Furthermore, these high-end resources do not typically support elasticity and dynamic scalability. Clearly, HPC applications running on these supercomputing systems could benefit significantly from the cloud abstraction, in particular from the perspectives of ease of use, on-demand access, elasticity and dynamic allocation of resources, as well as the integration of multiple high-end systems.

CometCloud: Federated Multi-Clouds On-Demand!

CometCloud (www.cometcloud.org) is an autonomic cloud-computing engine that enables the dynamic and on-demand federation of heterogeneous clouds, the extension of the cloud abstraction to HPC grids and clusters, and the deployment and execution of applications on dynamically federated multi-clouds, i.e., hybrid infrastructure integrating public and private clouds, data centers and enterprise grids. A schematic overview of the CometCloud architecture is presented in Figure 1.

CometCloud provides (1) infrastructure services for synthesizing robust and secure virtual clouds through dynamic federation and coordination to enable on-demand scale-up, scale-down and scale-out, (2) programming support for enabling cloud deployments of applications using popular programming models (e.g., MapReduce, Master/Worker) and application workflows, and (3) services for autonomic monitoring and management of infrastructure and applications. CometCloud is currently being used for cloud deployments of science, engineering and business application workflows.
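
To make the Master/Worker model concrete, the following minimal Python sketch illustrates the pattern CometCloud supports: a master inserts independent tasks into a shared pool and workers pull and execute them. This is only an illustrative sketch, not CometCloud's actual API; the in-process queue here merely stands in for CometCloud's shared coordination space.

# Minimal master/worker sketch over a shared task pool.
# Illustrative only: the queue stands in for CometCloud's coordination
# space, and run_simulation is a placeholder for a real HPC job launch.
import queue
import threading

task_pool = queue.Queue()          # stands in for the shared task pool
results = []
results_lock = threading.Lock()

def master(num_tasks):
    # Insert independent tasks (e.g., ensemble members) into the pool.
    for task_id in range(num_tasks):
        task_pool.put({"id": task_id, "params": {"case": task_id}})

def run_simulation(task):
    # In the real system this would submit a parallel job (e.g., an IPARS run);
    # here we just echo the task to keep the sketch self-contained.
    return {"id": task["id"], "status": "done"}

def worker():
    # Pull tasks until the pool is drained and record their results.
    while True:
        try:
            task = task_pool.get_nowait()
        except queue.Empty:
            return
        outcome = run_simulation(task)
        with results_lock:
            results.append(outcome)
        task_pool.task_done()

master(num_tasks=10)
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(f"completed {len(results)} tasks")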

Scalable Ensemble-based Oil-Reservoir Simulations using Blue Gene/P as-a-Service – Winner of the IEEE International SCALE 2011 Challenge

It is clear that the cloud model can alleviate some of the problems of HPC applications described above. The overarching goal of our IEEE SCALE 2011 demonstration was to illustrate this by showing how a cloud abstraction can be effectively used to provide a simple interface for current HPC resources and support real-world HPC applications. Specifically, we used CometCloud to essentially transform Blue Gene/P supercomputer systems into a federated elastic cloud, supporting dynamic provisioning and efficient utilization while maximizing ease-of-use through an as-a-service abstraction.

The overall configuration of the federated HPC cloud used in the IEEE SCALE 2011 demonstration is illustrated in Figure 2. In this configuration, CometCloud was responsible for orchestrating the execution of the overall workflow. Note that the application components were used as-is, without modification. Deep Cloud, a reservation-based system developed by IBM T.J. Watson Research Center, was responsible for the physical allocation of the resources required to execute these tasks. The Blue Gene agent monitored the size of the tasks in the CometCloud task pool and communicated with Deep Cloud to obtain information about the currently available resources. Using this information, the agent requested the appropriate allocation of Blue Gene/P resources and integrated them into the federated multi-cloud. Resources that were no longer required were deallocated.
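
The interplay between the agent, the task pool and Deep Cloud amounts to a simple control loop: watch the backlog, ask the reservation system what is free, request partitions when demand grows, and release them when it shrinks. The Python sketch below illustrates that loop under stated assumptions; StubTaskPool, StubDeepCloud and every method on them are invented stand-ins for this illustration, not the real CometCloud or Deep Cloud interfaces.

# Illustrative Blue Gene agent control loop; all interfaces are hypothetical stubs.
import math

class StubTaskPool:
    # Stands in for the CometCloud task pool; the backlog shrinks as work completes.
    def __init__(self, tasks): self.tasks = tasks
    def pending_task_count(self): return self.tasks
    def consume(self, n): self.tasks = max(0, self.tasks - n)

class StubDeepCloud:
    # Stands in for the Deep Cloud reservation system (hypothetical interface).
    def __init__(self, total_partitions): self.free = total_partitions
    def available_partitions(self): return self.free
    def allocate_partition(self, nodes=32):
        self.free -= 1
        return {"nodes": nodes}
    def deallocate_partition(self, handle): self.free += 1

def provisioning_step(pool, cloud, allocated, tasks_per_partition=2):
    # One pass of the agent: grow or shrink the allocation to match the backlog.
    needed = math.ceil(pool.pending_task_count() / tasks_per_partition)
    while needed > len(allocated) and cloud.available_partitions() > 0:
        allocated.append(cloud.allocate_partition(nodes=32))    # scale up
    while needed < len(allocated):
        cloud.deallocate_partition(allocated.pop())             # release idle partitions

pool, cloud, allocated = StubTaskPool(tasks=10), StubDeepCloud(total_partitions=128), []
while pool.pending_task_count() > 0:
    provisioning_step(pool, cloud, allocated)
    pool.consume(len(allocated) * 2)        # pretend each partition finishes two tasks
provisioning_step(pool, cloud, allocated)   # final pass releases everything
print("all partitions released:", len(allocated) == 0)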

The demonstration used a real-world ensemble application. Ensemble applications represent a significant class of HPC applications that require effective utilization of high-end petascale and, eventually, exascale systems. These applications explore large parameter spaces in order to simulate multi-scale and multiphase models and minimize uncertainty. Running ensemble applications requires a large and dynamic pool of HPC resources and fast interconnects between the processing nodes.

The overall application scenario used in the demonstration is presented in Figure 3. The workflow consisted of multiple stages, each comprising multiple, simultaneously running instances of IPARS (Implicit Parallel Accurate Reservoir Simulator), a black-box, compute-intensive oil-reservoir history-matching application. The results of each stage were filtered through an Ensemble Kalman Filter (EnKF). Each IPARS instance (or ensemble member) required a varying number of processors and fast communication among those processors. Furthermore, the number of stages and the number of ensemble members per stage were dynamic and depended on the specific problem and the desired level of accuracy. CometCloud was responsible for orchestrating the execution of the overall workflow, i.e., running the IPARS instances and integrating their results with the EnKF. Once the set of ensemble members associated with a stage had completed execution, the CometCloud workflow engine ran the EnKF step to process the results produced by those instances and generate the set of ensemble members for the next stage. The Blue Gene agent then dynamically adjusted resources (scaling up, down or out) to accommodate the new set of ensemble members. The entire process was repeated until the application objectives, i.e., the desired level of accuracy, were achieved; all resources were then released and the final results returned to the user.


 
Figure 3: Application scenario demonstrated at IEEE SCALE 2011
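
The stage loop described above can be summarized as a short control-flow sketch: run a stage of ensemble members, assimilate the results with the EnKF, adjust resources, and repeat until the accuracy target is met. In the hedged Python sketch below, run_ipars_member, ensemble_kalman_filter and adjust_resources are placeholders standing in for the actual IPARS runs, EnKF step and Blue Gene agent calls; only the control flow reflects the workflow described in the article.

# Control-flow sketch of the ensemble workflow; every function below is a
# placeholder for the real IPARS, EnKF and resource-management components.
import random

def run_ipars_member(member):
    # Placeholder for one IPARS reservoir simulation (a parallel HPC job).
    return {"member": member, "mismatch": random.uniform(0.0, 1.0)}

def ensemble_kalman_filter(stage_results, target_size):
    # Placeholder EnKF step: assimilate results and produce the next ensemble.
    error = sum(r["mismatch"] for r in stage_results) / len(stage_results)
    next_ensemble = list(range(target_size))
    return error, next_ensemble

def adjust_resources(num_members):
    # Placeholder for the Blue Gene agent scaling partitions up, down or out.
    print(f"provisioning resources for {num_members} ensemble members")

def run_workflow(initial_ensemble, tolerance=0.3, max_stages=10):
    ensemble = list(initial_ensemble)
    for stage in range(max_stages):
        adjust_resources(len(ensemble))
        results = [run_ipars_member(m) for m in ensemble]        # run stage members
        error, ensemble = ensemble_kalman_filter(results, len(ensemble))
        print(f"stage {stage}: estimated error {error:.3f}")
        if error < tolerance:                                    # accuracy target reached
            break
    adjust_resources(0)                                          # release all resources
    return error

run_workflow(initial_ensemble=range(10))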

The demonstration at IEEE SCALE 2011 started by running a workflow stage with 10 initial ensemble members, where each ensemble member required between 32 and 128 processors. To run this, five partitions (32 nodes each, 640 processors in total) were provisioned on the IBM Blue Gene/P at Yorktown Heights, NY. The user then requested a faster time to completion, which resulted in an increase in the number of provisioned partitions to 10 (32 nodes each, 1,280 processors in total). This phase of the demonstration illustrated the ease of use as well as the dynamic scale-up enabled by CometCloud.

In the next phase of the demonstration, the application increased the desired level of accuracy, which raised the number of ensemble members to 150. Maintaining the desired time to completion required a dynamic scale-up in resources, and the number of partitions that needed to be provisioned exceeded those available on the IBM Blue Gene/P at Yorktown Heights, NY (i.e., 128 partitions of 32 nodes each, for a total of 16,384 processors). This caused CometCloud to scale out, dynamically federating the Blue Gene/P at KAUST in Saudi Arabia and provisioning 22 partitions (64 nodes each, 5,632 processors in total) on that system. The ensemble members were dynamically scheduled on the federated multi-cloud composed of the two geographically distributed HPC systems, an aggregate of 22,016 processors.
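
Since a Blue Gene/P compute node has four cores, the processor counts quoted above follow directly from the partition sizes. The short Python calculation below simply reproduces that arithmetic using only the figures reported in the demonstration, so the scale-out decision (demand exceeding Watson's 16,384 processors, met by federating KAUST) can be checked at a glance.

# Reproduce the processor counts quoted above; a Blue Gene/P node has 4 cores.
CORES_PER_NODE = 4

def processors(partitions, nodes_per_partition):
    return partitions * nodes_per_partition * CORES_PER_NODE

initial = processors(5, 32)        # first phase at Watson:         640
scaled_up = processors(10, 32)     # after the scale-up request:  1,280
watson_max = processors(128, 32)   # all of Watson's partitions: 16,384
kaust = processors(22, 64)         # partitions added at KAUST:   5,632

print(initial, scaled_up, watson_max, kaust, watson_max + kaust)
# -> 640 1280 16384 5632 22016 (the aggregate reported in the demonstration)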

The project team consisted of Manish Parashar, Moustafa AbdelBaky, and Hyunjoo Kim (CAC, Rutgers Univ.), Kirk Jordan, Hani Jamjoom, Vipin Sachdeva, Zon-Yin Shae and James Sexton (IBM T.J. Watson Research Center), and Gergina Pencheva, Reza Tavakoli, and Mary F. Wheeler (CSM, UT Austin).

Team

Moustafa AbdelBaky is a Ph.D. student at Rutgers University. Hyunjoo Kim is a Postdoctoral Associate at Rutgers University. Manish Parashar is a Professor at Rutgers University. Kirk E. Jordan is the Emerging Solutions Executive and Associate Program Director in the Computational Science Center at IBM T.J. Watson Research Center. Hani Jamjoom is a Research Manager at IBM T.J. Watson Research Center. Vipin Sachdeva is a Researcher in the Computational Science Center at IBM T.J. Watson Research Center. Zon-Yin Shae is a Researcher at IBM T.J. Watson Research Center. James Sexton is Program Director in the Computational Science Center at IBM T.J. Watson Research Center. Gergina Pencheva is a Research Associate at the Center for Subsurface Modeling at The University of Texas at Austin. Reza Tavakoli is a Postdoctoral Fellow at the Center for Subsurface Modeling at The University of Texas at Austin. Mary F. Wheeler is the Ernest and Virginia Cockrell Chair in Engineering at The University of Texas at Austin.

More information can be found at http://nsfcac.rutgers.edu/icode/scale
