Cloud Computing Will Usher in a New Era of Science Discovery

By Gilad Shainer, Brian Sparks, Scot Schultz, Eric Lantz, William Liu, Tong Liu, and Goldi Misra

January 26, 2010

Computational science is the field of study concerned with constructing mathematical models and numerical techniques that represent scientific, social scientific or engineering problems, and with employing those models on computers or clusters of computers to analyze, explore or solve the problems they represent. Numerical simulation enables the study of complex phenomena that would be too expensive or dangerous to study by direct experimentation. The quest for ever-higher levels of detail and realism in such simulations requires enormous computational capacity, and has provided the impetus for breakthroughs in computer algorithms and architectures.

Thanks to these advances, computational scientists and engineers can now solve large-scale problems that were once thought intractable by building the relevant models and simulating them on high performance compute clusters or supercomputers. Simulation is used as an integral part of manufacturing, design and decision-making processes, and as a fundamental tool for scientific research. Problems in which high performance simulation plays a pivotal role include, for example, weather and climate prediction, nuclear and energy research, simulation and design of vehicles and aircraft, electronic design automation, astrophysics, quantum mechanics, biology, computational chemistry and more.

Computation is commonly considered the third mode of science, the previous modes or paradigms being experimentation/observation and theory. In the past, science was performed by observing evidence of natural or social phenomena, recording measurable data related to the observations, and analyzing this information to construct theoretical explanations of how things work. With the introduction of high performance supercomputers, the methods of scientific research grew to include mathematical modeling and simulation of phenomena that are too expensive to reproduce or simply beyond the reach of experiment. With the advent of cloud computing, a fourth mode of science is on the horizon.

Computing "in a cloud" typically refers to a hosted computational environment, local or remote, that can provide elastic compute and storage services to users on demand. The current usage model of cloud environments is therefore aimed at computational science. But future clouds can serve as environments for distributed science, allowing researchers and engineers to share their data with peers around the globe and enabling expensively obtained results to be reused in further research projects and scientific discoveries.

To enable the shift to the fourth mode of "science discovery," cloud environments will need not only the capability to share the data created by computational science and the results of the various observations, but also cost-effective high performance computing capabilities, similar to those of today's leading supercomputers, in order to rapidly and effectively analyze the data flood. Moreover, an important criterion for clouds is fast provisioning of resources, both compute and storage, so that they can serve many users and many different analyses, and can suspend tasks and bring them back to life quickly. Reliability is another concern: clouds need to be "self-healing," with failing components replaced by spares or on-demand resources to guarantee constant access and resource availability.
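To make the "self-healing" idea concrete, here is a minimal sketch of a monitoring loop that swaps failed nodes for spares so that usable capacity stays constant. All names in it (Node, spares, is_healthy) are hypothetical placeholders, not the API of any particular cloud stack; a real system would drive this from heartbeats, IPMI data or the job scheduler.

```python
# Minimal self-healing sketch: failed nodes are replaced from a spare pool so
# the cloud keeps a constant amount of usable capacity. All names here are
# illustrative placeholders, not a real cloud management API.
class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def is_healthy(node):
    # Placeholder health check; a real system would probe the node
    # (heartbeat, IPMI sensors, scheduler status, etc.).
    return node.healthy

def heal(active, spares):
    """Replace every failed node in 'active' with a node taken from 'spares'."""
    for i, node in enumerate(active):
        if not is_healthy(node) and spares:
            replacement = spares.pop()
            print(f"replacing failed {node.name} with spare {replacement.name}")
            active[i] = replacement
    return active

if __name__ == "__main__":
    active = [Node(f"compute{i:03d}") for i in range(4)]
    spares = [Node(f"spare{i:03d}") for i in range(2)]
    active[1].healthy = False          # simulate a hardware failure
    heal(active, spares)
    print("active pool:", [n.name for n in active])
```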

The use of grids for scientific computing has proven successful in the past years, and many international projects have led to the establishment of worldwide infrastructures available for computational science. The Open Science Grid provides support for data-intensive research in disciplines such as biology, chemistry, particle physics, and geographic information systems. Enabling Grids for E-sciencE (EGEE) is an initiative funded by the European Commission that connects more than 91 institutions in Europe, Asia, and the United States to construct the largest multi-science computing grid infrastructure in the world. TeraGrid is an NSF-funded project that provides scientists with a large computing infrastructure built on top of resources at nine resource provider partner sites. It is used by 4,000 users at over 200 universities to advance research in molecular bioscience, ocean science, earth science, mathematics, neuroscience, design and manufacturing, and other disciplines. While grids can provide a good infrastructure for shared science and data analysis, several issues keep them from leading the fourth mode of science: limited software flexibility, the need to pre-package applications, lack of elasticity, and lack of virtualization. Those missing pieces can be delivered through cloud computing.

Cloud computing addresses many of the aforementioned problems by means of virtualization technologies, which provide the ability to scale the computing infrastructure up and down according to the given requirements. By using cloud-based technologies, scientists can gain easy access to large distributed infrastructures and completely customize their execution environment. Furthermore, effective provisioning can support many more activities, suspending or reviving them in an instant. This makes the spectrum of options available to scientists wide enough to cover virtually any specific need of their research.
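As a rough illustration of such elasticity, the sketch below sizes a virtual cluster from the depth of the pending-job queue. The thresholds, the jobs-per-node ratio and the provision/release steps are invented for this example and stand in for whatever policy and provider API a real deployment would use.

```python
# Toy elasticity policy: size the virtual cluster from the pending-job queue.
# The numbers and the provision/release actions are illustrative placeholders,
# not the interface of any specific cloud provider.
def target_nodes(pending_jobs, jobs_per_node=4, min_nodes=1, max_nodes=128):
    """Return how many virtual machines the current workload justifies."""
    needed = -(-pending_jobs // jobs_per_node)   # ceiling division
    return max(min_nodes, min(max_nodes, needed))

def rescale(current_nodes, pending_jobs):
    target = target_nodes(pending_jobs)
    if target > current_nodes:
        print(f"provisioning {target - current_nodes} additional VMs")
    elif target < current_nodes:
        print(f"releasing {current_nodes - target} idle VMs")
    return target

if __name__ == "__main__":
    nodes = 8
    for queue_depth in (60, 20, 0):              # simulated workload over time
        nodes = rescale(nodes, queue_depth)
```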

In many scientific fields the instruments are extremely expensive, and as such the data must be shared. With this data explosion, and as high performance systems become a commodity infrastructure, the pressure to share scientific data is increasing. That resonates well with the emerging cloud computing trend. While for the moment cloud computing appears to be mainly a cost-effective alternative for IT spending, or a way to shift enterprise IT centers from capital expense to operational expense, research institutes have started exploring how cloud computing can create the desired compute centralization and an environment for researchers to share and crunch the flood of data. One example is the new system at the National Energy Research Scientific Computing Center (US), named "Magellan." While Magellan's initial target is to provide a tool for computational science in a cloud environment, it can easily be modified to become a center for data processing accessed by many researchers and scientists.

Until recently, high performance computing has not been a good candidate for cloud computing due to its requirement for tight integration between server nodes via low-latency interconnects. The performance overhead associated with host virtualization, a prerequisite technology for migrating local applications to the cloud, quickly erodes application scalability and efficiency in an HPC context. Newer virtualization solutions such as KVM and Xen aim to solve this performance issue by reducing the virtualization management overhead and by allowing virtual machines direct access to the network, enabling near-native performance from within the virtual machines.
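One common way to give a KVM guest that kind of direct network access is PCI device passthrough, where a physical adapter (for example an InfiniBand HCA) is handed to the virtual machine so traffic bypasses the hypervisor's virtual networking stack. The sketch below assembles an illustrative QEMU/KVM command line for this; the PCI address and disk image path are placeholders, and the host would need to be prepared for device passthrough before anything like this could actually run.

```python
# Illustrative only: passing a physical network adapter through to a KVM guest
# so the VM talks to the fabric directly. The PCI address and disk image path
# are placeholders invented for this sketch.
GUEST_IMAGE = "/var/lib/images/hpc-node.qcow2"   # hypothetical guest disk
HCA_PCI_ADDR = "0000:03:00.0"                    # hypothetical adapter address

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                               # hardware-assisted virtualization
    "-m", "8192",                                # guest memory in MB
    "-smp", "8",                                 # virtual CPUs
    "-drive", f"file={GUEST_IMAGE},if=virtio",
    "-device", f"vfio-pci,host={HCA_PCI_ADDR}",  # hand the adapter to the guest
]

if __name__ == "__main__":
    # On a host prepared for passthrough this list could be given to
    # subprocess.run(); here we only print the command line.
    print(" ".join(qemu_cmd))
```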

High-speed networking is a critical requirement for affordable high performance computing, as clustered servers and storage need to communicate with each other as fast as possible. For this reason the vast majority of the world's top 100 supercomputers use high-speed InfiniBand networking, and the interconnect allows those systems to reach more than 90 percent efficiency, a critical element for effective high performance computing in any infrastructure, including clouds. The "Magellan" system at the National Energy Research Scientific Computing Center (NERSC, US) uses InfiniBand as its interconnect to provide the fastest connection between servers and storage, in order to extract the maximum gain from the system, achieve the highest efficiency, and build an infrastructure able to analyze data in real time.
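For readers unfamiliar with the efficiency figure, it is simply the ratio of sustained to theoretical peak performance (Rmax over Rpeak in Top500 terms). The snippet below works through the arithmetic with made-up numbers purely to illustrate the calculation.

```python
# Efficiency as used above: sustained performance (Rmax) divided by theoretical
# peak (Rpeak). The figures below are invented only to show the arithmetic.
def efficiency(rmax_tflops, rpeak_tflops):
    return rmax_tflops / rpeak_tflops

if __name__ == "__main__":
    print(f"{efficiency(92.0, 100.0):.0%}")   # 92%: the range a well-balanced,
                                              # InfiniBand-connected cluster can reach
    print(f"{efficiency(55.0, 100.0):.0%}")   # 55%: what a poorly scaling
                                              # interconnect can look like
```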

Power consumption is another important issue for high performance clouds. As HPC clouds grow bigger, the affordability of science discovery will be determined by the ability to save on power and cooling costs. Power management, implemented within the CPUs, the interconnect, and the system management and scheduling layers, will need to be integrated into a comprehensive solution. Unutilized sections of the cloud need to be powered off or moved into power-saving states, and the scheduling mechanism will need to incorporate topology awareness.
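A minimal sketch of that power-saving policy follows: nodes with no scheduled work are marked for a low-power state and brought back only when jobs need them. The node names and the "suspend"/"on" states are placeholders, not the interface of any real scheduler or power manager.

```python
# Sketch of the power-saving policy described above. Node names and power
# states are placeholders, not a real scheduler or BMC interface.
def plan_power_states(nodes, busy_nodes):
    """Return the desired power state for each node given the busy set."""
    return {node: ("on" if node in busy_nodes else "suspend") for node in nodes}

if __name__ == "__main__":
    nodes = [f"node{i:02d}" for i in range(8)]
    busy = {"node00", "node01", "node02"}        # hypothetical running jobs
    for node, state in sorted(plan_power_states(nodes, busy).items()):
        print(node, "->", state)
```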

The HPC Advisory Council HPC|Cloud group is working to investigate the creation and usage models of clouds in HPC. Past activities on smart scheduling mechanisms have been published on the council's Web site, and future results, covering the usage of KVM and Xen, manycore CPUs (such as AMD's Magny-Cours, which includes 12 cores in a single CPU) and cloud management software (such as Platform ISF), will be published throughout 2010. The HPC Advisory Council will continue to investigate the emerging technologies and aspects that will lead us into the fourth mode of science.

Acknowledgments

The authors would like to thank Cydney Stevens for her vision and guidance.
