The Science Cloud Cometh

By Robert Jenkins

May 28, 2013

Mankind is currently engaged in some of the most important scientific research of our age: the search for the elusive Higgs particle to validate our modern understanding of physics; genomic sequencing to enrich our understanding of life on Earth and to fight diseases such as cancer; and global monitoring of the Earth from space to analyze, and one day predict, everything from earthquakes and volcanic eruptions to climate change and next year’s crop yields.

These monumental scientific undertakings have very different goals, but one important feature in common: huge amounts of data that must be processed efficiently in order to yield accurate results. Unfortunately, the advanced computing infrastructure required to handle these big data needs is also exploding in size, leading international scientific institutions such as CERN, the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA) to look for additional sources of capacity to complement their existing in-house deployments. Without access to the right resources, researchers within these organizations can find the delivery and analysis of results limited by available computing capacity.

The answer to this dilemma may lie in one of today’s most innovative computing delivery technologies: cloud computing. By taking advantage of powerful cloud computing platforms, these international scientific institutions can continue to add scale to their compute environments in a competitive and convenient way. With this dynamic in mind, a consortium of European cloud computing companies and international scientific institutions recently launched Helix Nebula, the ‘Science Cloud,’ with the dual purpose of fostering a healthier economic climate for the cloud, while giving the scientific sector access to innovative technology to promote research and scientific progress.

The key aim is to provide a multi-cloud solution that allows scientific institutions to deploy workloads seamlessly across different providers and locations. This involves harmonizing provisioning, networking, software environments and more. In this way, such a cloud environment can offer a fully scalable and customizable infrastructure that supports the varied computing requirements of scientific research and its high data volumes. To put things into perspective, at CERN alone, 25 petabytes of new data are stored per year and 250,000 CPUs are in use around the world to process LHC data. The rate at which biomedical labs can sequence DNA has significantly outstripped Moore’s Law in recent years, creating a bottleneck in the downstream bioinformatics pipelines that rely on high-performance computing infrastructure. These requirements are increasing rapidly over time.
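To make the multi-cloud idea more concrete, here is a minimal sketch of fanning one provider-neutral workload description out to several clouds through a thin common interface. It is a hypothetical illustration only: the WorkloadSpec fields, the CloudProvider interface and the provider labels are invented for this example and are not the Helix Nebula consortium’s actual design.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class WorkloadSpec:
    """Hypothetical provider-neutral description of a batch workload."""
    name: str
    vcpus: int
    ram_gb: int
    image: str           # a portable VM image identifier
    instance_count: int


class CloudProvider:
    """Minimal common interface each participating cloud would implement."""
    def __init__(self, label: str):
        self.label = label

    def provision(self, spec: WorkloadSpec) -> List[str]:
        # A real adapter would call the provider's own API here;
        # this stub just reports the nodes it would create.
        return [f"{self.label}:{spec.name}-{i}" for i in range(spec.instance_count)]


def deploy_everywhere(spec: WorkloadSpec, providers: List[CloudProvider]) -> List[str]:
    """Fan the same workload spec out across several clouds."""
    nodes: List[str] = []
    for provider in providers:
        nodes.extend(provider.provision(spec))
    return nodes


if __name__ == "__main__":
    spec = WorkloadSpec(name="lhc-reco", vcpus=8, ram_gb=32,
                        image="portable-vm-image-01", instance_count=4)
    clouds = [CloudProvider("provider-a"), CloudProvider("provider-b")]
    for node in deploy_everywhere(spec, clouds):
        print(node)
```

The point of the abstraction is that the workload description stays the same while each provider adapter hides the differences in provisioning APIs, networking and image handling.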

To satisfy these high-performance computing (HPC) environments, there are several factors that need to come together to create a successful solution:

Appropriate Infrastructure

Many clouds have adopted traditional web-hosting methodologies that rely on low customer utilization and over-provisioning. Large customers – like those participating in the Helix Nebula initiative – with heavy, data-intensive workloads and HPC needs break that model, so infrastructure that is genuinely fit for purpose is required. High-speed networking, both between end-user sites and clouds and from cloud to cloud, is essential. Advanced storage strategies and intelligent multi-cloud procurement and provisioning are needed to provide expanded scalability. These are just a few of the key areas of work within the Helix Nebula consortium.

Open Software and Networking Layers

Having a flexible software layer that can easily run existing systems is a crucial component. With an open software layer, HPC users can port their data and applications to the cloud with little modification – for example, CERN used the CERN VM image for the workloads conducted thus far within Helix Nebula, something that would not work natively in more restrictive cloud deployments. HPC users have very specific use cases and large existing installed bases, so they need the cloud to work with, not against, their existing applications and knowledge.

Customization

Being able to tune cloud infrastructure to fit each use case closely is critical. HPC users care primarily about price/performance, which is delivered through a combination of efficient resource purchasing and good performance levels. The ability to match the application layer, through the virtualization layer, down to the actual hardware can be very important in achieving both. In big data, for example, many applications require a great deal of RAM relative to CPU. The fixed server model of many dominant public cloud providers can force significant over-provisioning of resources and destroy the economics of using such providers. Part of the Helix Nebula consortium’s effort is therefore to ensure that participating suppliers of cloud resources can reflect the requirements of the scientific institutions.
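As a rough illustration of the over-provisioning problem, the short calculation below compares a memory-heavy job on fixed-ratio instances with the same job on freely sized resources. Every figure – the instance shape, the prices and the job size – is hypothetical and chosen only to show the shape of the effect, not to describe any particular provider.

```python
import math

# Hypothetical memory-heavy job: modest CPU, lots of RAM.
need_vcpus, need_ram_gb = 16, 512

# Invented fixed-ratio instance type: 8 vCPUs and 64 GB RAM each.
inst_vcpus, inst_ram_gb, inst_price_hr = 8, 64, 0.50

# On a fixed-ratio cloud you must buy enough instances to cover the RAM,
# which drags unused vCPUs (and their cost) along with it.
instances = max(math.ceil(need_ram_gb / inst_ram_gb),
                math.ceil(need_vcpus / inst_vcpus))
fixed_vcpus = instances * inst_vcpus
fixed_cost_hr = instances * inst_price_hr

# With freely sized resources priced per unit (also invented numbers),
# you pay only for what the job actually needs.
vcpu_price_hr, ram_gb_price_hr = 0.02, 0.005
custom_cost_hr = need_vcpus * vcpu_price_hr + need_ram_gb * ram_gb_price_hr

print(f"fixed-ratio : {instances} instances, {fixed_vcpus} vCPUs "
      f"for a {need_vcpus}-vCPU job, ${fixed_cost_hr:.2f}/hr")
print(f"custom-sized: exactly {need_vcpus} vCPUs / {need_ram_gb} GB RAM, "
      f"${custom_cost_hr:.2f}/hr")
```

With these made-up numbers the fixed-ratio purchase buys four times the CPU the job needs and costs well over the custom-sized equivalent, which is the economic gap the article describes.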

True Scalability

HPC needs are often time-varying – at least at a project level. For instance, CERN runs its accelerator chain in long campaigns followed by maintenance windows, which changes its compute consumption over time. Each individual DNA sequencing and assembly run lasts for a set period. A purchasing model that matches these usage profiles as closely as possible can improve utilization, and therefore cost effectiveness, for research institutions. A seamless model that accommodates purchasing capacity in a reserved fashion while also absorbing on-demand needs is very important for HPC users. Delivering this behavior across multiple cloud providers offers a greater degree of scalability and is a key aim of the Helix Nebula consortium.
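The sketch below, again with purely illustrative prices and a made-up monthly usage profile, shows why blending reserved capacity with on-demand capacity can beat either extreme when demand varies across campaigns and maintenance windows. The demand figures and price points are assumptions for this example only.

```python
# Made-up monthly demand profile (thousands of core-hours) across a campaign:
# heavy running periods followed by quieter maintenance windows.
demand = [90, 95, 100, 100, 30, 20, 85, 95, 100, 100, 25, 20]

# Illustrative prices per thousand core-hours.
on_demand_price = 10.0   # pay only for what is used
reserved_price = 6.0     # cheaper, but paid every month whether used or not


def blended_cost(reserved_level: int) -> float:
    """Total cost when `reserved_level` is committed up front and
    anything above it is bought on demand."""
    total = 0.0
    for month in demand:
        total += reserved_level * reserved_price
        total += max(0, month - reserved_level) * on_demand_price
    return total


all_on_demand = blended_cost(0)
all_reserved = blended_cost(max(demand))
# Brute-force the cheapest reservation level for this particular profile.
best_level = min(range(max(demand) + 1), key=blended_cost)

print(f"all on-demand          : {all_on_demand:,.0f}")
print(f"all reserved           : {all_reserved:,.0f}")
print(f"blended (reserve {best_level:>3}) : {blended_cost(best_level):,.0f}")
```

For this profile the cheapest option reserves roughly the baseline level of demand and buys the campaign peaks on demand, which is the kind of purchasing flexibility the consortium is aiming to make available across multiple providers.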

There is much discussion of the benefits of public versus private cloud environments for business and consumer services, but for big data and HPC needs in the scientific research sector, what matters is a flexible cloud infrastructure without deployment restrictions. Such flexible cloud platforms can carry the weight of projects like those of the Helix Nebula members because their approach to cloud computing emphasizes performance and flexibility, without overburdening infrastructure or over-provisioning resources, and combines that with a multi-supplier deployment model. By tapping into cutting-edge developments from the leading cloud infrastructure providers, organizations like CERN, ESA and EMBL can continue to better the world through research, without the potential future roadblock of limited computing infrastructure resources.

About the Author

Robert Jenkins is the co-founder and CEO of CloudSigma and is responsible for leading the technological innovation of the company’s pure-cloud IaaS offering. Under Robert’s direction, CloudSigma has established an open, customer-centric approach to the public cloud. 
