One Small Step for Man, One Giant Leap for HPC and Cloud

By Nicole Hemsoth

May 5, 2010

It’s census time and the galaxy is not exempt.

If you’ve been following any news at all in astronomy, you’re likely already aware of Gaia, the massive project undertaken by the European Space Agency (ESA) to create a galactic map that shows far more than the present composition of our galaxy. According to ESA, the project “is an ambitious mission to chart a three-dimensional map of our galaxy, the Milky Way, and in the process, reveal the composition, formation and evolution of the galaxy. Gaia will provide unprecedented positional and radial velocity measurements with the accuracies needed to produce a stereoscopic and kinematic census of about one billion stars in our galaxy and throughout the local group. This amounts to about 1 percent of the galactic stellar population.”

If your mind is not sufficiently blown by the very concept of Gaia’s aims (let alone its current state of progress, which you can read about in detail), consider this: Gaia’s infrequent but immense demands for mission-critical data processing created a prime opportunity for one of the most convincing proofs of concept yet for HPC in the cloud. The test of whether the cloud could effectively handle such demands was taken on by cloud infrastructure and development firms The Server Labs and RightScale.

The Server Labs led a recent feasibility study to test the limits of Amazon’s EC2 and S3 for running data-intensive scientific applications in the cloud. The project executed the distributed astrometric processing developed for the Gaia mission to show how cloud computing could prove a cost-effective solution for HPC applications.

The feasibility study set out to demonstrate the possibilities of running complex scientific applications in the cloud. Because the project’s demands were not constant and its massive volumes of data only needed processing periodically, the cloud proved the most appealing host. That allowed The Server Labs, along with RightScale, to demonstrate how the cloud could be deployed as a cost- and resource-saving measure, sparing the European Space Agency from having to build its own specialized center to handle the occasional heavy-duty processing demands.

To get at the heart of the challenges, surprises, benefits, and trouble spots in this study of HPC applications in the cloud, it was necessary to go directly to the source and ask the key players how well it demonstrated that HPC can operate in the cloud while retaining the cloud’s signature benefits, most notably cost savings and efficiency. Along the way, it became clear that this is some of the most promising news to come along on the HPC-and-cloud front in some time. The sheer scale of the data processing, the scale of the resource savings, monetary and otherwise, and the ease of migration are all signs that HPC in the cloud has a chance to catch on in the mainstream, and perhaps soon.

According to Paul Parsons, CTO and Chief Architect at The Server Labs, and Alfonso Olias, Senior Consultant with The Server Labs, the challenges inherent in ESA’s Gaia project presented the perfect opportunity to test the viability of the cloud in an HPC context.

“After the launch of the Gaia satellite, the project required some complex astrometric data processing to be executed every six months. This type of non-constant processing lends itself to the cloud. The study intended to prove that the processing can be run in Amazon EC2 at a much lower cost, which would enable the European Space Agency to delay or avoid the purchase of in-house hardware to do the job. We are currently undertaking a second feasibility study to compare the performance of Oracle with Amazon S3 for read-only data storage and to evaluate whether the system can scale out to 1,000 high-CPU EC2 nodes, each of which has 8 cores. The European Space Agency will be using the cloud to do some pre-launch testing.”
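
The on-demand provisioning pattern Parsons describes is easy to sketch in code. The example below is purely illustrative: it uses the modern boto3 SDK for Python rather than the RightScale tooling used in the study, and the AMI, bucket name, instance type, and counts are hypothetical placeholders.

```python
# Minimal sketch of on-demand provisioning for a periodic processing campaign.
# Illustrative only: the Gaia study used RightScale management on top of
# EC2/S3, not this code, and all identifiers below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

NUM_WORKERS = 20                    # scale toward 1,000 nodes for a full run
AMI_ID = "ami-0123456789abcdef0"    # hypothetical image with the astrometric code installed
INPUT_BUCKET = "gaia-input-data"    # hypothetical bucket of read-only observation data

# Launch a batch of 8-core workers only when a processing run is due.
instances = ec2.create_instances(
    ImageId=AMI_ID,
    InstanceType="c5.2xlarge",      # 8 vCPUs, analogous to the study's 8-core high-CPU nodes
    MinCount=NUM_WORKERS,
    MaxCount=NUM_WORKERS,
)

# List the read-only input objects so the work can be divided among the nodes.
objects = s3.list_objects_v2(Bucket=INPUT_BUCKET).get("Contents", [])
print(f"launched {len(instances)} workers for {len(objects)} input objects")

# When the six-monthly run is finished, release everything and stop paying.
for instance in instances:
    instance.terminate()
```

The point of the pattern is that the “cluster” exists only for the duration of each six-monthly run; for the rest of the year there is nothing to pay for, power, or maintain.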

As one might imagine for something of an experiment, there were some initial challenges, but these yielded pleasant surprises as well, including the migration process, which is so often cited as one of the first barriers enterprise and HPC users consider as they weigh the costs, benefits, and overall value of running their HPC applications in the cloud. As Paul Parsons noted:

We first set out to evaluate if the astrometric processing could be run in the cloud at all. The subsequent aims were to identify the architectural challenges and to assess the financial impact of running the Gaia project’s HPC data processing in the cloud. The surprise for us was that the process of migrating to the cloud was relatively painless. The architecture did not need to be changed at all, proving we had designed a well-architected, loosely-coupled system.

Aside from concerns about migration, the other critical factor in any decision about running HPC applications in the cloud is, quite simply, performance. While this will remain an issue until technology, capability, and capacity are better aligned, the project did provide evidence that overall performance need not be a barrier; there are workarounds that can suffice until cloud performance for HPC improves. As Parsons and Olias said of their experience:

Traditionally HPC has not been a good candidate for cloud computing due to its requirement for tight integration between server nodes via low-latency interconnects. The performance overhead associated with virtualization, a prerequisite technology for migrating local applications to the cloud, hits scalability and efficiency in an HPC context. High-speed networking is also a critical requirement for HPC, as clusters of servers and storage need to be able to communicate with each other as fast as possible. This will change in the future as cloud providers launch products more apt for HPC. However, as we proved in the Gaia project, the possibility of provisioning more nodes than would be possible in an in-house cluster gives us a means to circumvent these barriers to a certain extent.

Many HPC customers have a high investment in technologies such as MPI and InfiniBand, and we therefore believe that allowing MPI and providing high-speed networking in the cloud are critical requirements. Is the cloud scalable for petascale computing and beyond? Yes. But is the cloud ready for high-speed networking? The InfiniBand performance gap is increasing, but improvements are also being made in the 10 GigE area, so we will have to wait and see how public cloud providers such as Amazon’s EC2 take on the challenge.
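
The latency sensitivity Parsons and Olias describe shows up clearly in a simple MPI ping-pong micro-benchmark. The sketch below, written with mpi4py, is an illustration of the communication pattern at issue rather than code from the Gaia study; it times small round-trip messages, which is exactly where a virtualized Ethernet network falls furthest behind InfiniBand.

```python
# Minimal MPI ping-pong latency sketch (illustrative, not from the Gaia study).
# Run with:  mpiexec -n 2 python pingpong.py
# On an InfiniBand cluster the round trip takes a few microseconds; over a
# virtualized cloud Ethernet network of the era it could be orders of
# magnitude slower, which is the gap discussed above.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = bytearray(8)      # tiny payload so the timing is dominated by latency
reps = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    elif rank == 1:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"average round-trip latency: {elapsed / reps * 1e6:.1f} microseconds")
```

Loosely-coupled workloads like Gaia’s astrometric processing sidestep this bottleneck because nodes rarely exchange messages, which is why the study could simply provision more nodes instead of faster interconnects.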

It is hard to deny that this proof of concept did what it set out to do: it showed that there are sustainable uses for the cloud in very large, data-intensive HPC applications and, furthermore, that with more advances in technology, questions about whether HPC and cloud are aligned will begin to dwindle.

By using cloud-based technologies, scientists and engineers can have on-demand access to large distributed infrastructures and completely customize their execution environment. Cloud computing provides the ability to scale the computing infrastructure up and down according to the requirements at any given time. Although cloud technologies are sufficient for distributed computing, they do not yet cope with all HPC applications, which have tighter constraints. Basically, it will depend on the demands of HPC customers and whether the industry is willing to offer a competitive solution. A lot of effort is being made in this arena by cloud service providers. Cloud allows economies of scale and pay-per-use, so expect an evolution of the cloud to meet HPC requirements.

There are many benefits to the cloud: no upfront costs, a pay-as-you-go billing model, virtually infinite computing resources, and so on. As energy prices increase in the coming years, power consumption becomes another important issue for HPC clouds. As clouds grow bigger, economies of scale will allow lower energy costs compared to small in-house clusters. Some large HPC customers we talked to recently are very interested in the cloud because, as they pointed out, they are not in the business of building datacenters.

Cloud computing can be a cost-effective solution for many HPC applications. Think about the opportunity cost of building your own datacenter versus deploying and running an application in the cloud within minutes. Cloud computing provides flexibility, elasticity, and the illusion of infinite computing resources. As technology matures, we will see more HPC applications moving to the cloud.
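
That opportunity cost can be framed with back-of-the-envelope arithmetic. The figures in the sketch below are hypothetical placeholders, not numbers from the feasibility study; they simply illustrate the shape of the comparison for a workload that runs only a couple of times a year.

```python
# Back-of-the-envelope comparison of an in-house cluster versus on-demand cloud
# for a workload that runs only twice a year. All figures are hypothetical
# placeholders, not numbers from the Gaia feasibility study.
CLUSTER_CAPEX = 2_000_000        # assumed purchase price of an in-house cluster
AMORTIZATION_YEARS = 3           # assumed hardware refresh cycle
CLUSTER_ANNUAL_OPEX = 300_000    # assumed power, cooling, hosting, and staff per year
CLOUD_HOURLY_RATE = 0.68         # assumed price per high-CPU instance-hour
NODES = 1000                     # nodes needed for one processing campaign
HOURS_PER_RUN = 100              # assumed wall-clock hours per campaign
RUNS_PER_YEAR = 2                # Gaia-style six-monthly processing

in_house_per_year = CLUSTER_CAPEX / AMORTIZATION_YEARS + CLUSTER_ANNUAL_OPEX
cloud_per_year = CLOUD_HOURLY_RATE * NODES * HOURS_PER_RUN * RUNS_PER_YEAR

print(f"in-house cluster (amortized) per year: ${in_house_per_year:,.0f}")
print(f"cloud, pay per use, per year:          ${cloud_per_year:,.0f}")
```

The structure of the comparison, rather than the particular numbers, is the point: an in-house cluster is paid for whether it is busy or idle, while a pay-per-use cloud bill scales with the handful of hours the campaign actually runs.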

The overall conclusion the team from The Server Labs reached is that there is a bright future for HPC in the cloud, but that future remains somewhat out of reach for mainstream HPC. The benefits may be clear, but until there are more proof of concept projects like theirs, a cloud that is truly ready for HPC can rightfully be considered some distance off. How distant depends, as with most things in HPC and cloud, on further research, development, and efforts to prove what seems viable in theory: cost and resource savings without dramatic reductions in performance.
