Cloud-Readiness and Looking Beyond Application Scaling

By Chris Downing

April 11, 2018

Editor’s note: In a follow-on to his well-received “How the Cloud Is Falling Short for HPC” article, Red Oak’s Chris Downing turns his attention to getting applications cloud-ready.

There are two aspects to consider when determining whether an application is suitable for running in the cloud. The first, which we will discuss here under the title of application readiness, examines how the run-time of a job is affected by the environment it runs in. The second, workflow readiness, forces us to think more broadly about how jobs fit into our day-to-day activities, and how effectively we are getting things done.

Application readiness

Application performance is fairly well understood in the HPC community. We go to great lengths to benchmark codes and determine the optimum job parameters based on the scaling characteristics observed. We avoid overheads and penalties such as those arising from virtualisation, and we insist on the most performant hardware our budgets can stretch to.

There are a few simple steps application developers can take to make their software more amenable to running in the cloud. The most crucial is a sane approach to checkpointing – the majority of well-developed applications do this by default, but it is a feature which could easily be overlooked in a home-spun tool which gradually grows in popularity and scope. Efficient checkpoint mechanisms matter for on-premise HPC, but even more so in the cloud, where pre-emptible instances will be the de facto job environment.
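
As a minimal sketch of what this means in practice, the Python fragment below checkpoints periodically and also traps the termination signal which providers typically send shortly before reclaiming a pre-emptible instance (the exact notice mechanism differs between AWS, Google and Azure); the file name, state layout and loop body are placeholders rather than anything prescriptive.

```python
import os
import pickle
import signal
import sys

CHECKPOINT = "state.pkl"  # placeholder checkpoint file name

def save_checkpoint(state):
    # Write to a temporary file and rename atomically, so a pre-emption
    # arriving mid-write never leaves a truncated checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "result": 0.0}  # fresh start

state = load_checkpoint()

def on_preempt(signum, frame):
    # Pre-emptible/spot instances usually receive SIGTERM (or an
    # equivalent metadata notice) shortly before being reclaimed.
    save_checkpoint(state)
    sys.exit(0)

signal.signal(signal.SIGTERM, on_preempt)

for step in range(state["step"], 1_000_000):
    state["result"] += step * 1e-6  # stand-in for real computation
    state["step"] = step + 1
    if step % 10_000 == 0:          # periodic checkpoint as a backstop
        save_checkpoint(state)
```

Restarting the job on a fresh instance then simply means re-running the script against the same checkpoint file.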

Another aspect to consider is the potential for changes to temporary storage. The overwhelming majority of HPC applications write their outputs to simple text files, with the more keenly developed software making use of the likes of HDF5 or NetCDF to manage data. The co-existence of HPC workloads with enterprise IT tools opens up a few new avenues when figuring out how to deliver better performance – the simplest of which would be the use of databases. Running multiple “production” databases on an HPC cluster is uncommon due to the perceived fragility of the infrastructure, but in the cloud it would be trivial. Depending on the application, a database could offer performance benefits in the analysis phase, as well as opening the door to providing the results of large simulations to the wider community as a service.
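
To make the idea concrete, the sketch below uses Python's built-in sqlite3 module as a stand-in for what would more realistically be a managed cloud database service; the table layout and values are invented for illustration.

```python
import sqlite3

# sqlite3 keeps the example self-contained; in the cloud this would
# more likely be a managed service such as a hosted PostgreSQL.
conn = sqlite3.connect("results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS results (
    run_id TEXT, temperature REAL, pressure REAL, energy REAL)""")

# Instead of appending to text files, each job inserts its outputs...
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [("run-001", 300.0, 1.0, -1042.7),
     ("run-002", 350.0, 1.0, -1039.2)])
conn.commit()

# ...and the analysis phase becomes a query rather than a parsing job.
for run_id, energy in conn.execute(
        "SELECT run_id, energy FROM results WHERE temperature >= ?", (350,)):
    print(run_id, energy)
```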

Finally, users should remember that many (perhaps most) applications do not scale particularly well anyway, or are typically run over only a small number of nodes – in that case, using fewer cores for a longer duration is more efficient, provided a longer wait is tolerable. While the poor price/performance of public clouds for multi-node scientific computing can easily be interpreted as a reason not to use these resources, it should instead be thought of as a gentle shove away from wasteful practices, and towards patience. The focus for applications running in the cloud should therefore be on extracting value from the outputs – which is a workflow problem rather than an application one.
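
A back-of-the-envelope Amdahl's law estimate shows why: in the sketch below, the 90 percent parallel fraction is purely illustrative, and any real code would need to be benchmarked.

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial fraction limits the achievable speedup.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.90  # assumed parallel fraction, for illustration only
for cores in (16, 64, 256):
    s = amdahl_speedup(p, cores)
    # Cloud billing scales with core-hours, i.e. cores * (runtime / speedup).
    relative_cost = cores / s  # relative to one core running serially
    print(f"{cores:4d} cores: speedup {s:5.2f}, cost {relative_cost:5.1f}x")
```

With these (invented) numbers, going from 16 to 256 cores shortens the runtime by only around 1.5x while increasing the bill roughly tenfold – exactly the wasteful pattern which cloud pricing makes visible.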

Workflow readiness

The workflow which surrounds and links together applications is another area where optimisation will need to occur, and is arguably where we ought to focus our attention when considering the cloud. At the design-of-experiments level, researchers who are being steered towards cloud usage should consider whether their research project is making the best use of the available resource scaling. The sort of large-scale, embarrassingly parallel parameter-space exploration which might have struggled to win approval on a crowded HPC system is a perfect model for the cloud – the researcher is effectively limited only by their budget and their ability to deal with the job outputs.
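
A hypothetical sketch of such a sweep is shown below; submit_job stands in for whichever batch or orchestration API is actually in use, and the parameter names and values are invented.

```python
import itertools

temperatures = [250, 300, 350, 400]   # K
pressures = [0.5, 1.0, 2.0]           # atm
concentrations = [0.01, 0.1, 1.0]     # mol/L

def submit_job(params):
    # Placeholder for a real submission call (a cloud batch service,
    # a container orchestrator, or similar); here we just print.
    print(f"submitting job with {params}")

# Every point in the parameter space becomes an independent cloud job;
# the sweep is limited by budget, not by the length of a shared queue.
for t, p, c in itertools.product(temperatures, pressures, concentrations):
    submit_job({"temperature": t, "pressure": p, "concentration": c})
```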

Storage utilisation is another area where workflows can be optimised for the cloud. When jobs interact directly with a permanent file system, as is the case for traditional HPC, users do not need to worry much about what state their data is in until they actually want to perform their analysis. The same model could work in the cloud, but the ephemeral nature of cloud resources means that each job would first need to fetch its data from a separate persistent store (likely an object storage service), then put the results back at the end. Rather than seeing this as a nuisance, users should consider whether “serverless” computing offers a route to turning these put/get steps into part of an automated data analysis pipeline, for example by running data cleansing or analysis scripts programmatically. Rather than the user waiting for jobs to finish and then performing a series of manual steps to extract something valuable, portions of the analysis can be turned into a scripted procedure which runs automatically once the necessary data are available.
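
As one provider-specific sketch of the idea, the AWS Lambda handler below would fire whenever a job uploads an output object to S3, pull the object down, run a placeholder analysis step, and write the derived result back; the bucket layout and the analysis itself are assumptions made for the example.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked automatically by an S3 "object created" trigger.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch the freshly written job output from the persistent store.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Placeholder for a real cleansing or analysis step.
        summary = {"source": key, "size_bytes": len(body)}

        # Write the derived result back, ready for the next pipeline stage.
        s3.put_object(Bucket=bucket,
                      Key=f"analysed/{key}.json",
                      Body=json.dumps(summary).encode())
```

Google Cloud Functions and Azure Functions offer equivalent triggers for their respective object stores.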

Containerised workflows are increasingly popular in HPC, with Singularity leading the charge towards making reproducible, user-defined environments the norm. Running in a container makes HPC jobs portable, both between different on-premise systems and between physical and cloud resources. By combining containerised applications with general-purpose serverless analysis scripts, it is easy to imagine how a community of researchers using the same code might put together a set of computational and analysis pipelines, leading to more standardised outputs and easing the process of turning discoveries into publication-ready results. More importantly, rather than just sharing their outputs, researchers would have an easier way to share their whole pipeline. This might raise some questions regarding competition, but it is surely the best route to improving the reproducibility of science.
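
For readers who have not met the format, a minimal Singularity definition file looks something like the following; the base image, package list and my_simulation binary are placeholders for a real build.

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Install the toolchain and build or install the simulation code here.
    apt-get update && apt-get install -y build-essential

%runscript
    # The container then runs identically on a laptop, an on-premise
    # cluster, or a cloud instance.
    exec my_simulation "$@"
```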

Making it happen

Most of the modifications described here are well outside the comfort zone of a novice research software engineer. Likewise, refactoring crusty Fortran code to accommodate modern system architectures is likely to be just as unappealing to the new wave of computer scientists as working on mainframe COBOL would be – perhaps even less appealing, given the likely salary differential. There is therefore room in the middle for a new skillset, one which brings together an interest in scientific computing with an acceptance that traditional HPC cluster designs might not be the future – something like Scientific DevOps.

As with “normal” research software engineering in years past (and, some would argue, still to this day), the problem will inevitably be money. Paying people to churn out publications as part of the process of scientific discovery is accepted practice, but exploring new methods of getting things done has proved a much tougher sell. Those responsible for dishing out grant money tend to be cautious, and traditionalist in outlook.

We should therefore look to the cloud providers themselves to drive this innovation – as the adage goes, you need to spend money to make money, and right now a large pool of scientific computing users is lagging far behind its enterprise counterparts in cloud adoption. Tapping into this market will naturally require some investment on the part of Amazon, Google and Microsoft – but they should recognise that people and skills matter more than new features when splashing around their marketing budgets.

About the Author

Chris Downing joined Red Oak Consulting (@redoakHPC) in 2014 on completion of his PhD thesis in computational chemistry at University College London. Having performed academic research using the last two UK national supercomputing services (HECToR and ARCHER) as well as a number of smaller HPC resources, Chris is familiar with the complexities of matching both hardware and software to user requirements. His detailed knowledge of materials chemistry and solid-state physics means that he is well placed to offer insight into emerging technologies. As a Senior Consultant, Chris has a highly technical skill set and works mainly in the innovation and research team, providing a broad range of technical consultancy services. To find out more, visit www.redoakconsulting.co.uk.
