Cloud-Readiness and Looking Beyond Application Scaling

By Chris Downing

April 11, 2018

Editor’s note: In a follow-on to his well-received “How the Cloud Is Falling Short for HPC” article, Red Oak’s Chris Downing turns his attention to getting applications cloud-ready.

There are two aspects to consider when determining whether an application is suitable for running in the cloud. The first, which we will discuss here under the title application readiness, lets us examine how the run-time of the job is affected by the environment we are running in. The second, workflow readiness, forces us to think more broadly about how the jobs fit into our day-to-day activities, and how effectively we are getting things done.

Application readiness

Application performance is fairly well understood in the HPC community. We go to great lengths to benchmark codes and determine the optimum job parameters based on the scaling characteristics observed. We avoid overheads and penalties such as those arising from virtualisation, and we insist on the most performant hardware our budgets can stretch to.

There are a few simple steps application developers can take to make their software more amenable to running in the cloud. The most crucial is a sane approach to checkpointing – the majority of well-developed apps do this by default, but it is a feature easily overlooked in a home-spun tool that gradually grows in popularity and scope. Efficient checkpoint mechanisms are crucial to on-premise HPC, but even more so in the cloud, where pre-emptible instances will be the de facto job environment.
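
To make the checkpointing requirement concrete, here is a minimal Python sketch of the pattern: write state to disk atomically at regular intervals, and flush once more when the scheduler signals that a pre-emptible instance is about to be reclaimed. The file name, loop body and checkpoint interval are all illustrative; a real code would serialise its own domain state rather than a toy dictionary.

```python
import os
import pickle
import signal

CHECKPOINT = "state.pkl"          # illustrative checkpoint path
preempted = False

def _on_preempt(signum, frame):
    # Cloud schedulers typically send SIGTERM shortly before
    # reclaiming a pre-emptible instance.
    global preempted
    preempted = True

signal.signal(signal.SIGTERM, _on_preempt)

def save(state):
    # Write to a temp file, then rename: the checkpoint is never
    # left half-written if the instance disappears mid-save.
    with open(CHECKPOINT + ".tmp", "wb") as f:
        pickle.dump(state, f)
    os.replace(CHECKPOINT + ".tmp", CHECKPOINT)

def load():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "data": 0.0}   # fresh start

state = load()
while state["step"] < 1_000_000 and not preempted:
    state["data"] += 1e-6             # stand-in for real work
    state["step"] += 1
    if state["step"] % 10_000 == 0:   # periodic checkpoint
        save(state)
save(state)                           # final flush on exit or pre-emption
```

A restarted job simply picks up from the last saved state, so losing an instance costs at most one checkpoint interval of work.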

Another aspect to consider is the potential for changes to temporary storage. The overwhelming majority of HPC applications write their outputs to simple text files, with the more keenly developed software making use of the likes of HDF5 or NetCDF to manage its data. Co-existence of HPC workloads with enterprise IT tools allows us to open up a few new avenues of research when figuring out how to deliver better performance – the simplest of which would be the use of databases. Running multiple “production” databases on an HPC cluster is not common due to the perceived fragility of the infrastructure, but in the cloud, it would be trivial. Depending on the application, a database could offer performance benefits in the analysis phase, as well as opening the door to providing results of large simulations to the wider community as a service.
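
As a rough illustration of the database idea, the sketch below uses Python's built-in SQLite support to record per-step results as a simulation runs, turning the analysis phase into a query rather than a text-file parse. The schema and values are hypothetical; in the cloud, a managed database service could stand in for the local file.

```python
import sqlite3

# Hypothetical schema: one row per (run, timestep) observation.
conn = sqlite3.connect("results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS results (
                    run_id TEXT,
                    step   INTEGER,
                    energy REAL,
                    PRIMARY KEY (run_id, step))""")

def record(run_id, step, energy):
    conn.execute("INSERT OR REPLACE INTO results VALUES (?, ?, ?)",
                 (run_id, step, energy))
    conn.commit()

# The simulation writes as it goes...
for step in range(5):
    record("run-042", step, -1.0 / (step + 1))

# ...and analysis becomes a query instead of parsing output files.
rows = conn.execute("SELECT step, energy FROM results "
                    "WHERE run_id = ? ORDER BY step", ("run-042",))
for step, energy in rows:
    print(step, energy)
```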

Finally, users should remember that many (perhaps most) applications do not scale particularly well anyway, or are often run over only a small number of nodes – in that case, using fewer cores for a longer duration is more efficient, provided a longer wait is tolerable. While the poor price/performance of public clouds for multi-node scientific computing can easily be interpreted as a reason not to use these resources, it should instead be thought of as a gentle shove away from wasteful practices, and towards patience. The focus for applications running in the cloud should therefore be on extracting value from the outputs, which is a workflow problem rather than an application one.
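
The trade-off is easy to put numbers on. The toy calculation below assumes a flat per-node-hour price and a made-up (but typical-looking) scaling curve; under those assumptions the single-node run is the cheapest way to get the answer, and the 64-node run costs roughly three times as much in exchange for a faster turnaround.

```python
# Illustrative arithmetic only: the price and speedups are invented.
PRICE_PER_NODE_HOUR = 1.0           # assumed flat cloud price
BASE_HOURS = 64.0                   # single-node runtime of the job

# Assumed measured scaling: speedup achieved at each node count.
speedup = {1: 1.0, 4: 3.2, 16: 9.5, 64: 22.0}

for nodes, s in speedup.items():
    hours = BASE_HOURS / s
    cost = nodes * hours * PRICE_PER_NODE_HOUR
    print(f"{nodes:>3} nodes: {hours:6.1f} h wall-clock, cost {cost:6.1f}")
# 1 node:  64.0 h, cost  64  <- cheapest, if you can wait
# 64 nodes: 2.9 h, cost 186  <- ~3x the spend for the same answer
```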

Workflow readiness

The workflow which surrounds and links together applications is another area where optimisation will need to occur, and is arguably the area where we ought to focus our attention when considering the cloud. At the design-of-experiments level, researchers who are being steered towards cloud usage should consider whether their research project is making best use of the available resource scaling. The sort of large-scale, embarrassingly parallel parameter-space exploration which might have struggled to get approval to run on a crowded HPC system is a perfect model for the cloud – the researcher is effectively limited only by their budget and their ability to deal with the job outputs.
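
As a sketch of what such a sweep looks like, the Python below fans a hypothetical parameter grid out over a pool of workers. A local process pool stands in here for what would, in the cloud, be a fleet of independent single-node instances; the parameters and the work function are placeholders.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_case(temperature, pressure):
    # Stand-in for a real single-node simulation; each case is
    # independent, which is what makes the sweep embarrassingly parallel.
    return (temperature, pressure, temperature * pressure)

# Hypothetical parameter grid; in the cloud the practical limits are
# budget and how the outputs are handled, not queue time.
temperatures = [250, 300, 350, 400]
pressures = [1.0, 2.0, 5.0]

if __name__ == "__main__":
    cases = list(itertools.product(temperatures, pressures))
    with ProcessPoolExecutor() as pool:   # stand-in for N cloud instances
        for t, p, result in pool.map(run_case, *zip(*cases)):
            print(f"T={t} P={p} -> {result}")
```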

Storage utilisation is another area where workflows can be optimised for the cloud. When jobs directly interact with a permanent file system, as is the case for traditional HPC, users do not need to worry much about what state their data is in until they actually want to perform their analysis. The same model could work in the cloud, but the ephemeral nature of cloud resources means that each job would need to first get data out of a separate persistent store (likely an object storage service), then put the file back at the end. Rather than seeing this as a nuisance, users should consider whether “serverless” computing offers a route to turn these put/get steps into part of an automated data analysis pipeline, for example by running data cleansing or analysis scripts programmatically. Rather than the user waiting for jobs to finish, then performing a series of manual steps to extract something valuable, portions of the analysis can be turned into a scripted procedure which occurs automatically once the necessary data are available.
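
A minimal sketch of that idea, assuming an AWS Lambda-style function subscribed to S3 object-creation events: each uploaded job output is fetched, passed through a (hypothetical) cleansing step, and written back under a separate prefix, with no manual intervention. Bucket names, keys and the cleansing logic are all placeholders.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked when a job uploads its raw output to the bucket; the
    # event below follows the standard S3 notification shape.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    local = "/tmp/" + key.split("/")[-1]
    s3.download_file(bucket, key, local)

    # Hypothetical cleansing step: keep only non-comment lines.
    with open(local) as f:
        rows = [line for line in f if not line.startswith("#")]

    # Push the cleaned result to a separate prefix for later analysis.
    cleaned_key = "cleaned/" + key
    s3.put_object(Bucket=bucket, Key=cleaned_key,
                  Body="".join(rows).encode())
    return {"status": "ok", "cleaned": cleaned_key}
```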

Containerised workflows are increasingly popular in HPC, with Singularity leading the charge towards making reproducible user-defined environments the norm. Running in a container makes HPC jobs portable, both between different on-premise systems and between physical and cloud resources. By combining containerised applications with general-purpose serverless analysis scripts, it is easy to imagine how a community of researchers using the same code might put together a set of computational and analysis pipelines, leading to more standardised outputs and easing the process of turning discoveries into publication-ready results. More importantly – rather than just sharing their outputs, researchers would have an easier way to share their whole pipeline. This might raise some questions regarding competition, but it is surely the best route to improving the reproducibility of science.
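
A pipeline stage then reduces to running each command inside the same image wherever the job happens to land. The sketch below simply shells out to the standard singularity exec command from Python; the image name and the stage commands are hypothetical, and the only real requirement is that Singularity is installed on the host.

```python
import subprocess

# Hypothetical image and per-stage commands; 'singularity exec' runs a
# command inside the container, so the same pipeline works unchanged
# on an on-premise cluster or a cloud VM.
IMAGE = "community-code.sif"

stages = [
    ["./simulate", "--input", "params.yml"],
    ["python", "analyse.py", "--in", "raw.h5", "--out", "summary.csv"],
]

for cmd in stages:
    subprocess.run(["singularity", "exec", IMAGE] + cmd, check=True)
```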

Making it happen

Most of the modifications described here are well outside the comfort zone of a novice research software engineer. Likewise, refactoring crusty Fortran code to accommodate modern system architectures is likely to be just as unappealing to the new wave of computer scientists as working on mainframe COBOL would be – perhaps even less appealing, given the likely salary differential. There is therefore room in the middle for a new skillset, one which brings together an interest in scientific computing with an acceptance that traditional HPC cluster designs might not be the future – something like Scientific DevOps.

As with “normal” research software engineering in years past (and, some would argue, still to this day), the problem will inevitably be money. Paying people to churn out publications as part of the process of scientific discovery is accepted practice, but exploring new methods of getting things done has proved a much tougher sell. Those responsible for dishing out grant money tend to be cautious traditionalists.

We should therefore be looking to the cloud providers themselves to drive this innovation – as the adage goes, you need to spend money to make money, and right now a large pool of scientific computing users lags far behind its enterprise counterparts in cloud adoption. Tapping into this market will naturally require some investment on the part of Amazon, Google and Microsoft – but they should recognise that people and skills are more important than new features when splashing around their marketing budget.

About the Author

Chris Downing joined Red Oak Consulting @redoakHPC in 2014 on completion of his PhD thesis in computational chemistry at University College London. Having performed academic research using the last two UK national supercomputing services (HECToR and ARCHER) as well as a number of smaller HPC resources, Chris is familiar with the complexities of matching both hardware and software to user requirements. His detailed knowledge of materials chemistry and solid-state physics means that he is well-placed to offer insight into emerging technologies. Chris is a Senior Consultant with a highly technical skill set, working mainly in the innovation and research team to provide a broad range of technical consultancy services. To find out more, visit www.redoakconsulting.co.uk.
