Challenges Ahead for HPC Applications in the Cloud

By Dr. Mohamed Ahmed

August 16, 2010

High performance computing is known for its ability to accelerate scientific experiments and discovery through modeling and simulation. The complexity of the mathematical models and the huge amounts of data that must be processed in a short time mandate the use of high-throughput hardware infrastructure and an optimized software stack. HPC applications very quickly consume processing power, memory, storage, and network bandwidth for a relatively short time. Other distributed systems utilize the aggregated power of a cluster, but they do not necessarily utilize all available resources as aggressively, or in such a short amount of time, as HPC applications do.

In essence, this is why cloud services and infrastructure may make perfect sense to most businesses. It is expected that 80 percent of general-purpose applications will be hosted in clouds by the year 2020. For HPC, however, there needs to be a deeper analysis of how HPC users can make use of cloud architectures.

The main objective of cloud computing is to allow end users to plug their applications into virtual machines in a manner quite similar to hosting them on physically dedicated machines. Users should be able to access and manage this infrastructure exactly the same way they would if they had the physical machines on-premises.

In HPC, applications are developed to deal with large numbers of compute nodes, relatively large memories, and huge storage capacities. Keep in mind that cloud services can be provided at two levels: (1) cloud infrastructure, or (2) cloud-hosted applications. The first type targets advanced users who would like to utilize cloud infrastructure to build their own proprietary software serving their specific needs. The second type targets users who would like to use ready-made applications running on top of the cloud infrastructure without digging into the details of the virtualized resources exposed by the cloud, such as storage, processing, interconnection, etc.

In this article I will focus mainly on the virtualization of cloud infrastructure and the usage patterns of its resources. I'll briefly touch upon possible HPC applications that can be offered through cloud infrastructure and characterize their utilization of resources in such infrastructure.

Before digging into how the cloud infrastructure can expose its services to HPC users, let's focus first on the building unit: a virtualized node. Node virtualization is not a straightforward task for HPC usage patterns. Let me walk you through the possible usage patterns. It may appear to be a low-level analysis, but I think this will give us a deeper understanding of what is actually required to build HPC in the cloud.

I'll be discussing processing, memory, storage, and network usage patterns. I'll also try to uncover some of the overall policies and mechanisms required for resource management and scheduling. This is a very critical aspect of providing the appropriate services to HPC users through the cloud.

Processing Patterns
 
HPC applications are, to a great extent, scientific algorithms focused on simulating mathematical models in earth science, chemistry, physics, etc. In addition to the main objective of utilizing the aggregated processing power of large HPC clusters, these applications also focus on utilizing micro-resources inside each processor, especially with multi-core processors. Utilizing multi-threading for fine-grained parallelism is a very critical component in speeding up these applications. These applications also exploit even more specific processor features, such as pipeline organization, branch prediction, and instruction prefetching, to speed up execution.
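To make the fine-grained side of this concrete, here is a minimal sketch of a dense vector kernel that relies on per-node multi-threading; the problem size, constants, and array names are purely hypothetical:

```c
/* Sketch: fine-grained, per-node parallelism in an HPC kernel (C + OpenMP).
 * Compile with something like: cc -O3 -fopenmp saxpy.c */
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const size_t n = 1 << 26;               /* hypothetical problem size */
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Each OpenMP thread works on a contiguous slice, keeping the
     * floating-point pipelines and hardware prefetchers busy. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        y[i] = 2.5f * x[i] + y[i];

    printf("threads available: %d, y[0] = %f\n", omp_get_max_threads(), y[0]);
    free(x); free(y);
    return 0;
}
```

How well such a loop runs on a virtualized node depends directly on whether the virtual processors map cleanly onto physical cores and caches.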

The other family of HPC applications is based on combinatorial algorithms, such as graph traversal, sorting, and string matching. These algorithms primarily utilize the integer units inside the microprocessor. However, they still utilize the multi-threading capabilities inside each compute node to speed up execution.

In general-purpose and business-oriented applications, multi-threaded models might also be utilized. There, however, threads are deployed to serve high-level requests, such as different database transactions, in order to gain execution speedup. Such threads can be easily mapped to virtual processors and scheduled by the OS onto the physical processors.

It is quite challenging to manage the virtualization of processors if accelerators, such as the Cell processor or GPGPUs, are provided in the cloud. Each process may utilize one or more GPUs to accelerate some compute-intensive parts, or kernels. The question is: how can we virtualize and schedule these accelerators? Are they going to be accessible directly by hosted applications? Or will there be a lightweight virtualization mechanism responsible mainly for scheduling and accounting for the accelerators? Some research efforts, such as GViM and GFusion, are actively working in the area of accelerator virtualization.
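To see what such a layer would have to intercept, here is a minimal host-side sketch of how an application selects a GPU today through the CUDA runtime; the device index is just for illustration, and a cloud-level scheduler would need to mediate exactly these calls rather than expose physical devices directly:

```c
/* Sketch: direct accelerator selection that a virtualization layer
 * would need to intercept and account for. Compile with nvcc. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no GPUs visible to this (virtual) node\n");
        return 1;
    }

    /* Today the application picks a physical device itself; in a cloud,
     * the scheduler would decide which virtual GPU maps to which device. */
    cudaSetDevice(0);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("using %s with %d multiprocessors\n",
           prop.name, prop.multiProcessorCount);
    return 0;
}
```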

Memory

Most HPC applications swing between memory intensity and arithmetic intensity. The more floating-point operations (flops) required per byte accessed from the system's main memory, the higher the application's arithmetic intensity, and vice versa. The key here is not only the amount of memory required in a single virtualized node; it is also the usage pattern relative to the processing requirements. HPC applications usually use memory in a very demanding pattern: the better the algorithm is designed, the closer to peak bandwidth it stays for most of its execution lifetime.
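As a rough illustration of arithmetic intensity, compare a streaming vector update with a dense matrix multiply; the sizes below are hypothetical and the byte counts are back-of-the-envelope, but they show why one kernel lives at the memory system and the other at the floating-point units:

```c
/* Sketch: estimating arithmetic intensity (flops per byte) for two kernels,
 * assuming 8-byte doubles and counting only unavoidable memory traffic. */
#include <stdio.h>

int main(void)
{
    double n = 1e6;      /* hypothetical vector length */
    double m = 1024;     /* hypothetical matrix dimension (m x m) */

    /* Vector update y[i] = a*x[i] + y[i]:
     * 2 flops per element, 3 doubles (24 bytes) moved per element. */
    double triad_ai = (2.0 * n) / (24.0 * n);

    /* Dense matrix multiply C = A*B:
     * ~2*m^3 flops over ~3*m^2 doubles of unavoidable traffic. */
    double gemm_ai = (2.0 * m * m * m) / (24.0 * m * m);

    printf("vector update: %.3f flops/byte (memory-bound)\n", triad_ai);
    printf("matrix multiply: %.1f flops/byte (compute-bound if blocked well)\n", gemm_ai);
    return 0;
}
```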

Furthermore, as arithmetic intensity decreases, more pressure is placed on the memory system: the processor spends less time computing and more time moving data to or from the system's memory. Also, advanced HPC developers oftentimes consider the physical properties of the memory system to maximize bandwidth, such as the number of banks, the size of the memory controller buffer, latency, maximum bandwidth, etc.

I think standard virtualization abstracts away all of these hardware properties and assumes the standard memory usage pattern, i.e., small requests that do not form streams of data movement. I also believe that some good research can be done in the area of memory abstraction. Hypervisors need to consider multiplexing physical memory in a way that preserves most of its physical properties. This should give more room for memory performance optimization.
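One simple way to check whether a given virtual machine actually preserves streaming behavior is to run a coarse bandwidth probe inside it. The sketch below is a simplified STREAM-style triad, not the official benchmark, and assumes the guest timer is reliable enough for a rough estimate:

```c
/* Sketch: coarse streaming-bandwidth probe to run inside a VM.
 * A large gap versus the bare-metal figure hints that the hypervisor's
 * memory multiplexing is breaking the streaming pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const size_t n = 1 << 27;                 /* three ~1 GiB arrays of doubles */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + 3.0 * c[i];             /* triad: three arrays streamed */
    double t1 = omp_get_wtime();

    double bytes = 3.0 * n * sizeof(double);
    printf("observed bandwidth: %.2f GB/s\n", bytes / (t1 - t0) / 1e9);
    free(a); free(b); free(c);
    return 0;
}
```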

Storage

HPC applications need two types of permanent storage: (1) I/O storage, and (2) scratch storage. The first type stores the input data and the final execution output, such as FFT points, input matrices, etc. The second type is used mainly for storing intermediate results, checkpointing, or volatile input sets. I/O storage needs to be kept in a centralized place so that all threads or processes in a cluster can have unconditional access to it. I/O reads and writes take place in bursts: all processes read their input data sets at almost the same time and, assuming good load balancing, also write their output concurrently. This mandates storage devices with very high bandwidth to satisfy many requests at the same time. From my observations, most HPC applications ask for relatively large chunks of data in every I/O attempt, which reduces the effect of read or write latency on these devices.
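That bursty, large-chunk access pattern is typically expressed with collective I/O. Here is a minimal MPI-IO sketch, with a hypothetical file name and chunk size, in which every rank reads its own large slice of a shared input file at the same time:

```c
/* Sketch: bursty, large-chunk collective input read with MPI-IO.
 * Every rank pulls a contiguous slice of a shared input file at once. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 24;            /* hypothetical: 128 MiB of doubles per rank */
    double *buf = malloc((size_t)count * sizeof *buf);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "input.dat",   /* hypothetical input file */
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* All ranks issue their large reads together: one I/O burst. */
    MPI_Offset offset = (MPI_Offset)rank * count * sizeof(double);
    MPI_File_read_at_all(fh, offset, buf, count, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    if (rank == 0) printf("burst read complete\n");
    free(buf);
    MPI_Finalize();
    return 0;
}
```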

I see most cloud systems providing conventional, physically centralized storage devices connected to a high-speed interconnect. This architecture might be a good one if the whole HPC system is working on a single problem at a given time. However, if multiple applications are using resources through a cloud, this physical architecture may need to be rethought. Distributed, rack-aware file systems, such as the Hadoop Distributed File System (HDFS), might be a very good option in some cases. Building multiple storage devices and attaching each one to a few racks or a cabinet is another excellent option; it matches HPC applications utilizing the cloud architecture, where each application will use one or a few racks, and it makes sense to place storage near the processors. The possibilities are many and may need a separate article, so I will come back to them later.

Scratch storage should by default be local to each processor. Most HPC architectures provide such scratch storage spaces: each rack would have one or more hard disks to quickly store and retrieve scratch data. This scratch data is volatile and usually gets erased when application execution ends. I think the best reconsideration here is to replace these hard disks with newer SSDs to save power and speed up execution, since access to them can be quite frequent.
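As a simple illustration of that usage pattern, a solver might periodically dump intermediate state to node-local scratch and discard it after a clean run; the /scratch mount point and checkpoint interval below are hypothetical, but frequent writes like these are exactly where local SSDs pay off:

```c
/* Sketch: periodic checkpointing of intermediate state to node-local scratch.
 * The /scratch path and the checkpoint interval are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

static void checkpoint(const double *state, size_t n, int step)
{
    char path[256];
    snprintf(path, sizeof path, "/scratch/ckpt_step%06d.bin", step);
    FILE *f = fopen(path, "wb");
    if (!f) { perror("checkpoint"); return; }
    fwrite(state, sizeof *state, n, f);   /* volatile: removed after the run */
    fclose(f);
}

int main(void)
{
    const size_t n = 1 << 20;
    double *state = calloc(n, sizeof *state);

    for (int step = 0; step < 100; step++) {
        /* ... advance the simulation one step ... */
        if (step % 10 == 0)               /* hypothetical checkpoint interval */
            checkpoint(state, n, step);
    }
    free(state);
    return 0;
}
```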

Networking

Under the cloud model, there are three sources of network traffic: (1) remote user communication, (2) I/O, and (3) inter-process communication. Remote user communication takes place when large data sets are being sent to or received from a remote site; the end user usually prepares the input or retrieves the results. I/O traffic can again be optimized by distributing storage across different NAS devices. However, utilizing systems such as the Hadoop Distributed File System (HDFS) may not be the optimal solution if users are reading and writing large chunks of data in most of their HPC applications.

Such an architecture will overload the internal interconnect and the compute nodes as well. Inter-process communication, on the other hand, is characterized by high frequency and small data chunks, so latency in this case is a very important factor. In addition to low-latency networking equipment, this bottleneck can be largely avoided by placing virtual nodes as close as possible to each other, on the same physical node if possible.
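A quick way to see how much placement matters is a ping-pong probe between two ranks. The sketch below is a standard small-message latency pattern, not tied to any particular cloud; the average round-trip time it reports will typically differ markedly between co-located and far-apart virtual nodes:

```c
/* Sketch: small-message ping-pong latency between two MPI ranks.
 * Run with exactly two ranks, e.g.: mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char msg[8] = {0};                        /* tiny payload: latency-bound */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip: %.2f us\n", (t1 - t0) / iters * 1e6);
    MPI_Finalize();
    return 0;
}
```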

Thus far, I have tried to pinpoint some of the qualitative aspects of resource usage patterns. Scheduling and virtualizing resources the same way it is done for general-purpose applications will, I think, produce disappointing results. Cloud infrastructure is still lucrative when comparing its economics to building in-house HPC machines. However, cloud for HPC has to be efficient enough to reach proper performance ceilings without disappointing customers who have probably, at some point, run their HPC applications on dedicated machines.

Subsequent articles, which will be featured here as part of a continuing series, will discuss some of my findings in characterizing the resource usage of specific HPC applications, such as BLAST, DGEMM, FFT, etc., on cloud infrastructure.

About the Author

Mohamed Ahmed is an assistant professor at the department of computer science and engineering of the American University in Cairo (AUC). He received his BS and MSc from the AUC and his PhD from the University of Connecticut (UCONN). During his master's he was one of the early researchers to build a component-based operating system using object-oriented technologies. He then decided to move to the wild world of high performance computing (HPC), working in different sub-domains such as performance engineering, HPC applications, and cloud computing for HPC systems.

Dr. Mohamed has one provisional patent and several peer-reviewed publications in operating systems engineering, reliability, threading models, and programming models. Dr. Mohamed's research interests fall mainly under HPC. His current focus is on utilizing multi-/many-core microprocessors in massively parallel systems. One of his objectives is to make HPC systems available to both researchers in other science domains and industry at a fraction of the current cost of HPC infrastructure, and ready to use in a very short time. He is currently working on porting applications and algorithms for biology, materials science, and computational chemistry to new compute acceleration architectures such as GPGPUs.

For more, please see:
 
– http://www.cse.aucegypt.edu/~mahmed/

– http://MohamedFAhmed.wordpress.com/
