Challenges Ahead for HPC Applications in the Cloud

By Dr. Mohamed Ahmed

August 16, 2010

High performance computing is known for its ability to accelerate scientific experiments and discovery through modeling and simulation. The complexity of the mathematical models and the huge amounts of data that must be processed in a short time mandate a high-throughput hardware infrastructure and an optimized software stack. HPC applications consume processing power, memory, storage, and network bandwidth very quickly, for a relatively short time. Other distributed systems utilize the aggregated power of a cluster, but they do not necessarily utilize all available resources as aggressively, in as short an amount of time, as HPC applications do.

In essence, this is why cloud services and infrastructure may make perfect sense to most businesses. It is expected that 80% of general-purpose applications will be hosted in clouds by the year 2020. For HPC, however, a deeper analysis is needed of how users can make use of cloud architectures.

The main objective of cloud computing is to allow end users to plug their applications into virtual machines in a manner quite similar to hosting them on physically dedicated machines. Users should be able to access and manage this infrastructure exactly the same way they would if they had the physical machines on-premises.

In HPC, applications are developed to deal with large numbers of compute nodes, with relatively large memories and huge storage capacities. Keep in mind that cloud services can be provided at two levels: (1) cloud infrastructure, or (2) cloud-hosted applications. The first type targets advanced users who would like to utilize cloud infrastructure to build their own proprietary software serving their specific needs. The second type targets users who would like to use ready-made applications running on top of the cloud infrastructure without digging into the details of the virtualized resources the cloud exposes, such as storage, processing, interconnection, etc.

In this article I will focus mainly on the virtualization of cloud infrastructure and the usage patterns of its resources. I’ll briefly touch upon possible HPC applications that can be offered through cloud infrastructure and characterize how they utilize resources in such infrastructure.

Before digging into how cloud infrastructure can expose its services to HPC users, let’s focus first on the building unit: a virtualized node. Node virtualization is not a straightforward task for HPC usage patterns. Let me walk you through the possible usage patterns. It may appear to be a low-level analysis, but I think it will give us a deeper understanding of what is actually required to build HPC in the cloud.

I’ll discuss processing, memory, storage, and network usage patterns. I’ll also try to uncover some of the overall policies and mechanisms required for resource management and scheduling. This is a very critical aspect of providing the appropriate services to HPC users through the cloud.

Processing Patterns
 
HPC applications are, to a great extent, scientific algorithms focused on simulating mathematical models in earth science, chemistry, physics, etc. In addition to the main objective of utilizing the aggregated processing power of large HPC clusters, these applications also focus on utilizing micro-resources inside each processor, especially with multi-core processors. Multi-threading for fine-grained parallelism is a very critical component in speeding up these applications. These applications also exploit even more specific processor features, such as pipeline organization, branch prediction, and instruction prefetching, to speed up execution.
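To make this concrete, here is a minimal OpenMP sketch of the kind of fine-grained, floating-point-heavy loop such applications are built around; the triad-style kernel and array size are illustrative only, not taken from any particular code:

/* Fine-grained data parallelism in a floating-point kernel: each thread
   streams through a slice of a dense vector update. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 22)

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* The loop is split across cores; the unit-stride access pattern
       lets the hardware prefetcher keep the FP pipelines fed. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %f\n", a[0]);
    free(a); free(b); free(c);
    return 0;
}

Note that the regular, unit-stride pattern is exactly what the prefetcher and pipelines reward; a virtualization layer that disturbs this regularity, for example by migrating threads across cores, erodes precisely the micro-level optimizations described above.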

The other family of HPC applications is based on combinatorial algorithms, such as graph traversal, sorting, and string matching. These algorithms primarily exercise the integer units inside the microprocessor. However, they still utilize the multi-threading capabilities inside each compute node to speed up execution.
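A small sketch of such an integer-bound kernel, again threaded with OpenMP; the synthetic text and pattern are illustrative assumptions:

/* Combinatorial kernel: count occurrences of a short pattern in a text
   buffer. The work is integer comparisons, not flops, but it is
   parallelized across threads in the same way. */
#include <stdio.h>
#include <string.h>

#define TEXT_LEN (1L << 20)

int main(void) {
    static char text[TEXT_LEN];
    const char *pat = "ACGTACG";          /* illustrative pattern */
    long plen = (long)strlen(pat);
    for (long i = 0; i < TEXT_LEN; i++)
        text[i] = "ACGT"[i % 4];          /* synthetic input */

    long count = 0;
    /* Each thread scans a slice; the integer units do all the work. */
    #pragma omp parallel for reduction(+:count)
    for (long i = 0; i <= TEXT_LEN - plen; i++)
        if (memcmp(&text[i], pat, plen) == 0)
            count++;

    printf("matches: %ld\n", count);
    return 0;
}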

General-purpose and business-oriented applications may also use multi-threaded models, but there the threads serve high-level requests, such as concurrent database transactions, to gain execution speedup. Such threads can be easily mapped to virtual processors and scheduled by the OS onto the physical processors.

It is quite challenging to manage processor virtualization if accelerators, such as the Cell processor or GPGPUs, are provided in the cloud. Each process may utilize one or more GPUs to accelerate some compute-intensive parts, or kernels. The question is: how can we virtualize and schedule these accelerators? Will they be accessible directly by hosted applications? Or will a lightweight virtualization mechanism be responsible mainly for scheduling the accelerators and accounting for their use? Some research efforts, such as GViM and GFusion, are actively working on accelerator virtualization.
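As a purely hypothetical illustration of what such a lightweight mechanism might look like, the sketch below queues guest kernel launches and picks the next one by a simple fair-share policy on accumulated GPU time. Every name here is invented for illustration; this is not the GViM or GFusion API:

/* Hypothetical host-side mediation layer: guest kernel launches are
   queued, accounted per tenant, and dispatched to the physical GPU. */
#include <stdio.h>

typedef struct {
    int tenant_id;       /* which VM submitted the kernel */
    const char *kernel;  /* opaque handle to the compiled kernel */
    double gpu_ms_used;  /* accounting: accumulated GPU time so far */
} accel_request;

/* Fair-share policy: dispatch the tenant with the least GPU time. */
static int pick_next(const accel_request *q, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (q[i].gpu_ms_used < q[best].gpu_ms_used)
            best = i;
    return best;
}

int main(void) {
    accel_request queue[3] = {
        {1, "fft_kernel",   12.0},
        {2, "dgemm_kernel",  3.5},
        {3, "blast_kernel",  8.0},
    };
    int next = pick_next(queue, 3);
    printf("dispatch tenant %d: %s\n",
           queue[next].tenant_id, queue[next].kernel);
    return 0;
}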

Memory

Most HPC applications swing between memory intensity and arithmetic intensity. The more floating-point operations (flops) required per byte accessed from the system’s main memory, the higher the application’s arithmetic intensity, and vice versa. The key here is not only the amount of memory required in a single virtualized node; it is also the usage patterns relative to processing requirements. HPC applications usually use memory in a very demanding pattern: the better the algorithm is designed, the more of its execution lifetime is spent at peak memory bandwidth.
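As a concrete instance of arithmetic intensity: a DAXPY update performs two flops per element against roughly 24 bytes of memory traffic, about 0.083 flops per byte, which places it firmly on the memory-bound side:

/* DAXPY: y[i] = a*x[i] + y[i]. Per element: 2 flops (one multiply, one
   add) against roughly 24 bytes of traffic (load x[i], load y[i], store
   y[i]), i.e. about 0.083 flops/byte. Such a kernel runs at the speed
   of the memory system, not the floating-point units. */
#include <stdio.h>

static void daxpy(long n, double a, const double *x, double *y) {
    for (long i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    double x[4] = {1, 2, 3, 4}, y[4] = {0, 0, 0, 0};
    daxpy(4, 2.0, x, y);
    printf("y[3] = %f\n", y[3]);   /* prints 8.0 */
    return 0;
}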

Furthermore, as arithmetic intensity decreases, more pressure is put on the memory system: the processor spends less time computing and more time moving data to or from the system’s memory. Advanced HPC developers also often take the physical properties of the memory system into account to maximize bandwidth, such as the number of banks, the size of the memory controller buffer, latency, and maximum bandwidth.

I think standard virtualization abstracts away all of these hardware properties and assumes the standard memory usage pattern, i.e., small requests that do not form a stream of data movement. I also believe some good research can be done in the area of memory abstraction. Hypervisors need to consider multiplexing physical memory in a way that preserves most of its physical properties. This should leave more room for memory performance optimization.

Storage

HPC applications need two types of permanent storage: (1) I/O storage, and (2) scratch storage. The first type stores the input data and the final execution output, such as FFT points, input matrices, etc. The second type is used mainly for storing intermediate results, for checkpointing, or for volatile input sets. I/O data needs to live in a centralized place so that all threads or processes in a cluster have unconditional access to it. I/O reads and writes take place in bursts: assuming good load balancing, all processes read the input data sets at almost the same time and also write output concurrently. This mandates storage devices with very high bandwidth to satisfy many requests at the same time. From my observations, most HPC applications ask for relatively large chunks of data in every I/O attempt, which reduces the effect of read or write latency on these devices.
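A minimal sketch of this bursty, large-chunk pattern at the level of a single process; the file name and the 64 MB request size are illustrative assumptions:

/* Bursty, large-chunk input phase: read a process's slice of a shared
   input file in a few large requests rather than many small ones. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (64L * 1024 * 1024)   /* 64 MB per request, illustrative */

int main(void) {
    FILE *f = fopen("input_matrix.bin", "rb");   /* hypothetical input */
    if (!f) { perror("fopen"); return 1; }

    char *buf = malloc(CHUNK);
    long total = 0;
    size_t got;
    /* Large sequential reads amortize per-request latency and let the
       storage system deliver close to its peak bandwidth. */
    while ((got = fread(buf, 1, CHUNK, f)) > 0)
        total += (long)got;

    printf("read %ld bytes\n", total);
    free(buf);
    fclose(f);
    return 0;
}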

Most cloud systems I have seen provide conventional, physically centralized storage devices connected to a high-speed interconnect. This architecture might be a good one if the whole HPC system is working on a single problem at a given time. However, if multiple applications are using resources through a cloud, this physical architecture may need to be rethought. Distributed rack-aware file systems, such as the Hadoop Distributed File System (HDFS), might be a very good option in some cases. Building multiple storage devices and attaching each one to a few racks or a cabinet is another excellent option: it matches how HPC applications would utilize the cloud architecture, with each application using one or a few racks, and it makes sense to place storage near the processors. The possibilities are many and may need a separate article, so I will come back to this later.

Scratch storage, by default, should be local to each processor. Most HPC architectures provide such scratch storage space: each rack has one or more hard disks to quickly store and retrieve scratch data. This data is volatile and is usually erased when application execution ends. I think the change best worth considering is to replace these hard disks with newer SSDs to save power and speed up execution, since they may be accessed quite frequently.

Networking

In the cloud model, there are three sources of network traffic: (1) remote user communication, (2) I/O, and (3) inter-process communication. Remote user communication takes place when large data sets are sent to or received from a remote site, usually when the end user prepares the input or retrieves the results. It, too, can be optimized by distributing storage across different NAS devices. However, systems such as the Hadoop Distributed File System (HDFS) may not be the optimal solution if users read and write large chunks of data in most of their HPC applications.

Such an architecture would overload the internal interconnect and the compute nodes as well. Inter-process communication, on the other hand, is characterized by high frequency and small data chunks, so latency is a very important factor. In addition to low-latency networking equipment, this bottleneck can be avoided by placing virtual nodes as close to each other as possible, on the same physical node if feasible.
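A minimal MPI ping-pong sketch shows how this small-message latency is typically measured; the 8-byte payload and iteration count are illustrative. Compile with mpicc and run with two ranks:

/* Ping-pong latency micro-benchmark: rank 0 and rank 1 bounce a small
   message back and forth; the average round trip exposes the latency
   that dominates fine-grained inter-process communication. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char msg[8] = {0};               /* small payload: latency-bound */
    const int iters = 10000;
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(msg, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(msg, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg round trip: %.2f us\n", 1e6 * (t1 - t0) / iters);

    MPI_Finalize();
    return 0;
}

Run on two virtual nodes placed on the same physical host versus across racks, and the difference in round-trip time makes the placement argument above tangible.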

Thus far, I have tried to pinpoint some qualitative aspects of resource usage patterns. Scheduling and virtualizing resources the same way it is done for general-purpose applications will, I think, produce disappointing results. Cloud infrastructure is still attractive when you compare its economics to building in-house HPC machines. However, cloud for HPC has to be efficient enough to reach proper performance ceilings without disappointing customers, who have likely run their HPC applications on dedicated machines at some point.

Subsequent articles, which will be featured here as part of a continuing series, will discuss some of my findings in characterizing the resource usage of specific HPC applications, such as BLAST, DGEMM, FFT, etc., on cloud infrastructure.

About the Author

Mohamed Ahmed is an assistant professor in the department of computer science and engineering at the American University in Cairo (AUC). He received his BS and MSc from AUC and his PhD from the University of Connecticut (UConn). During his master’s work he was one of the early researchers to build a component-based operating system using object-oriented technologies. He then moved to the wild world of high-performance computing (HPC), working in different sub-domains such as performance engineering, HPC applications, and cloud computing for HPC systems.

Dr. Mohamed has one provisional patent and several peer-reviewed publications in operating systems engineering, reliability, threading models, and programming models. His research interests fall broadly under HPC. His current focus is on utilizing multi-/many-core microprocessors in massively parallel systems. One of his objectives is to make HPC systems available to both researchers in other science domains and industry at a fraction of the current cost of HPC infrastructure, and ready to use in a very short time. He is currently working on porting applications and algorithms for biology, materials science, and computational chemistry to new compute acceleration architectures such as GPGPUs.

For more, please see:
 
– http://www.cse.aucegypt.edu/~mahmed/

– http://MohamedFAhmed.wordpress.com/
