High performance computing (HPC) customers love the breadth of services offered by AWS and the flexibility of the cloud for addressing their computational challenges. AWS gives you the opportunity to innovate quickly and accelerate your workflow thanks to virtually unlimited capacity. For more information, see the following posts:
- Natural language processing at Clemson University – 1.1 million vCPUs & EC2 Spot Instances
- Creating a 1.3 Million vCPU Grid on AWS using Amazon EC2 Spot Instances and TIBCO GridServer
- Western Digital HDD Simulation at Cloud Scale – 2.5 Million HPC Tasks, 40K EC2 Spot Instances
However, building an HPC system can be complex, which is one reason AWS created AWS ParallelCluster. This tool offers a simple way to create an automatically scaling HPC system in the AWS Cloud, using services such as Amazon EC2, AWS Batch, and Amazon FSx for Lustre. AWS ParallelCluster configures the compute and storage resources necessary to run your HPC workloads, and you can tune its configuration to your specific storage, compute, and network requirements.
You can manage an HPC cluster with AWS ParallelCluster through a simple set of commands: create, update, and delete. The compute portion of the cluster automatically scales when jobs have to be processed. As a result, you only pay for what you use. Additionally, to provide shared storage to the cluster, AWS ParallelCluster can use Amazon FSx for Lustre as a high-performance file system that can be accessed by all compute nodes.
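For example, the full lifecycle of a cluster can be driven from the ParallelCluster command line. Here is a minimal sketch, assuming a cluster named my-cluster and the default config location (exact flags vary by ParallelCluster version):

```bash
# Create a cluster from a config file (cluster name "my-cluster" is a placeholder)
pcluster create my-cluster --config ~/.parallelcluster/config

# Apply configuration changes to a running cluster
pcluster update my-cluster --config ~/.parallelcluster/config

# Tear the cluster down when you're finished to stop incurring charges
pcluster delete my-cluster
```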
In this post, I show how easy it is to spin up a fully functioning compute cluster with a shared, high-performance file system in a matter of minutes using AWS ParallelCluster and Amazon FSx for Lustre. Once your cluster is up, you'll use the SGE scheduler (pre-installed with AWS ParallelCluster) to run a quick I/O benchmark job that tests file system throughput, a common performance metric for sizing HPC clusters, and verifies that the compute cluster can achieve the expected level of file system performance. If you're new to AWS ParallelCluster, review this post detailing how to get started before continuing.
AWS recently announced AWS ParallelCluster support for FSx for Lustre. AWS ParallelCluster is a cluster-management tool for AWS that makes it easy for scientists, researchers, and IT administrators to deploy and manage HPC clusters in the AWS Cloud. This tool also facilitates both quick-start proofs of concept (POCs) and production deployments.
FSx for Lustre provides a high-performance file system that makes it easy to process data stored on Amazon S3 or on premises. With FSx for Lustre, you can launch and run a file system that provides sub-millisecond access to your data and lets you read and write data at up to hundreds of gigabytes per second of throughput and millions of IOPS. By integrating AWS ParallelCluster with FSx for Lustre, HPC customers can now spin up clusters with thousands of compute instances and shared storage that scales performance for the most demanding workloads, so compute instances never wait for data and you get to answers faster.
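As a sketch of what this integration looks like in practice, a ParallelCluster configuration (2.x-style format) can reference an FSx for Lustre section; the section name, mount path, and capacity below are illustrative placeholders:

```ini
[cluster default]
# ... other cluster settings (VPC, instance types, scheduler) ...
fsx_settings = myfsx

[fsx myfsx]
# Mount point visible to all compute nodes
shared_dir = /fsx
# Capacity in GiB; FSx for Lustre allocates in 3,600-GiB (3.6-TiB) increments
storage_capacity = 3600
```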
To test the performance of your FSx for Lustre file system, you can use IOR, a commonly used HPC benchmarking tool. IOR provides a number of options that let you approximate the I/O characteristics of real workloads; however, I always recommend benchmarking with your own workload to get full visibility into its performance characteristics.
In this post, you use IOR to generate a sequential read/write I/O pattern that tests the high-throughput capabilities of FSx for Lustre. Sizing therefore begins at the storage layer; from there, you determine how many compute nodes are needed to drive your desired I/O performance.
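As a sketch, an IOR run that exercises this sequential pattern might look like the following; the process count, transfer and block sizes, and the /fsx mount path are assumptions you'd adapt to your own cluster:

```bash
# Sequential write (-w) then read (-r), 1 MiB transfers (-t), 4 GiB per process (-b),
# one file per MPI process (-F), targeting the FSx for Lustre mount at /fsx
mpirun -np 256 ior -w -r -t 1m -b 4g -F -o /fsx/ior-testfile
```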
When sizing FSx for Lustre performance, a good rule of thumb is to plan for 200 MB/s of throughput per TiB allocated. FSx for Lustre allocates file systems in 3.6-TiB increments, each providing around 720 MB/s. A 39.6-TiB file system should therefore give you around 7.92 GB/s of total file system throughput.
After you know the storage performance target, you can figure out how many compute nodes you need. Each compute node communicates with FSx for Lustre over the network, which makes the workload throughput-bound; network bandwidth is therefore the primary sizing factor for the compute nodes.
Amazon EC2 instances provide different levels of network bandwidth, depending on the instance type. In this case, I selected the c4.4xlarge instance type because it is widely available in all Regions and, in testing, provides about 500 MB/s of peak network bandwidth. To reach 7.92 GB/s of aggregate throughput, you want sixteen c4.4xlarge compute nodes in your cluster.
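As a quick back-of-the-envelope check of the sizing above (the numbers come straight from this post's rule of thumb):

```bash
echo "3.6 * 200" | bc -l    # 720 MB/s per 3.6-TiB increment
echo "39.6 * 200" | bc -l   # 7920 MB/s = 7.92 GB/s for a 39.6-TiB file system
echo "7920 / 500" | bc -l   # 15.84 -> round up to 16 c4.4xlarge nodes at ~500 MB/s each
```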
You must have the AWS CLI installed and configured on a system with Python and pip. This could be your laptop, a desktop computer, or an Amazon EC2 bastion host running in AWS.
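If you need to set these up, the installation is roughly a pip install away; the second command prompts you for credentials, a default Region, and an output format:

```bash
# Install the AWS CLI and AWS ParallelCluster from PyPI
pip install awscli aws-parallelcluster

# Set up credentials and a default Region
aws configure
```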