StarCluster Brings HPC to the Amazon Cloud

By Justin Riley

May 18, 2010

Setting up an HPC cluster in the cloud can be daunting for new users looking to run their HPC applications there. Learning the ins and outs of the Infrastructure as a Service (IaaS) model, on top of configuring and installing a typical HPC system, is no easy task.

To use the cloud effectively, users need to automate the process of requesting and configuring new resources, and of terminating resources when they’re no longer required, without losing data. These concerns can be a challenge even for advanced users and require some level of cloud programming to get right. To improve this situation, the Software Tools for Academics and Researchers (STAR) group at MIT has created an open-source project called StarCluster that allows anyone to create and manage their own HPC clusters hosted on Amazon’s Elastic Compute Cloud (EC2) without needing to be a cloud expert.

StarCluster Configuration

One of StarCluster’s primary goals is to be simple to use and to hide as many of the cloud computing details from users as possible. When a new user runs StarCluster for the first time, an example configuration file is created that is ready to use out of the box. The user simply needs to fill in the EC2 account information and, optionally, customize the number of machines before starting a cluster. Starting a cluster with the example configuration will launch a two-machine cluster using the cheapest instance types available on EC2, letting users experiment with StarCluster without significant up-front costs.

The group of cluster-specific settings in the configuration file is known as a “cluster template”. StarCluster supports defining multiple cluster templates which can be used when launching a cluster. For example, it’s often useful to have separate templates for different cluster sizes such as a template that defines a small two-machine cluster and another template that defines a large ten-machine cluster. These templates can be specified at runtime to allow a variety of configurations to be used when starting a cluster.
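
As a rough sketch of what this looks like, the configuration uses a simple INI-style format. The setting names below follow StarCluster’s documented options but may differ slightly between versions, and all values are placeholders:

    [aws info]
    # Amazon Web Services credentials
    AWS_ACCESS_KEY_ID = <your access key>
    AWS_SECRET_ACCESS_KEY = <your secret key>
    AWS_USER_ID = <your account id>

    [cluster smallcluster]
    # small two-machine template using a cheap instance type
    KEYNAME = mykey
    CLUSTER_SIZE = 2
    NODE_INSTANCE_TYPE = m1.small

    [cluster largecluster]
    # larger ten-machine template for bigger runs
    KEYNAME = mykey
    CLUSTER_SIZE = 10
    NODE_INSTANCE_TYPE = c1.xlarge

The template name (e.g. “smallcluster”) is what gets referenced when starting a cluster.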

Starting an HPC Cluster on EC2

Once the configuration file has been created, starting a cluster is as simple as running “starcluster start mynewcluster” at the command line. This command will first verify that all settings in the configuration file are valid and are likely to create a working system. Once the settings in the configuration file have been verified, the “start” command creates a new cluster based on these settings with a tag-name of “mynewcluster” on EC2.

Once the “start” command has finished, the user can log in to the “master” machine as root by running “starcluster sshmaster mynewcluster”. At this point the user has the (root) keys to the cluster, just as they would with their own local resources.

StarCluster also has the ability to create multiple HPC clusters. Running the same “start” command again with a different tag-name will launch another HPC cluster in the cloud using the same settings as the previous run. If you’ve defined additional cluster templates in the configuration file, these can optionally be used to specify a different group of settings when starting the next cluster.

Once the user has finished using a cluster, they simply pass its tag-name to StarCluster’s “stop” command to shut it down. For the “mynewcluster” example above, the command would be “starcluster stop mynewcluster”. The “stop” command shuts down the entire cluster and ends the billing period.
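
Putting the commands above together, a typical session looks roughly like this (the -c/--cluster-template flag for selecting a non-default template is shown as commonly documented for StarCluster; check “starcluster start --help” on your version):

    # start a cluster tagged "mynewcluster" using the default cluster template
    starcluster start mynewcluster

    # log in to the master node as root
    starcluster sshmaster mynewcluster

    # start a second cluster from a different template (e.g. "largecluster")
    starcluster start -c largecluster myothercluster

    # shut down a cluster and end the billing period
    starcluster stop mynewcluster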

Automated HPC Cluster Configuration

StarCluster automatically configures each machine with the appropriate networking settings needed to communicate with the rest of the cluster. On top of this, StarCluster also fully configures password-less SSH for both the root user and a normal user on the cluster. Password-less SSH allows a user to log in between machines in the cluster without entering a password. This is useful when administering the machines in the cloud and is also required for OpenMPI communication.
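
For example, once logged in to the master you can hop to a worker node without being prompted for a password (worker names such as node001 are StarCluster’s default naming convention and may differ in your setup):

    # run a command on a worker node from the master; no password prompt should appear
    ssh node001 hostname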

Most clusters have some form of queuing system for submitting and load-balancing many computationally intensive tasks, or “jobs”, and StarCluster is no exception. Out of the box, StarCluster installs and configures the open-source version of the Sun Grid Engine (SGE) queuing system for running distributed and parallel jobs on the cluster. A parallel queue is also configured by default, enabling SGE to monitor and account for parallel tasks that use more than one machine in a single job.
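
For instance, an ordinary batch job can be submitted to SGE from the master node with the standard qsub and qstat commands (the job script name here is hypothetical):

    # submit a serial job script to the queue, running it in the current directory
    qsub -cwd myjob.sh

    # check the status of queued and running jobs
    qstat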

Many parallel tasks are commonly written using the Message Passing Interface (MPI). For MPI users, StarCluster includes an SGE-aware OpenMPI installation that provides tight integration between the SGE job scheduler and MPI applications. This integration removes the need for users to specify a list of hosts to use when running an MPI job. Rather, OpenMPI will automatically fetch the host info it needs directly from SGE and begin execution. This allows all machines involved in the MPI calculation to be correctly accounted for by the queuing system.
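
A sketch of what this looks like in practice (StarCluster’s default SGE parallel environment is commonly named “orte”, though the name can differ; the program name is hypothetical):

    # ask SGE for 8 slots in the "orte" parallel environment and launch with OpenMPI;
    # no hostfile or -np argument is needed -- OpenMPI obtains them from SGE
    qsub -cwd -pe orte 8 -b y mpirun ./my_mpi_program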

Sharing files between machines without manually copying files around is a requirement for most HPC systems. Typically this is done using a shared folder via the network file system (NFS). StarCluster automatically configures /home on each “worker” machine of the cluster to be NFS-shared from the “master” machine. This allows users to see their files on any machine in the cluster and also provides a globally accessible place for jobs to read input data and write their finished results.
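
A quick way to see this from any worker node (standard tools; output will vary):

    # /home on a worker is an NFS mount served by the master;
    # the filesystem column should show something like master:/home
    df -h /home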

The StarCluster Amazon Machine Image (AMI)

Amazon Machine Images are used by EC2 to load an entire operating system along with various applications, libraries, and data onto a newly requested virtual machine. Machine images are publicly available for just about any Linux distribution, Solaris, and even Microsoft Windows. New images can be created with custom software configurations by launching a new virtual machine from an existing AMI, installing your new software, and then running an AMI creation process on the machine to create a new AMI.

StarCluster comes with a publicly available custom-tailored AMI, in both 32-bit and 64-bit flavors, that contains the entire OS and software configuration needed for an HPC cluster on Amazon. The StarCluster AMI is based on Ubuntu Linux 9.10 and includes the Sun Grid Engine queuing system (open-source edition), the network file system (NFS), and OpenMPI, along with common development tools and libraries for compiling new software from source. The StarCluster AMI also includes a custom-compiled installation of the Automatically Tuned Linear Algebra Software (ATLAS) and Linear Algebra PACKage (LAPACK) libraries that has been optimized for the larger high-CPU instance types on EC2. For numerical Python users, the AMI contains both NumPy and SciPy installations that have been custom compiled against the optimized LAPACK/ATLAS installations. These optimized libraries provide a significant performance improvement when running linear algebra routines in the cloud.
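
You can verify which BLAS/LAPACK libraries NumPy was built against directly on the AMI using standard NumPy introspection (output will vary):

    # print the build configuration of the installed NumPy, which lists
    # the ATLAS/LAPACK libraries it was compiled against
    python -c "import numpy; numpy.show_config()"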

Of course, StarCluster does not limit you to only these software installations. The StarCluster AMIs can easily be extended with your own software to create a brand-new AMI tailored for a specific need. To simplify the AMI creation process StarCluster provides a “createimage” command that will automatically create a new AMI from a running Amazon EC2 virtual machine in the cloud. This allows you to launch a single virtual machine, install your software, and easily create a new AMI from this machine. Using a new customized AMI with StarCluster is as simple as updating the configuration file with the new AMI’s identifier.
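
A hedged sketch of that workflow (the exact arguments to “createimage” vary between StarCluster versions and the identifiers below are placeholders; consult “starcluster createimage --help”):

    # with a single instance already running and your software installed on it,
    # create a new AMI from that instance
    starcluster createimage i-xxxxxxxx my-custom-ami my-s3-bucket

    # then point a cluster template at the new image in the configuration file:
    #   NODE_IMAGE_ID = ami-xxxxxxxx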

Using EBS Volumes for Persistent Storage

Amazon also provides a service called Elastic Block Store (EBS) which allows users to create virtual block storage volumes that are similar in functionality to a USB pen-drive. These volumes can be anywhere from 1GB to 1TB in size and can be attached to a single virtual machine in the cloud at a time. The benefit of using these volumes is that any data written to EBS is automatically stored and persisted in the cloud even after all virtual machines have been terminated. This means the next time you start a cluster and attach the EBS volume, all of your data will be available as it was the last time you launched a cluster. Another benefit of EBS volumes is that they’re easy to snapshot and duplicate, which allows for backing up large amounts of data in the cloud.

StarCluster has the ability to utilize Amazon’s EBS volumes to provide persistent data storage for a given cluster. To use EBS with StarCluster you must first create an EBS volume. For new users, this process is simplified by using StarCluster’s “createvolume” command. This command automates the process of creating, partitioning, and formatting a new EBS volume.
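
A minimal sketch of the command (the size and availability zone are placeholders; check “starcluster createvolume --help” for the exact arguments in your version):

    # create, partition, and format a new 50GB EBS volume in a given availability zone
    starcluster createvolume 50 us-east-1c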

Using a new volume with StarCluster involves adding additional volume settings to the configuration file. These settings specify the volume to use and the location on the cluster’s file system to attach the volume. This file system location is then NFS-shared from the “master” machine to all “worker” machines. StarCluster does not limit you to using a single EBS volume. Multiple EBS volumes can be configured, attached, and shared on the cluster. This allows up to several terabytes of data to be stored on the cluster.
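
A rough sketch of those settings (section and option names follow StarCluster’s configuration format, but the volume identifier and mount path are placeholders):

    [volume mydata]
    VOLUME_ID = vol-xxxxxxxx
    MOUNT_PATH = /data

    # then reference the volume from a cluster template:
    #   VOLUMES = mydata

Getting Started with StarCluster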

StarCluster is open-source software and can be downloaded for free from the StarCluster website at http://web.mit.edu/starcluster or from the Python Package Index (PyPI) at http://pypi.python.org/pypi/StarCluster.

UPDATE: We now have a video screencast of StarCluster in action that can be viewed here.

About the Author

Justin Riley is a software developer for the Software Tools for Academics and Researchers (STAR) group at the Massachusetts Institute of Technology (MIT). The STAR group seeks to bridge the divide between scientific research and the classroom by collaborating with faculty from MIT and other educational institutions to design software that explores core scientific research concepts. The STAR group works out of the Office of Educational Innovation and Technology (OEIT) under the Dean for Undergraduate Education (DUE) at MIT.

Justin has been developing with the Amazon cloud for the past three years and has successfully used the cloud to support the “Introduction to Modeling and Simulation” and “Intro to Parallel Programming for Multicore Machines using OpenMP and OpenMPI” courses at MIT. His work with StarCluster came directly from the need to provide a sustainable solution to the issues associated with bringing computational resources into the classroom. Justin created StarCluster to automate the process of locating, configuring, and maintaining computational resources without needing to be a 24/7 system administrator and without having to make a physical appearance to address potential hardware and software issues.
