November 02, 2009
Grid computing was born in academia and was originally designed to support scientific and research computing. In contrast, cloud computing has a business background and is designed to enable the delivery of scalable Web applications.
The BEinGRID project has looked into how grid computing can serve business (and has run several successful business experiments proving the proposition), but what about the reverse question: is the cloud useful for academia? Can it effectively run scientific codes, such as those found in climate modelling, fluid dynamics or molecular physics simulations, which have traditionally required supercomputers?
On the face of it, cloud services offer a compelling, simple and relatively cost-effective HPC proposition -- just pay for as many CPUs as you want, when you want them. Of course, the truth isn't quite as simple as that.
Take a look at the Google Cloud offering, App Engine. Users access App Engine through an API, which places quite a lot of restrictions on the code that can be run, including:
- code must be written in Python or Java;
- each request must complete within 30 seconds;
- applications cannot write to the local filesystem (persistent data must go in the datastore);
- applications cannot open arbitrary network sockets or spawn background processes;
- CPU time, bandwidth and storage are all subject to quotas.
For full details, see the App Engine documentation. Note that there are different quotas for the free and billed service, and that it is possible to negotiate increases to the quotas.
This doesn't make scientific computing impossible, but it does put in place a lot of barriers. It would be interesting to see what could be achieved computationally, given the above restrictions, by an academic research group that chose to use App Engine. However, the bottom line is that Google App Engine is more suited to creating dynamic web applications, such as photo and document editing tools, than to processing long-lived scientific computations.
Amazon's offering, EC2, is a lot more promising. Through virtualisation, EC2 gives users much more access to, and control over, the system: they are free to install whatever software and applications they need.
Users provide "virtual images", instances of which can be launched at any time and will normally be running in under 10 minutes. By default, only 20 instances (per region) can normally be launched, but users can apply to increase this limit, potentially allowing thousands of instances to be launched. Amazon also supports Hadoop, Condor and OpenMPI for batch/parallel processing. The Data Wrangling blog has some in-depth information on using Amazon to set up an MPI cluster.
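The 20-instances-per-region default means a large job may need to be spread across regions, or a limit increase requested up front. As a rough illustration (the 20-instance figure comes from the text above; the region names and helper function are hypothetical, and a real launch would go through a library such as boto), a simple allocation planner might look like:

```python
# Sketch: split a large EC2 instance request across regions so that no
# single region exceeds the default 20-instances-per-region limit.
# Region names and function names are illustrative only, not any
# official Amazon API.

DEFAULT_LIMIT = 20  # default launchable instances per region (2009 figure)

def plan_launch(n_instances, regions, limit=DEFAULT_LIMIT):
    """Return a {region: count} allocation, or raise if the default
    limits cannot accommodate the request."""
    if n_instances > limit * len(regions):
        raise ValueError("request exceeds default limits; "
                         "apply to Amazon for an increase")
    plan = {}
    remaining = n_instances
    for region in regions:
        take = min(limit, remaining)
        if take:
            plan[region] = take
        remaining -= take
    return plan

# Example: a hypothetical 50-node MPI cluster spread over three regions.
print(plan_launch(50, ["us-east-1", "us-west-1", "eu-west-1"]))
# → {'us-east-1': 20, 'us-west-1': 20, 'eu-west-1': 10}
```

Once the instances are up, a tool such as OpenMPI or Condor (both mentioned above) would handle distributing the actual work across them.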
For data storage, the Amazon S3 service can be used to store the enormous amounts of data produced by some scientific applications. Access to the data is controlled via Access Control Lists (ACLs) and data is encrypted during transmission using SSL. Users are encouraged to encrypt any sensitive data being stored in S3. It is important to note that Amazon does not guarantee that data will not be lost or compromised (see point 7.2 of the AWS Customer Agreement).
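Since Amazon does not guarantee data integrity, it is worth verifying uploads yourself. S3 returns an MD5-based ETag for each simple (non-multipart) object it stores, so a client can compare that against a locally computed digest. A minimal sketch, with the upload call itself omitted and the ETag value faked for illustration:

```python
import hashlib

def md5_hex(data):
    """Hex MD5 digest of a bytes payload, as S3 reports in the ETag
    for simple (non-multipart) uploads."""
    return hashlib.md5(data).hexdigest()

def verify_upload(local_data, returned_etag):
    """Compare a local digest with the ETag S3 returned after a PUT.
    S3 wraps the hex digest in double quotes, so strip them first."""
    return md5_hex(local_data) == returned_etag.strip('"')

payload = b"simulation checkpoint 0042"
etag_from_s3 = '"%s"' % md5_hex(payload)  # stand-in for the real response header
print(verify_upload(payload, etag_from_s3))  # → True
```

Encrypting sensitive data before upload, as Amazon recommends, would simply mean hashing and storing the ciphertext rather than the plaintext.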
So, it should be fairly easy to get most scientific computing codes running in parallel on EC2. But what's the performance like? There has been some research and the results are mixed. Comparing a roughly equivalent amount of CPU resources, supercomputer clusters are typically much faster at processing scientific codes, largely because they have a better interconnect (see this article by Edward Walker). However, if we include the time it takes to get the code running (i.e., to request and boot the images on EC2, or to wait in the queue on a supercomputer cluster), EC2 is likely to be faster in many cases, depending on the size of the job and the scheduling policies (as shown by Ian Foster on his blog). In the future, EC2 may offer an even more competitive service if Amazon upgrades its interconnect and processor hardware.
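The time-to-solution argument is easy to make concrete: the cloud wins whenever the cluster's queue wait exceeds EC2's boot time plus the runtime penalty from the slower interconnect. A back-of-the-envelope sketch, where all the numbers (6-hour queue, 10-minute boot, 30% slowdown) are made up purely for illustration:

```python
def time_to_solution(wait_s, runtime_s):
    """Total wall-clock time: time before the job starts plus compute time."""
    return wait_s + runtime_s

# Hypothetical 4-hour job that runs 30% slower on EC2 due to the
# weaker interconnect, versus a 6-hour queue wait on a shared cluster.
hpc = time_to_solution(wait_s=6 * 3600, runtime_s=4 * 3600)       # queue + run
ec2 = time_to_solution(wait_s=10 * 60, runtime_s=4 * 3600 * 1.3)  # boot + slower run

print("cluster: %.1f h, EC2: %.1f h" % (hpc / 3600, ec2 / 3600))
# → cluster: 10.0 h, EC2: 5.4 h
```

With these (invented) numbers the cloud finishes first despite running the code more slowly; a longer job or a shorter queue would tip the balance back towards the cluster.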
I haven't taken into account Microsoft Azure, which is still in "Community Technology Preview" at the time of writing, but it may be interesting for any .NET-based scientific codes. The offering is similar to EC2, the main difference being that users must use a supplied Windows Server 2008 VM.
With all the money and time being invested in cloud computing, it will be interesting to see the effect it has on HPC resource providers over the next decade. Will the emergence of cloud lead to a greater trust in outsourcing compute resources and a direct boost to HPC resource providers? Will there be a level of symbiosis where cloud resources can be built on top of or alongside HPC computing resources? Or will they just be direct competition? One thing's for sure: the rules of the HPC game are being questioned.
For more information on using cloud platforms for scientific computing, see the HPCcloud User Group.