Bringing HPC in the Cloud Tutorials to Argentina
The first HPC in the cloud tutorial ever held in Argentina took place on July 24th and 25th in Mendoza, during the HPCLatAm HPC School. Dr. Jose Luis Vazquez-Poletti shared his insights on this historic tutorial.
When Prof. Carlos Garcia Garino from Universidad Nacional de Cuyo (Mendoza, Argentina), whom I had the pleasure to meet at HPC 2012, offered me the opportunity to give a tutorial on HPC in the cloud, I didn’t realize the impact of this invitation at first.
Basically, this tutorial was to take place during the School that precedes the VI Latin American Symposium on High Performance Computing. In this School, different HPC-related tutorials are given, ranging from CUDA and OpenMP basics to Scientific Computing with Python. About fifty students, selected from two hundred applicants from all over Latin America, attended the whole School.
The real challenge behind this HPC in the Cloud tutorial was that it would be the first ever in Argentina. Fortunately, I could count on the help of Professor Carlos Garcia Garino himself and his team (Dr. Cristian Mateos, Elina Pacini and Pablo Vargas).
Students came from different scientific backgrounds and degrees. We had computer scientists, of course, but many were scientists from other fields who had worked on interesting computational problems and wanted to harness an HPC cloud infrastructure. That was our biggest advantage, as explained below.
The tutorial ran for sixteen hours spread over two days, with the modules structured as follows:
– Day 1: Introduction to cloud computing. Private clouds.
– Day 2: Public clouds. HPC in the cloud.
Although there was much theory to cover, particularly in the introductory module, the main idea was to rely on examples and hands-on sessions.
The private clouds module relied on OpenNebula. The students first learned the basics with a sandbox image that could be deployed on their own computers with VirtualBox; images for other hypervisors are also available at the OpenNebula Marketplace. This image contains a simple, ready-to-use OpenNebula installation that allows the deployment of ttylinux machines via QEMU. As I explained to the students, “this is a clear Inception example, as it’s virtualization within virtualization”.
After learning the command line interface basics and how to use Sunstone (the web user interface), the students moved to an OpenNebula production installation at a local cluster, where they were able to expand their playground.
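A first session with the OpenNebula command line interface might look roughly like the sketch below. This is a hypothetical walkthrough, not taken from the tutorial itself: the template name and exact subcommands depend on the sandbox image and OpenNebula version (older releases use `onevm shutdown`/`onevm delete` instead of `onevm terminate`).

```shell
# Hypothetical session inside the OpenNebula sandbox; the "ttylinux"
# template name is an assumption -- check "onetemplate list" first.
onetemplate list                  # show the VM templates registered in the sandbox
onetemplate instantiate ttylinux  # launch a ttylinux VM from its template
onevm list                        # watch the VM go from PENDING to RUNNING
onevm terminate <VM_ID>           # clean up when finished (version-dependent)
```

The same operations are available graphically through Sunstone, which is what makes the sandbox a good bridge between the CLI basics and the production installation.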
The public cloud module relied on Amazon Web Services and, in particular, on EC2. Students learned how to deploy instances for installing given software by setting up a very simple wiki system, and how to create EBS volumes to make their data persistent. All of this was done from the web control panel. Along the way, we could draw parallels between private and public cloud concepts.
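For reference, the same steps the students performed in the web console can be expressed with the AWS CLI. This is a sketch, not part of the tutorial: the AMI ID, instance type, zone, and volume/instance IDs below are placeholders.

```shell
# Launch an instance, then create and attach an EBS volume for persistent data.
# All IDs are placeholders; substitute real ones from your account.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --count 1
aws ec2 create-volume --size 8 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
```

Once attached, the volume appears as a block device inside the instance and survives instance termination, which is exactly the persistence property the wiki exercise demonstrated.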
Finally, the HPC in the cloud module came. For this we used StarCluster, a solution for deploying computing clusters on Amazon EC2 in a very simple and dynamic way.
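A typical StarCluster workflow looks like the sketch below. The cluster name is an assumption, and the commands presume AWS credentials and a cluster template have already been set in `~/.starcluster/config`.

```shell
# Hypothetical StarCluster session; "hpclab" is a made-up cluster name.
starcluster start hpclab        # boot a cluster of EC2 instances from the config template
starcluster sshmaster hpclab    # log into the cluster's master node
starcluster addnode hpclab      # grow the cluster by one node on demand
starcluster terminate hpclab    # shut everything down when done
```

The ability to add nodes with a single command is what later let the students resize their clusters while hunting for the best performance.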
Then, Professor Victorio Sonzogni from Universidad Nacional del Litoral and CONICET (Argentina’s National Scientific and Technical Research Council) gave the students a talk on a computational fluid dynamics problem that can be solved with OpenMPI. This gave them “the thrills” of working on a real-life problem and a feel for what the first interview with the end user is like.
I divided the students into two groups and had them create a cluster to execute a very simple OpenMPI application, then let them move on to the CFD one. They expanded their cluster at will to see how an optimal solution in terms of performance was reached.
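A "very simple OpenMPI application" in this spirit might be the classic hello-world sketched below, compiled and launched on the cluster's master node. This is an illustration, not the code used in the tutorial; StarCluster's AMIs ship with OpenMPI and share `/home` over NFS, and the process count here is arbitrary.

```shell
# Write, compile and run a minimal MPI program (illustrative example only).
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi   # each of the 4 ranks prints one greeting
```

With a shared home directory, the same binary is visible from every node, so scaling the run across the cluster is just a matter of raising `-np` and supplying a hostfile.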
Running their own codes was strongly encouraged. I even asked them to “break the ice” with their colleagues at the parallel HPC tutorials in order to obtain more example codes. I guess this is the part of the tutorial they enjoyed the most, as they saw that they could easily provide an HPC solution to real problems thanks to the new technology they had just learned.
Almost at the end of this last session I made them realize that they had spent more time on the code than on setting up the infrastructure. This is the magic of the cloud: “the infrastructure is the one that adapts to the application”.
The feedback was awesome. In the following days I was still receiving questions from my students, and even from those who had attended the other tutorials. This is a great sign for me, as it seems we managed to plant the seed of HPC in the cloud in the minds of “the next generation of Latin-American cloudshapers”.
About Jose Luis Vazquez-Poletti
Dr. Jose Luis Vazquez-Poletti is an Assistant Professor in Computer Architecture at Complutense University of Madrid (UCM, Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group.
He is (and has been) directly involved in EU funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.
From 2005 to 2009 his research focused on porting applications onto Grid Computing infrastructures, an activity that put him “where the real action was”. These applications pertained to a wide range of areas, from Fusion Physics to Bioinformatics. During this period he acquired the skills needed to profile applications and make them benefit from distributed computing infrastructures. He also shared these skills in many training events organized within the EGEE Project and similar initiatives.
Since 2010 his research interests lie in different aspects of Cloud Computing, but always with real-life applications in mind, especially those pertaining to the High Performance Computing domain.