Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

July 14, 2014

HPC2014: From Clouds and Big Data to Exascale and Beyond

Dr. Jose Luis Vazquez-Poletti

The International Advanced Research Workshop has become a leading reference after being held for more than 20 years. You can blame Lucio Grandinetti (Full Professor at the University of Calabria) and his team, who started it all and have managed, year after year, to bring together the best minds in computing.

This edition’s motto was very tempting: “From Clouds and Big Data to Exascale and Beyond”. Needless to say, the quality of the talks was very high; nevertheless, I would like to share my own selection with you.

The workshop couldn’t have started better than with the insights from Jack Dongarra, Ian Foster and Geoffrey Fox. Dongarra explained how HPC has changed over the last 10 years and how we should prepare for the next leap. For instance, he proposed some specific modifications to the Linpack benchmark, so I guess the Top500 competition will get even more interesting in the future.

Science in one area can build on materials from processes in other areas. Foster identified the overall process as “networking materials data” and explained that it consists mainly of publishing and discovering data; linking instruments, computations and people; and organizing existing software to facilitate understanding and reuse.

And speaking of reuse, or at least adaptation, Fox suggested that HPC should be unified with the Apache software stack, which is already widely used in cloud computing. After reformulating the famous Berkeley dwarfs and the NAS Parallel Benchmarks in a “big data style”, he proposed a high-performance Java (Grande) runtime.

The industry has much to say as well. Frank Baetke brought the voice of HP and showcased its current HPC portfolio. The SL series will see a great improvement with new GPU and coprocessor architectures, but not without paying attention to power and cooling efficiency, allowing extended energy recovery rates.

David Pellerin highlighted the importance of HPC in the cloud for research computing in recent years and how it has enabled the convergence with big data analytics. Scalability in the cloud provides large amounts of HPC power, but it also requires some thought on aspects such as application fault tolerance, cluster right-sizing and data storage architectures. He provided some use cases bearing the “AWS HPC seal of quality”.

HPC is not only about general-purpose machines. A case in point is Anton, a massively parallel special-purpose machine that accelerates molecular dynamics simulations by orders of magnitude compared with the previous state of the art. Mark Moraes explained the interesting challenges behind its operation and how they were tackled at the software level, along with valuable lessons for achieving efficient scaling.

Thomas Sterling brought a revolutionary proposition: moving away from the basic logic, storage and communication building blocks as we know them. He stated that current architectures are dominated by traditional forms and assumptions inherited from the von Neumann age. If we want to move to the next level, we have to adopt advanced strategies and technologies (cellular architectures, processor-in-memory, systolic arrays…). And, as if that weren’t enough, Sterling anticipated the limitations, imposed by fundamental physics, of the so-called “Neo-Digital age”.

Moving back to the cloud, Dana Petcu explained how heterogeneity can be good and bad at the same time. It favors cloud service providers, allowing them to be competitive in a very dynamic market, especially by exposing unique solutions. On the other hand, it hinders interoperability between services and application portability. Petcu discussed four existing approaches in which she has been involved: mOSAIC for uniform interfaces, MODAClouds for domain-specific languages, SPECS for users’ quality of experience and HOST for the usage of cloud HPC services.

Wolfgang Gentzsch gave an overview of the two years of his famous UberCloud Experiment. In fact, it was officially announced at the previous edition of the workshop (Tom Tabor himself helped craft the announcement and Geoffrey Fox was the first to register). I had the honour of participating in the first wave of experiments, and the success of this project (152 experiments and over 1,500 organizations!) is a credit to the hard work of the organizers.

Tracking and managing big data is a big data problem in itself. This was the starting point for the Digital Asset Management System presented by Carl Kesselman, which frees up more time for the knowledge extraction process. The architecture of the system (SaaS), “the iPhoto of big data” according to Kesselman, was explained along with an interesting biomedical science use case.

By the way, I contributed to the workshop too. This year I presented two use cases involving “clouds for clouds”, that is, cloud computing for meteorology. In particular, I explained how efforts made in the context of Martian atmospheric research are benefiting two specific areas on Earth: the cost optimization of weather forecasting in Spain and the proper scaling of agricultural weather sensor network processing in Argentina.

Cetraro’s International Advanced Research Workshop did it again. Considering the quality of the contributions and taking into account Grandinetti’s words from two years ago, that “the workshop is evolving into Fine Arts”, I’m pretty sure that it’s evolving indeed… into the “Fine Arts of Cloud, HPC and Big Data”.

About the Author

Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (UCM, Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group.

He is (and has been) directly involved in EU-funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.

From 2005 to 2009 his research focused on application porting onto Grid Computing infrastructures, an activity that put him “where the real action was”. These applications spanned a wide range of areas, from fusion physics to bioinformatics. During this period he acquired the skills needed for profiling applications and making them benefit from distributed computing infrastructures. Additionally, he shared these skills in many training events organized within the EGEE Project and similar initiatives.

Since 2010 his research interests have centered on different aspects of Cloud Computing, always with real-life applications in mind, especially those pertaining to the High Performance Computing domain.
