HPC2014: From Clouds and Big Data to Exascale and Beyond

By Dr. Jose Luis Vazquez-Poletti

July 14, 2014

The International Advanced Research Workshop has become a leading reference after being held for more than 20 years. You can blame Lucio Grandinetti (Full Professor at the University of Calabria) and his team, who started it all and have managed to bring together, year after year, the best of the computing think tank.

This edition’s motto was very tempting: “From Clouds and Big Data to Exascale and Beyond”. Needless to say, the quality of the talks was very high; even so, I would like to share my own selection with you.

The workshop couldn’t have started better, with insights from Jack Dongarra, Ian Foster and Geoffrey Fox. Dongarra explained how HPC has changed over the last 10 years and how we should prepare for the next leap. For instance, he proposed some specific modifications to the Linpack Benchmark, so I guess the Top500 competition will get even more interesting in the future.
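
To give a feel for why such benchmark modifications matter (a direction in the spirit of what later became the HPCG effort), here is a minimal, purely illustrative Python sketch: it contrasts the dense solve that Linpack times, which rewards peak floating-point throughput, with a sparse conjugate-gradient kernel, which is limited by memory traffic instead. The problem sizes and the GFLOP/s estimates are assumptions for illustration, not benchmark rules.

```python
# Illustrative sketch (not an official benchmark): contrast a Linpack-style
# dense solve with an HPCG-style sparse CG kernel.
import time
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000

# Linpack-style workload: solve a dense system, roughly (2/3)*n^3 flops,
# dominated by floating-point throughput.
A_dense = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, well conditioned
b = np.random.rand(n)
t0 = time.time()
np.linalg.solve(A_dense, b)
t_dense = time.time() - t0
print(f"dense solve: ~{(2/3) * n**3 / t_dense / 1e9:.2f} GFLOP/s")

# HPCG-style workload: conjugate gradient on a sparse stencil-like matrix,
# dominated by sparse matrix-vector products, so memory bandwidth limits speed.
A_sparse = diags([-1, 2, -1], [-1, 0, 1], shape=(n * n, n * n), format="csr")
b2 = np.ones(n * n)
t0 = time.time()
x, info = cg(A_sparse, b2, maxiter=50)
t_sparse = time.time() - t0
# Rough estimate: ~2*nnz flops per matvec, about one matvec per CG iteration.
print(f"sparse CG (50 iters): ~{A_sparse.nnz * 2 * 50 / t_sparse / 1e9:.2f} GFLOP/s")
```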

Science in one area can build on materials and data produced by earlier work in other areas. Foster identified the overall process as “networking materials data” and explained that it consists mainly of publishing and discovering data; linking instruments, computations and people; and organizing existing software in order to facilitate understanding and reuse.

And speaking of reuse, or at least adaptation, Fox suggested that HPC should be unified with the Apache software stack, which is already widely used in cloud computing. After reformulating the famous Berkeley dwarfs and the NAS Parallel Benchmarks in a “big data style”, he proposed a high-performance Java (Grande) runtime.
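
To make the “big data style” reformulation more concrete, here is a small hypothetical sketch (not Fox’s actual formulation, and in Python rather than Java Grande for brevity): the same simple kernel is written once as a tight in-memory HPC-style computation and once as a map/reduce pipeline of the kind the Apache stack runs.

```python
# Hypothetical sketch: one kernel (a histogram) in two styles.
from collections import Counter
from functools import reduce
import numpy as np

data = np.random.randint(0, 16, size=1_000_000)

# HPC style: a vectorized, in-memory kernel.
hist_hpc = np.bincount(data, minlength=16)

# "Big data" style: map each chunk to partial counts, then reduce by merging,
# which is how the same pattern is expressed on an Apache-style runtime.
chunks = np.array_split(data, 8)
partials = map(lambda chunk: Counter(chunk.tolist()), chunks)   # map phase
merged = reduce(lambda a, b: a + b, partials, Counter())        # reduce phase

# Both formulations produce the same answer.
assert all(hist_hpc[k] == merged[k] for k in range(16))
```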

Industry has much to say as well. Frank Baetke brought the voice of HP and showcased its current HPC portfolio. The SL series will see a great improvement with new GPU and coprocessor architectures, without losing sight of power and cooling efficiency, allowing extended energy recovery rates.

David Pellerin highlighted how important HPC in the cloud has become for research computing in recent years and how it has enabled its convergence with big data analytics. Scalability in the cloud provides large amounts of HPC power, but it also requires some thought on aspects such as application fault tolerance, cluster right-sizing and data storage architectures. He provided some use cases bearing the “AWS HPC seal of quality”.
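
Cluster right-sizing is, at heart, simple arithmetic. The sketch below, with entirely hypothetical prices and a hypothetical serial fraction, uses Amdahl’s law to show the trade-off Pellerin alluded to: beyond a certain node count, runtime barely improves while the cloud bill keeps growing.

```python
# Right-sizing sketch under assumed numbers: Amdahl's law plus a per-node-hour
# price. The serial fraction, baseline runtime and price are hypothetical.
SERIAL_FRACTION = 0.05      # fraction of the job that does not parallelize
BASE_HOURS = 100.0          # single-node runtime in hours
PRICE_PER_NODE_HOUR = 0.50  # assumed on-demand price (USD)

def runtime_hours(nodes: int) -> float:
    """Amdahl's law: serial part plus the parallelizable remainder."""
    return BASE_HOURS * (SERIAL_FRACTION + (1 - SERIAL_FRACTION) / nodes)

def cost_usd(nodes: int) -> float:
    """Total bill: runtime times node count times hourly price."""
    return runtime_hours(nodes) * nodes * PRICE_PER_NODE_HOUR

for nodes in (1, 4, 16, 64, 256):
    print(f"{nodes:4d} nodes: {runtime_hours(nodes):7.2f} h, ${cost_usd(nodes):8.2f}")
# Runtime flattens near the 5-hour serial floor while cost keeps climbing,
# which is exactly the right-sizing decision a cloud user has to make.
```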

HPC is not only about general-purpose machines. A case in point is Anton, a massively parallel special-purpose machine that accelerates molecular dynamics simulations by orders of magnitude compared with the previous state of the art. Mark Moraes explained the interesting challenges behind its operation and how they were tackled at the software level, along with valuable lessons for achieving efficient scaling.

Thomas Sterling brought a revolutionary proposition: the avoidance of the traditional basic logic, storage and communication building blocks. He stated that current architectures are dominated by forms and assumptions inherited from the von Neumann age. If we want to move to the next level, we have to adopt advanced strategies and technologies (cellular architectures, processor-in-memory, systolic arrays…). And, as if his proposition weren’t enough, Sterling also laid out the anticipated limitations, imposed by fundamental physics, of the so-called “Neo-Digital age”.

Moving back to the cloud, Dana Petcu explained how heterogeneity can be good and bad at the same time. It favors cloud service providers, allowing them to stay competitive in a very dynamic market, especially by exposing unique solutions. On the other hand, it hinders interoperability between services and application portability. Petcu discussed four existing approaches in which she has been involved: mOSAIC for uniform interfaces, MODAClouds for domain-specific languages, SPECS for the user’s quality of experience and HOST for the use of cloud HPC services.
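
To illustrate the “uniform interface” idea behind an approach like mOSAIC (this is not its actual API; every name below is hypothetical), the sketch shows an application coded against a single abstract storage contract, with provider-specific drivers hiding the heterogeneity that would otherwise leak into the application and hurt portability.

```python
# Hypothetical sketch of a provider-agnostic interface; not mOSAIC's real API.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Uniform contract the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStore(ObjectStore):
    """Driver for one provider; in reality it would call that provider's SDK."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

class ProviderBStore(ObjectStore):
    """Driver for another provider with a different native API underneath."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data): self._objects[f"b/{key}"] = data
    def get(self, key): return self._objects[f"b/{key}"]

def archive_results(store: ObjectStore, run_id: str, payload: bytes) -> None:
    # Application code never names a concrete provider, so it stays portable.
    store.put(f"results/{run_id}", payload)

archive_results(ProviderAStore(), "run-001", b"output")
archive_results(ProviderBStore(), "run-001", b"output")
```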

Wolfgang Gentzsch gave an overview of the first two years of his famous UberCloud Experiment. In fact, it was officially announced at the previous edition of the workshop (Tom Tabor himself helped craft the announcement, and Geoffrey Fox was the first to register). I had the honour of participating in the first wave of experiments, and the success of the project (152 experiments and over 1,500 organizations!) is down to the hard work of the organizers.

Tracking and managing big data is a big data problem in itself. This was the starting point for the Digital Asset Management System presented by Carl Kesselman, which frees up more time for the knowledge extraction process. The architecture of the system (delivered as SaaS), “the iPhoto of big data” according to Kesselman, was explained along with an interesting biomedical science use case.

By the way, I contributed to the workshop too. This year I presented two use cases involving “clouds for clouds”, that is, cloud computing for meteorology. In particular, I explained how the efforts made in the context of Martian atmospheric research are benefiting two specific areas back on Earth: the cost optimization of weather forecasting in Spain and the proper scaling of agricultural weather sensor network processing in Argentina.
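
As a flavour of the kind of cost optimization involved (a hedged sketch, not the actual model used in that work), the example below picks, among hypothetical instance types, the cheapest configuration that still delivers a forecast cycle before its deadline. Prices, speeds and the deadline are all illustrative.

```python
# Hypothetical deadline-constrained cost optimization for a forecast cycle.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    hours_per_forecast: float   # wall-clock time for one forecast cycle
    price_per_hour: float       # assumed on-demand price (USD)

CANDIDATES = [
    InstanceType("small",  hours_per_forecast=6.0, price_per_hour=0.10),
    InstanceType("medium", hours_per_forecast=3.0, price_per_hour=0.25),
    InstanceType("large",  hours_per_forecast=1.5, price_per_hour=0.60),
]

DEADLINE_HOURS = 4.0   # a forecast delivered after this is useless

def cheapest_within_deadline(candidates, deadline):
    """Keep only configurations that meet the deadline, then minimize cost."""
    feasible = [c for c in candidates if c.hours_per_forecast <= deadline]
    if not feasible:
        raise ValueError("no configuration meets the deadline")
    return min(feasible, key=lambda c: c.hours_per_forecast * c.price_per_hour)

best = cheapest_within_deadline(CANDIDATES, DEADLINE_HOURS)
print(f"{best.name}: {best.hours_per_forecast * best.price_per_hour:.2f} USD per forecast")
```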

Cetraro’s International Advanced Research Workshop did it again. Considering the quality of the contributions, and recalling Grandinetti’s words from two years ago, that “the workshop is evolving into Fine Arts”, I’m pretty sure that it is evolving indeed… into the “Fine Arts of Cloud, HPC and Big Data”.

About the Author

Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (UCM, Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group (http://dsa-research.org/).

He is (and has been) directly involved in EU-funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.

From 2005 to 2009 his research focused on porting applications onto Grid Computing infrastructures, an activity that put him “where the real action was”. These applications spanned a wide range of areas, from Fusion Physics to Bioinformatics. During this period he acquired the skills needed to profile applications and make them benefit from distributed computing infrastructures. Additionally, he shared these skills in many training events organized within the EGEE Project and similar initiatives.

Since 2010 his research interests have lain in different aspects of Cloud Computing, always with real-life applications in mind, especially those pertaining to the High Performance Computing domain.
