The HPC Advisory Council, in conjunction with Stanford University, is hosting its conference and exascale workshop this week. On the agenda are several themes tied to the next level of computing, from best practices to exploration of the novel architectural features that will land us in the exascale era.
We spoke with a couple of those present this morning, including Gilad Shainer, Chair of the HPC Advisory Council (and VP of Marketing at Mellanox), who said that the emphasis on exascale hasn’t shifted, even if the timelines have over the last few years. He noted that many of the conversations there revolved around the present—notably the current pre-exascale system designs that we will be seeing emerge over the next couple of years.
Another attendee, Addison Snell, CEO of Intersect360 Research, remarked on a few other themes emerging in conversations and sessions. We spoke with him just after Intel's Mark Seager presented on "The Challenges of Exascale Systems from an Applications Perspective," which highlighted the technologies that need to be explored, in terms of power consumption, memory, system software, and application software, in order to achieve practical exaflop scalability within the timeframes and targets set forth by the DoE.
“Although we use the word ‘exascale,’ it’s obvious that we mean exaflop. Talk of exascale is an acknowledgment that it is more complicated than stacking up 1,000 times more cores. With Japan already planting a stake in the ground that they will field an exascale system in 2020, Intel is interested in seeing the U.S. DoE join the race, though I think they have reasons for this beyond patriotism,” he said.
Another session that generated conversation today was D.K. Panda's overview of exascale programming models. The Ohio State researcher drilled down into the evolution of current programming approaches that must take place before exascale-class computing can become a reality. Snell commented that in Intersect360's own research into exascale use cases, conducted in partnership with the U.S. Council on Competitiveness, "software scalability has emerged as the dominant perceived barrier." He argues that the community will "need to look at how MPI needs to evolve—or be replaced—with the architectures of the next 10 years."
The agenda is peppered with similar arguments across the spectrum, from rethinking individual technologies to approaching all elements of the system in novel ways to deliver exascale-level performance within efficiency targets. Snell says these conversations are what make the event unique and valuable. As he noted, "The HPC Advisory Council is emerging as the predominant, international, vendor-neutral conference series bringing together the vendor and user communities to present and discuss forward-looking issues in HPC." This doesn't mean the community won't still gather at tradeshows like SC or ISC as showcases. Rather, he says that "the HPC Advisory Council is where you can go and talk about the technologies, the standards and the work that is ongoing in order to keep real applications moving forward."
If you’re in the Stanford area, you’re not too late—the event runs through the 5th. More info here: http://www.hpcadvisorycouncil.com/events/2014/stanford-workshop/agenda.php