Earlier today AMD met with its closest partners to discuss the future of supercomputing and the challenges associated with exascale computing.
Gathered at the Four Seasons Hotel in San Francisco, members of the high-ranking industry panel, which included executives from AMD and Cray, took turns pointing out the perceived barriers to exascale. The usual suspects emerged: power requirements, parallel programming obstacles, hardware failure rates, and so forth.
The group also weighed in on the subject of HPC-as-a-Service, the practice of leveraging the cloud to run compute-intensive applications, as reported in a news item from V3.co.uk.
According to Chuck Moore, AMD corporate fellow and technology group chief technology officer, the current cloud platform model is not a good match for next-generation supercomputing.
“You tend to write applications that spread out among many systems and come back with a result,” Moore said of the cloud paradigm.
“While certain types of HPC spread work out, they do so with a very different set of latency constraints and thinking; it is not like you can just pick up that application and run it on a cloud.”
While it’s true the extra software layer imposes some limitations on HPC in the cloud, an on-demand model also has real benefits, namely scalability and paying only for what you use. Not all HPC applications are suited to a distributed computing model, but many are.
As for the exascale timeframe, participants’ responses were mixed, with 2020 cited as the outside estimate. Of course, by then, many of the barriers to HPC-as-a-Service that Moore mentions may well be resolved.