Tag: graph 500
The world’s largest supercomputers, like Tianhe-2, are great at traditional, compute-intensive HPC workloads, such as simulating atomic decay or modeling tornadoes. But data-intensive applications, such as mining big data sets for connections, are a different sort of workload, and they run best on a different sort of computer.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/hat_trick_500.bmp" alt="" width="99" height="58" />The semi-annual HPC "500 list" season, and its attendant fall iron horse racing, is upon us. Thanks to the hard work of the list keepers, we currently enjoy three major lists to review, compare and contrast: TOP500, Green500 and Graph 500. Each focuses on a distinct aspect of HPC – number crunching, energy efficiency, and data crunching, respectively – and together they allow us to construct our own type of Triple Crown.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Convey_boards.jpg" alt="" width="98" height="85" />Last week at SC12 in Salt Lake City, Convey pulled the lid off its MX big data-driven architecture, designed to excel at the graph analytics problems that were at the heart of the show’s unmistakable data-intensive computing thrust this year. The new MX line is designed to exploit massive degrees of parallelism while efficiently handling hard-to-partition big data applications.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/bigdatagraphic_132x.jpg" alt="" width="75" height="105" />Big data is all the rage these days. It is the subject of a recent Presidential Initiative, has its own news portal, and, in the guise of Watson, is a game show celebrity. Big data has also caused concern in some circles that it might sap interest and funding from the exascale computing initiative. So, is big data distinct from HPC – or is it just a new aspect of our evolving world of high-performance computing?
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/generic_lists.bmp" alt="" width="85" height="72" />Since the release of the first TOP500 list in June of 1993, the HPC community has been motivated by the competition to place high on that list. We’re now approaching the twentieth anniversary of the TOP500. In recent years, two additional lists have gained traction: the Green500 and the Graph 500. Would a few more lists be useful?
When it was announced in 2006, the Cray XMT supercomputer attracted little attention. The machine was originally targeted at high-end data mining and analysis for a particular set of government clients in the intelligence community. While the feds have given the XMT support over the past five years, Cray is now looking to move these machines into the commercial sphere. And with the next-generation XMT-2 on the horizon, the company is gearing up to accelerate that strategy in 2011.
If there was a dominating theme at the Supercomputing Conference this year, it had to be GPU computing.
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. This class of applications requires a new kind of supercomputer, along with a different way to assess such systems. That is the impetus behind the Graph 500, a set of benchmarks that aims to measure the suitability of systems for data-intensive analytics applications.
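The Graph 500’s core search kernel is a breadth-first traversal of a large synthetic graph, scored in traversed edges per second (TEPS) rather than FLOPS. A minimal single-threaded sketch of that style of kernel, on a toy adjacency list (the function name and graph here are illustrative, not part of the official reference code, which generates a Kronecker graph with billions of edges):

```python
from collections import deque
import time

def bfs_teps(adj, source):
    """Breadth-first search from `source`, returning the parent map
    and a rough traversed-edges-per-second rate, the metric style
    the Graph 500 uses. `adj` maps each vertex to its neighbor list."""
    parent = {source: source}
    frontier = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges_traversed += 1          # every edge inspection counts
            if w not in parent:
                parent[w] = v             # first visit: record BFS tree edge
                frontier.append(w)
    elapsed = time.perf_counter() - start
    return parent, edges_traversed / elapsed

# Toy 4-vertex graph; the real benchmark runs this kernel from many
# random roots over a graph far too large to fit in one node's cache.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
parent, teps = bfs_teps(adj, 0)
```

The point of the metric is that a kernel like this is dominated by irregular, hard-to-prefetch memory accesses into `adj` and `parent`, so TEPS rewards memory and interconnect performance rather than arithmetic throughput.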
A short list of “can’t miss” sessions at this year’s Supercomputing conference.
HPC at Georgia Tech, PNNL is all atwitter.