November 14, 2011
For the first time since the TOP500 group began publishing their list of the fastest computers in the world, there was no turnover in the top 10 machines. In fact, the only change at the top was the new record Linpack mark set by the now fully deployed K Computer at RIKEN. After installing the remaining racks over the last six months, Fujitsu expanded that system's performance from 8.16 to 10.51 petaflops, a value that exceeds the next seven fastest systems combined.
The current top of the list looks like this:

[Top 10 table not reproduced]
But despite the inactivity at the top, aggregate performance for the whole list ticked up quite respectably over the last six months, from 58.7 petaflops to 74.2 petaflops. And, as is usually the case, the bottom of the list experienced lots of turnover. The new entry point into the list is now 50.94 teraflops, more than 10 teraflops above what was required in June. In fact, the new number 500 system, an HP ProLiant cluster for an "IT Service Provider," was sitting at number 301 just six months ago.
From a geographic point of view, the US still owns the majority of the 500 fastest supercomputers, with 263, adding 8 more systems since June. In second place is China with 74, having added 13 new machines over the last six months. Japan (30), the UK (27), France (23), Germany (20), Canada (9), Poland (6), Russia (6), and Australia (4) round out the top 10.
GPUs are continuing to gain share at a healthy clip, with a total of 37 systems now sporting the graphics chips -- about twice as many as there were in June. NVIDIA parts are in 35 of these systems, while AMD (ATI) GPUs managed just two appearances.
But the x86 CPU is still the king of HPC. Intel Xeon processors have the lion's share of the CPUs on the top supers, appearing in 76.8 percent of the systems. The majority of those are Westmere-EP (Xeon 5600 series) processors. AMD Opterons represent just 12.6 percent of the total, while IBM Power chips, of one sort or another, are in 9.2 percent of the machines.
Perhaps the most discouraging trend is that of power consumption at the top of the list. Average power draw for the fastest 10 machines is 4.56 MW, up slightly from 4.3 MW in June. More importantly, the average power efficiency of the elite supercomputers is 464 megaflops/watt, the same as it was six months ago.
For those looking forward to exascale supercomputers before the end of this decade, that's rather disheartening. There is a general consensus that exaflop machines should consume no more than 20 MW, which translates into 50 gigaflops/watt. Not only are current power efficiencies two orders of magnitude off, but for the time being at least, progress in this area seems to have stopped. As plans for exascale computers start to solidify in the next few years, look for this TOP500 metric to come under increasing scrutiny.
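The arithmetic behind that gap is worth spelling out. A quick sketch, using only the figures cited above (the 20 MW exascale power envelope and the current 464 megaflops/watt top-10 average):

```python
# Power-efficiency gap between today's top 10 and an exascale target.
EXAFLOP = 1e18           # flops: one exaflop
POWER_BUDGET_W = 20e6    # watts: the widely cited 20 MW exascale envelope

# Required efficiency: 1e18 flops / 2e7 W = 5e10 flops/watt
target_eff = EXAFLOP / POWER_BUDGET_W
print(target_eff / 1e9)  # -> 50.0 (gigaflops/watt)

# Current top-10 average efficiency from the November 2011 list
current_eff = 464e6      # 464 megaflops/watt, in flops/watt

# Ratio: roughly 108x, i.e. about two orders of magnitude
print(round(target_eff / current_eff, 1))
```

Dividing the target by the current average gives a factor of roughly 108, which is where the "two orders of magnitude" figure comes from.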