January 02, 2013
China has made no secret about becoming a world player in the supercomputing arena. With 72 machines on the latest TOP500 list, the Asian giant is second only to the United States in the number of top systems. Along the same lines, China is also vying to host the world's first exascale computer.
The country may be making a big step in that direction if the latest report from VR-Zone's Theo Valich turns out to be true. According to him, the Chinese government is planning to deploy a 100-petaflop supercomputer within the next 18 months, which would put the country on an extra-fast trajectory to exascale computing.
In fact, if China manages to field such a system in 2014, that would put it a year ahead of the TOP500's performance projection for a 100-petaflop machine, as well as a year ahead of China's original plans for such a system. Last October, the Guangzhou Supercomputing Center was talking about the Tianhe-2 system, a 100-petaflop machine that would succeed China's current number one system, the Tianhe-1A. The Tianhe-2 was slated to be deployed in 2015 by China's National University of Defense Technology, but it's not clear from the VR-Zone piece whether this new supercomputer is simply that system on a faster timeline or an entirely different machine.
Supposedly, this 2014 system is going to be based on Intel parts -- specifically 100,000 Xeon Ivy Bridge-EP CPUs paired with 100,000 Xeon Phi coprocessors. The coprocessors alone should be enough to supply all those FLOPS, given that even the 2013-era Phi parts deliver more than a teraflop apiece. The 100,000 Ivy Bridge Xeons would just be petaflop gravy.
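The arithmetic behind that claim is easy to check. A minimal back-of-envelope sketch, using the round figures from the report (100,000 coprocessors at roughly a teraflop each; these are illustrative numbers, not official specifications):

```python
# Back-of-envelope check of the reported peak-performance claim.
# Figures are the round numbers from the article, not official specs.
XEON_PHI_COUNT = 100_000
TFLOPS_PER_PHI = 1.0  # 2013-era Xeon Phi parts deliver "more than a teraflop apiece"

# Convert aggregate teraflops to petaflops (1 petaflop = 1,000 teraflops).
phi_total_pflops = XEON_PHI_COUNT * TFLOPS_PER_PHI / 1_000
print(phi_total_pflops)  # 100.0 -- the coprocessors alone reach 100 petaflops
```

Anything contributed by the 100,000 Ivy Bridge Xeons sits on top of that figure, which is why the CPUs amount to "gravy" in this accounting.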
According to Valich's (unnamed) source, the project's processors are expected to cost around $100 million. As he noted, that's likely to be well below the retail price for this hardware, given that both the high-end Xeon and Xeon Phi parts on the market today run well above $1,000. As a result, Valich estimates the retail price for all this processing power would normally total over $500 million. Assuming this is all true, that would mean Intel won't even cover its costs for the silicon on the deal.
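Valich's $500 million retail figure is consistent with a simple multiplication. A hedged sketch, assuming an average street price of $2,500 per part (the article only says each part runs "well above $1,000"; the average is an illustrative assumption):

```python
# Rough retail-price estimate implied by the report.
CHIP_COUNT = 200_000       # 100,000 Xeon CPUs + 100,000 Xeon Phi coprocessors
AVG_RETAIL_USD = 2_500     # assumed average unit price, not a quoted figure

retail_total = CHIP_COUNT * AVG_RETAIL_USD
print(f"${retail_total:,}")  # $500,000,000 -- in line with Valich's estimate
```

Against the reported $100 million deal price, that would put Intel's discount at roughly 80 percent off retail.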
When asked by HPCwire to confirm or deny the report, Intel responded with "We don't comment on any rumors and speculations."