December 08, 2011
Anyone who hasn't noticed the ascendancy of China in high-tech has probably been sleeping in a cave since about 2005. Assuming you are at least a casual reader of HPCwire, then you're already well aware of the rise of Chinese supercomputing over the past few years. But it doesn't stop there. The country is determined to become a technology superpower.
Certainly China has been on a fast track to supercomputing stardom. Although still number two to the US in sheer numbers of supercomputers, the Asian nation currently has 74 systems on the TOP500 list, including the number 2 (Tianhe-1A) and number 4 (Nebulae) machines. Five years ago, it had just 18 such systems, and none in the top 10.
More recently, China designed and built the Sunway BlueLight MPP supercomputer, a petaflop-capable system, using home-grown CPUs. More indigenously produced HPC machines are on the way as companies like Lenovo and Dawning ramp up their penetration of the domestic market.
The larger story of China's high-tech rise is being taken up by the mainstream media. For example, the New York Times this week reported that China "will soon have the world’s largest domestic market for both Internet commerce and computing." That local market is driving innovation up and down the computer food chain.
Some of the innovation resembles that of Silicon Valley, where fast-growing startups and a workaholic culture are fueling a growing influx of venture capital: $7.6 billion today, up from $2.2 billion in 2005. At the same time, Chinese patents are being issued at a breakneck rate, overtaking those of South Korea and Europe and catching up with the US and Japan.
But, as the NYT piece reports, some innovation there takes a different form. According to Clyde Prestowitz, president of the Economic Strategy Institute, much of the new technology in China is based on continuous improvement, something Prestowitz says the US and other Western countries are less adept at.
For example, two homemade Chinese CPUs -- the ShenWei SW1600 used in the Sunway BlueLight super, and the Godson-3B processor that will power an upcoming Dawning system -- are based on RISC designs originally developed in the US. But both chips, the NYT article points out, are among the most efficient in performance-per-watt, which is becoming the critical metric for supercomputing.
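To make the performance-per-watt metric concrete, here is a minimal sketch of how that efficiency figure is computed. The function is illustrative; the sample numbers are approximate published Linpack and power figures for Tianhe-1A, not values taken from this article.

```python
def flops_per_watt(linpack_pflops: float, power_mw: float) -> float:
    """Return sustained megaflops per watt for a system,
    given its Linpack result (petaflops) and power draw (megawatts)."""
    flops = linpack_pflops * 1e15   # petaflops -> flops
    watts = power_mw * 1e6          # megawatts -> watts
    return flops / watts / 1e6      # flops/watt -> megaflops/watt

# Tianhe-1A: roughly 2.57 petaflops Linpack at roughly 4.04 MW
print(round(flops_per_watt(2.57, 4.04)))  # ~636 MFlops/W
```

The same arithmetic underlies efficiency rankings such as the Green500, which order systems by this ratio rather than by raw performance.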
As IDC pointed out during a presentation last month at SC11, the Chinese are investing heavily in HPC, including the supercomputer centers themselves. The country intends to have at least 17 petascale-capable facilities within the next five years, which would rival the capacity of the US and Europe.
None of this is escaping the notice of the HPC community. The Times article quotes Donna Crawford, the associate director of computation at the Lawrence Livermore National Laboratory, who notes, “The overall point of all of this is that the Chinese understand the importance of high-performance computing.”
That's not to say China is a high-tech utopia. The country is still behind its competition in semiconductor technology (by three generations, according to the NYT article). And the lack of intellectual property protection may discourage entrepreneurs looking to maximize profit from specific inventions.
But China has started churning out hardware and software engineers in tremendous numbers, some of whom are being trained at the best engineering schools in the world, like UC Berkeley and MIT. It is these engineers who will form the next wave of Chinese tech innovators. Let loose in the largest domestic technology market in the world, this next generation of techies may well create the next Silicon Valley.