Here is a collection of highlights from this week’s news stream as reported by HPCwire.
James River Technical Becomes Reseller for Bright Computing
Xilinx Stacked Silicon Interconnect Extends FPGA Technology
Oracle Makes Strategic Investment in Mellanox Technologies
IBM Analytics Software Used to Uncover Stroke Complications
Software Speeds Up the Processing of Gigapixel Images
Cray Lands $60 Million Contract from University of Stuttgart
Voltaire Fabric Collective Accelerator Available for Platform MPI
CAPS Adds HMPP Support for Windows HPC Server, Visual Studio
T-Platforms, U of Heidelberg to Develop New HPC Interconnect
Lawrence Livermore Expands Use of Rogue Wave Acumem Software
Teradata’s Updated Platform Family Sports Intel Processors, SSD Technology
Tsinghua University Deploys NAG Numeric Library
The Beagle Has Landed in Chicago
Berkeley Lab in Search for New Director of Computational Research Division
Chinese Introduce World’s Fastest Supercomputer
NVIDIA got the chance to beat its GPU computing drum today mere weeks before the SC10 conference takes place in New Orleans. China’s premier supercomputer, Tianhe-1A, powered by NVIDIA’s graphics chips (7,168 of them, in fact), made its official debut today at HPC 2010 China. The machine has set a new performance record of 2.507 petaflops, as calculated by the LINPACK benchmark. By that measure, Tianhe-1A is the fastest system in the world today, likely destined to grab the coveted number one spot on the next TOP500 list to be announced at SC.
This degree of speed was made possible by GPUs, specifically NVIDIA Fermi Tesla GPUs. The game-changing nature of these graphics processors is illustrated by the fact that GPUs now power two of the top three fastest computers in the world today; China’s Dawning-built “Nebulae” system is currently sitting in the number two spot on the TOP500.
From the release:
Tianhe-1A epitomizes modern heterogeneous computing by coupling massively parallel GPUs with multicore CPUs, enabling significant achievements in performance, size and power. The system uses 7,168 NVIDIA Tesla M2050 GPUs and 14,336 CPUs; it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.
More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 megawatts. Thanks to the use of GPUs in a heterogeneous computing environment, Tianhe-1A consumes only 4.04 megawatts, making it 3 times more power efficient — the difference in power consumption is enough to provide electricity to over 5000 homes for a year.
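The efficiency claim checks out with simple arithmetic. A quick sketch, using only the figures quoted in the release (the megaflops-per-watt framing is our own way of expressing the comparison, not NVIDIA's):

```python
# Back-of-the-envelope check of the power-efficiency figures quoted
# in the release. All input values come directly from the quoted text.
linpack_pflops = 2.507   # Tianhe-1A LINPACK result, in petaflops
gpu_system_mw = 4.04     # actual power draw of the heterogeneous system, in MW
cpu_only_mw = 12.0       # estimated draw of a CPU-only equivalent, in MW

# Efficiency in megaflops per watt (1 petaflop = 1e9 megaflops, 1 MW = 1e6 W)
gpu_eff = linpack_pflops * 1e9 / (gpu_system_mw * 1e6)
cpu_eff = linpack_pflops * 1e9 / (cpu_only_mw * 1e6)

print(f"heterogeneous: {gpu_eff:.0f} Mflops/W")
print(f"CPU-only:      {cpu_eff:.0f} Mflops/W")
print(f"ratio:         {gpu_eff / cpu_eff:.1f}x")  # matches the "3 times" claim
```

Since the same LINPACK score is assumed for both machines, the ratio reduces to 12 / 4.04 ≈ 3.0, which is where the release’s “3 times more power efficient” figure comes from.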
For deeper insight and a comparison of China’s preeminent system with the United States’ own fastest, Jaguar, be sure to check out HPCwire Editor Michael Feldman’s blog post.
According to Feldman, if the Chinese still hold their lead come next month’s TOP500 installment, it will be the first time in six years that a non-US machine has held the number one spot; the last to do so was Japan’s Earth Simulator, which reigned from 2002 to 2004. Feldman also astutely points out that the US, Germany, and the UK currently have no GPU-equipped systems on the TOP500.
Cray’s Cascade System to Debut at University of Stuttgart
Cray marks another XE6 win this week. Cray and the University of Stuttgart announced Tuesday that Cray would provide a supercomputing system for the university’s High Performance Computing Center Stuttgart (HLRS). The contract has two phases: the delivery of a Cray XE6 supercomputer in 2011 and the future delivery of Cray’s next-generation supercomputer, code-named “Cascade,” due out in the second half of 2013. Consisting of products and services, the deal is valued at more than $60 million (45 million euros), and follow-up expenses are estimated at $40 million (30 million euros) for maintenance and energy.
The supercomputer will strengthen the computing prowess of the entire European community. From Cray’s release:
The new Cray system at HLRS will serve as a supercomputing resource for researchers, scientists and engineers throughout Europe. HLRS is one of the leading centers in the European PRACE initiative and is currently the only large European high performance computing (HPC) center to work directly with industrial partners in automotive and aerospace engineering. HLRS is also a key partner of the Gauss Centre for Supercomputing (GCS), which is an alliance of the three major supercomputing centers in Germany that collectively provide one of the largest and most powerful supercomputer infrastructures in the world.
The announcement is short on system specs, such as core counts and FLOPS, but the nature of the Cascade system and a statement from the director of HLRS point in the direction of petaflops:
“HLRS and Cray have enjoyed a long history in ensuring our researchers and scientists are equipped with innovative supercomputing systems that are built with leading-edge supercomputing technology,” said Prof. Dr. Michael Resch, director of HLRS. “Cray is just the right partner as we enter the era of petaflops computing. Together with Cray’s outstanding supercomputing technology, our center will be able to carry through the new initiative for engineering and industrial simulation. This is especially important as we work at the forefront of electric mobility and sustainable energy supply.”
The highly anticipated Cascade supercomputer will feature Cray’s Linux Environment, its HPC-optimized programming environment, and a next-generation interconnect chipset code-named “Aries,” a follow-on to Gemini. Additionally, Cascade will be Cray’s first capability supercomputer based on Intel x86 processors. Cray explains that Cascade was made possible in part by the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing Systems program. Cascade’s novel design will likely set the stage for future exascale architectures.