June 07, 2012
Despite the volatile economic conditions in Europe and Japan, both regions continue to be hotbeds of supercomputer deployments. This week alone, four new university systems were announced: two already installed, the other two on order.
First up are a couple of systems that have been deployed at two German universities in the state of Rhineland-Palatinate. According to the press release, both are already up and running, one at Johannes Gutenberg University Mainz, and the other at the University of Kaiserslautern.
The supercomputers will only be available to researchers at the two universities, as part of a joint HPC facility known as the Alliance for High-Performance Computing Rhineland-Palatinate (AHRP). Access to the machines will be provided via a 120 Gbps network pipe connecting the Mainz and Kaiserslautern campuses.
The Mainz system is a 287-teraflop cluster, known as "Mogon" (the Roman name for Mainz), while the University of Kaiserslautern will be host to a smaller machine, known as "Elwetritsch" (named after a mythical creature of southwest Germany). Elwetritsch is said to be about half the size of its Mainz sibling (although no flops rating was provided), and is slated for expansion in 2013. The new systems will host an array of science and engineering applications in physics, mathematics, biology, medicine, and the geosciences.
Mogon and Elwetritsch came with a price tag of €5.5 million ($6.9 million), an investment that was shared between the German federal government, the German Research Foundation, and the two universities. System vendors were not revealed.
Meanwhile in the UK, the University of Leicester announced plans to install a multi-million pound (pound sterling, not tonnage) supercomputer there sometime this summer. The system will be dedicated to astronomy apps, supporting research in areas like dark matter studies, star formation, and black hole physics.
Once again, the machine's flops performance was not revealed, but the cost suggests something in the hundreds of teraflops range. HP will provide the system.
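The "hundreds of teraflops" inference can be sanity-checked with a back-of-envelope calculation. The figures below are assumptions, not from the announcement: a "multi-million pound" budget is taken as roughly £3 million, converted at the mid-2012 rate of about $1.55 per pound, against typical 2012-era commodity cluster pricing on the order of $10K–$25K per peak teraflop.

```python
# Back-of-envelope estimate: peak teraflops implied by the budget.
# All inputs are assumptions for illustration, not figures from the article.
budget_gbp = 3e6                 # assumed "multi-million pound" budget
usd_per_gbp = 1.55               # approximate mid-2012 exchange rate
budget_usd = budget_gbp * usd_per_gbp

# Assumed 2012-era cluster pricing range, in dollars per peak teraflop
for usd_per_tflop in (10e3, 25e3):
    tflops = budget_usd / usd_per_tflop
    print(f"~{tflops:,.0f} peak teraflops at ${usd_per_tflop/1e3:.0f}K per teraflop")
```

Under these assumptions the estimate lands between roughly 190 and 465 peak teraflops, consistent with the "hundreds of teraflops" guess.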
The fourth new supercomputer announced this week is a new Fujitsu PRIMEHPC FX10 machine for the University of Kobe, in Japan. The system will be used for "creating new fields of research and interdisciplinary areas utilizing supercomputer technology."
The PRIMEHPC FX10 is the commercial implementation of Japan's famous K computer, the current reigning champ of the TOP500. Although it uses the older-generation SPARC64 VIIIfx CPU, the original K super delivers over 10 petaflops of performance. By contrast, the new SPARC64 IXfx-powered system to be installed at Kobe is a much smaller machine, and will deliver just 20 teraflops. It's scheduled for boot-up in August.