The Week in Review
Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Supercomputer, Supernodes, Superhero? Meet Flash “Gordon”
SDSC announced this week that it had received a $20 million grant from the NSF to create a supercomputer dedicated to solving the pressing scientific and societal problems of our time. Gordon — as the system is called — will employ flash memory to help it run data-intensive applications 10x faster.
According to SDSC Interim Director Michael Norman, who is also the project’s principal investigator:
“This HPC system will allow researchers to tackle a growing list of critical ‘data-intensive’ problems. These include the analysis of individual genomes to tailor drugs to specific patients, the development of more accurate models to predict the impact of earthquakes on buildings and other structures, and simulations that offer greater insights into what’s happening to the planet’s climate.”
Gordon will be built on Appro’s next-generation Xtreme-X architecture, with deployment targeted for mid-2011, and will be made available to the research community through a network of high-performance computers. The supercomputer will contain 32 “supernodes” based on the latest Intel processors and virtual shared-memory software from ScaleMP, connected by an InfiniBand network. All told, Gordon will possess 245 teraflops of total compute power, 64 terabytes of DRAM, 256 terabytes of flash memory, and four petabytes of disk storage — placing it among the top 30 supercomputers in the world.
NCSA Presents GPU Primer
GPUs are showing up everywhere lately, but how much do we know about them? Test your knowledge by checking out this explanatory piece, put together by IACAT and NCSA staff. It starts out:
Graphics processing units (GPUs) aren’t just for graphics anymore. These high-performance “many-core” processors are increasingly being used to accelerate a wide range of science and engineering applications, in many cases offering dramatically increased performance compared to CPUs.
Significant biomolecular, computational chemistry, astrophysical, condensed matter physics, weather modeling and seismic stack migration applications already have benefited substantially from or show substantial promise for using GPUs.
The Q&A runs the gamut from the most basic of questions, such as “How is a GPU different from a CPU?” to more advanced material, such as “How do I adapt my application to use GPUs?”
The authors explain that even with the current GPU popularity, CPUs are still needed for certain tasks, such as accessing data from the disk and exchanging data between compute nodes in a multi-node cluster. Also, because CPUs are general-purpose processors and GPUs are not, GPUs offer little support for I/O devices, interrupts, and complex assembly instructions.
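That division of labor can be seen in the shape of a typical GPU-accelerated program: the CPU handles general-purpose chores (I/O, setup, data movement) and offloads only the numeric work to the GPU. The following CUDA sketch illustrates the idea; the array size, scale factor, and kernel are arbitrary examples, not drawn from the NCSA primer.

```cuda
// Illustrative sketch: CPU (host) does general-purpose work, GPU (device)
// accelerates the arithmetic. Values here are arbitrary examples.
#include <cstdio>
#include <cstdlib>

// GPU kernel: each thread scales one array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) data[i] *= factor;                   // the GPU's share: compute
}

int main() {
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));

    // CPU's share: I/O and setup. (Here we just fill the buffer; a real
    // application would read input from disk or the network on the host.)
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // Move data to the GPU, run the kernel, move results back.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);  // offload to the GPU

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element after scaling: %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```

Note that everything outside the kernel — allocation, file or network access, and inter-node communication in a cluster — still runs on the CPU, which is exactly the point the primer's authors make.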
This is an excellent primer for anyone wishing to learn the basics of GPU computing, for those looking to refresh their GPU knowledge, or for industry veterans looking for a simple way to explain this material to less technical folks.