This week’s HPC news wrap comes from high above the U.S. during a fly-back from the GPU Technology Conference, which took place in San Jose, California. NVIDIA touted record attendance for the event, which is in its fifth year. Eye candy aside (there’s always plenty to be found at a graphics-centric show), we were thrilled with the level of high-end computing investment in terms of talks, poster sessions and new product sneak peeks.
As we discussed in some detail around a few key announcements on the interconnect and future roadmap fronts, there was no shortage of HPC to be found—in fact, the supercomputing sessions seemed to have been stepped up considerably, in part due to the efforts of Jack Wells from Oak Ridge, who chaired the extreme scale GPU computing series at GTC this year. More on that, in the context of GPU usage on Titan in particular, can be found here.
While a great deal of content at GTC focused on HPC, there was quite a bit of conversation around large-scale data analysis, graph analytics, and the future of platforms designed to tackle massive datasets with high performance approaches. While not GPU-specific, just before leaving for GTC we spoke on the podcast with Dr. Geoffrey Fox from Indiana University about how the two worlds of HPC and “big data” are blending (and also opposed)—a great overview for those interested.
We’ll be better equipped to put GPU and accelerator/coprocessor momentum for HPC (not to mention all of the work being done to address data-intensive computing needs) into a more focused spotlight in early April, when we catch up with IDC at their User Forum event in Santa Fe.
While we were in GPU land, the news cycle refreshed with a few important system upgrade and new build announcements. Without further delay…
This Week’s Top News Items
At the core of the new system is the FUJITSU Supercomputer PRIMEHPC FX10. Due to commence operation in April 2014, it will have a theoretical peak performance of 90.8 teraflops (TFLOPS).
The RIKEN SPring-8 Center currently plans to use the K computer to analyze the enormous volumes of data being generated by the SACLA X-ray free-electron laser, with the goal of understanding the structures and functions of nanomaterials.
ARCHER (an acronym for Academic Research Computing High End Resource) will help researchers carry out sophisticated, complex calculations in diverse areas such as simulating the Earth’s climate, calculating the airflow around aircraft, and designing novel materials.
The French Alternative Energies and Atomic Energy Commission (CEA) – working on behalf of F4E to implement and run the datacenter for nuclear fusion at Rokkasho in Japan – is expanding the power of the Helios supercomputer by equipping it with additional bullx nodes featuring Intel Xeon Phi coprocessors.
Helios, which is designed and operated by Bull, supports research work aimed at controlling nuclear fusion, so as to refine a sustainable energy source that produces no carbon dioxide emissions or other greenhouse gases. The system provides modeling and simulation capacity which is open to all European and Japanese researchers under the ‘Broader Approach’, a research program that complements the international cooperative ITER program.
Super Micro has debuted the first server of its new Ultra Architecture SuperServer series, the 2U 2-Node UltraTwin. This new 2U SuperServer features two hot-swappable 1U nodes, each supporting dual Intel Xeon E7-2880 v2 processors, up to 1TB in 32x DIMM slots, 2x 2.5″ NVMe SSDs, 8x 12Gb/s SAS 3.0 2.5″ HDD/SSDs, PCI-E 3.0 expansion via 2x full-height, half-length and 1x MicroLP cards, and onboard support for 2x 10GBase-T ports.
UltraTwin supports redundant 1280W (1+1) Platinum Level High-Efficiency (95%) Digital Switching power supplies powering new proprietary serverboards designed to maximize compute/memory density and eliminate CPU pre-heat. High core counts, large memory capacity and accelerated storage technologies, combined with wide I/O bandwidth, make this new system well suited for virtualization and memory-bandwidth-intensive applications in datacenter and HPC clusters.
The NSF has awarded a $500,000 grant to researchers at Texas Tech University to develop a new supercomputer prototype that could lead to more efficient data-intensive computing – and speed up the scientific discovery cycle.
The team’s goal is to create a supercomputer that will enable academic departments, cross-disciplinary units and collaborators to analyze and utilize their data with accuracy, speed and efficiency. In data-intensive workloads, researchers spend the majority of their time moving and manipulating data rather than doing actual computing; the computing time is significantly less than the data access and movement time.
On the Road
We are booked and ready to roll for a few upcoming events, including the IDC User Forum in Santa Fe and of course, the International Supercomputing Conference in Leipzig, Germany. In between, there are a few other events and happenings on the horizon:
A Final Note…Our Sympathies
Ricky Kendall, former Group Leader for Scientific Computing and NCCS Chief Computational Scientist, passed away on Tuesday, 18 March 2014, following a heart attack. He was 53 years old. Ricky was critical to building the Oak Ridge Leadership Computing Facility, and our Scientific Computing Group in particular. His ‘whatever it takes’ attitude clearly helped set the tone for the success of what has been a very ambitious Leadership Computing initiative. Indeed, Ricky was formally recognized for his leadership at the ORNL 2011 Honors and Awards ceremonies.