HPC is waking up to its own spring this week, especially on the new cluster announcement front.
The most notable story in that vein is the formal detailing of the anticipated NERSC-8 system (dubbed “Cori”), which we’ll certainly be watching as it’s set to be outfitted with Intel’s next-generation Knights Landing architecture housed inside a Cray XC. More details (and to answer some of your emailed questions, that’s all the information they would give us due to NDAs) will hopefully emerge as the system’s late 2015 arrival approaches, particularly on the unique burst buffer component that’s expected.
Speaking of burst buffers, following a talk on the subject from the man who coined the phrase (and thereby set off the buzz around this emerging area), Los Alamos Lab’s Gary Grider, we got a better sense of why these are increasingly important in large-scale and pre-exascale systems. In addition to exploring some of the economics, he set forth how they might move from pure storage to active, smart devices that can be integrated into workflows following some software work over the next few years.
Going back to the new cluster news, a few universities and research centers have christened new systems this week. Building on the success of the Mills High-Performance Computing (HPC) cluster, the University of Delaware is deploying a second community cluster to perform complex computational tasks for researchers in engineering; the physical, natural, social, policy, and decision sciences; and finance. While the university hasn’t shared details on vendor or architecture, the 200-node Mills system was outfitted with AMD Interlagos processors and an InfiniBand interconnect. We’ll keep our ears open for more on that machine.
Rutgers University-Newark is about to add a $700,000 cluster from an unnamed vendor to its arsenal. NM3 (for Newark Massive Memory Machine) will have “1,500 processors and massive amounts of shared random-access memory (RAM) – with all of the CPUs performing complex tasks simultaneously and transmitting data among themselves efficiently. It will also contain significant data-storage capacity.” Again, pretty light on detail, but congrats to the university.
Congratulations also to Wayne State University (WSU), which is the recipient of the Silicon Mechanics 3rd Annual Research Cluster Grant, a program in which the company and its partners are donating a complete high-performance compute cluster. The eight-node system, outfitted with Intel Xeon Phi coprocessors and NVIDIA GPUs, comprises hardware and software donated by Intel, NVIDIA, HGST, Mellanox Technologies (gigabit Ethernet), Supermicro, Seagate, Kingston Technology, Bright Computing, and LSI Logic.
On the research center front, the Center for Advanced Medical Imaging Sciences (CAMIS), part of the Radiology Department at Massachusetts General Hospital and Harvard Medical School, has selected SGI to enhance its research capabilities for diagnostic medical imaging. CAMIS ultimately chose the industry-standard SGI UV 2000 and SGI UV 20 large shared-memory systems along with SGI InfiniteStorage 5000. CAMIS purchased the solution through ComnetCo, an SGI channel partner.
More Top News This Week
Red Hat has announced that it plans to acquire Inktank, a leading provider of scale-out, open source storage systems. Inktank’s flagship technology, Inktank Ceph Enterprise, delivers object and block storage software to enterprises deploying public or private clouds, including many early adopters of OpenStack clouds. Combined with Red Hat’s existing GlusterFS-based storage offering, the addition of Inktank positions Red Hat as the leading provider of open software-defined storage across object, block, and file system storage.
On April 25, ASC14, the largest student supercomputer challenge, concluded with great success. Shanghai Jiao Tong University (China) was the champion, and Nanyang Technological University (Singapore) won the silver prize. The Highest Linpack award went to Sun Yat-sen University, and the brand new “e Prize” was awarded to Shanghai Jiao Tong University. More details about the event can be found here: http://www.hpcwire.com/off-the-wire/asc14-winners-announced/
Mellanox is working with IBM to deliver a high-performance infrastructure for NoSQL databases and In-Memory Data Grids (IMDGs), utilizing IBM POWER8-based systems and Mellanox 40GbE adapters and switches featuring RDMA over Converged Ethernet (RoCE). The systems will support the co-location of data and computation in a distributed fashion. Mellanox says this modular approach enables a 10X increase in application throughput while reducing latency by more than 7X.
X-ISS has released Version 14.1 of its DecisionHPC software for HPC cluster management. Upgrades include more comprehensive cluster Scheduler Reports and an Attribute Heat Map visualization capability, which renders cluster data as a color-coded matrix showing which users, projects, and applications are processing, and where on the system they are running.
SanDisk Corporation announced the Lightning Gen. II family of enterprise-class 12Gb/s Serial Attached SCSI (SAS) SSDs. The new Lightning Gen. II SSD product family doubles interface speeds over previously available 6Gb/s SSDs. The product family will be available for sampling with select OEM customers and through the channel in the third quarter of 2014.
We will be out and about this month and through the summer. Of course, we’ll be at the International Supercomputing Conference in June and at TeraTec in France in early July, but before that, at our own Leverage Big Data Summit being held in late May in San Diego.