This Week in HPC News

By Nicole Hemsoth

February 27, 2014

It’s been another packed week in high performance computing, with a bevy of new partnerships, supercomputing installations, and news about upcoming systems hitting our radar.

Interestingly, for all the top news items summarized below, one of the most fascinating stories this week (at least judging by the massive number of views and listens) is the podcast interview with a certain mysterious John Fitzpatrick, who claims to have $50 billion lined up for an exascale-class supercomputer in Oregon that he’ll open for currency trading (with donated time for science). It was hard to hide the skepticism during the interview, but the story couldn’t be ignored in case it actually (somehow) happens. Oh, and by the way, he says the system will be up and running in 2014. So there’s that. (Yes, that’s what I thought too)…

If you’re in the habit of listening, we try to keep things a bit more grounded. One more item on the (more realistic) speculative technology side to consider is our interview with Dr. Larry Smarr, who talks about everything from new materials to quantum computers to the exascale systems of the future. Other topics this week included adapting Cray machines for whole genome analysis at Argonne, networks for powering climate research, architectural considerations for astrophysics applications, and more. A fun week, for sure.

Several news items to cover in no particular order…

Superlabs Unite for Supercomputing Prowess

A new collaboration of Oak Ridge, Argonne and Livermore (CORAL) will seek to develop systems in the 2017-2018 timeframe to support the research missions at their respective institutions.

A joint Request for Proposals for the CORAL procurement was issued Jan. 6 and responses were submitted Feb. 18. These are now being evaluated. The intention is that CORAL partners will select two different vendors and procure a total of three systems, two from one vendor and one from the other. Livermore is leading the procurement process.

Livermore’s system, to be called Sierra, will be best suited to support the applications critical to stockpile stewardship. Oak Ridge and Argonne will employ systems that meet the needs of their DOE Office of Science missions under the Advanced Scientific Computing Research (ASCR) program. Vendors are submitting test clusters now.

A3Cube Comes out of Stealth

The company emerged this week with its ‘brain-inspired’ data plane, encapsulated in a network interface card (NIC) and aimed at transforming storage networking by eliminating the I/O performance gap between CPU power and data access for HPC, big data and data center applications.

According to the company, the RONNIEE Express data plane elevates PCI Express from a simple interconnect to an intelligent network fabric, leveraging the ubiquity and standardization of PCIe while solving its inherent performance bottlenecks. A3Cube’s In-Memory Network technology, for the first time, allows direct shared non-coherent global memory across the entire network, enabling global communication based on shared memory segments and direct load/store operations between the nodes. The claimed result is the lowest possible latency, massive scalability and performance well beyond today’s network technologies, including Ethernet, InfiniBand and Fibre Channel.
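For readers who haven’t worked with this model: the pitch is essentially that communication becomes plain memory reads and writes into a shared segment rather than explicit message passing. Here is a minimal single-node analogy in C using POSIX shared memory; A3Cube’s fabric would extend this load/store model across nodes over PCIe, and the segment name and sizes below are purely illustrative, not their API.

    /* Single-node analogy for shared-memory-segment communication:
     * two processes exchange data with plain loads and stores into a
     * mapped segment instead of send()/recv(). Illustrative only;
     * compile with -lrt on older glibc. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_segment";                 /* hypothetical segment name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);
        char *seg = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        volatile char *flag = seg;                          /* byte 0: data-ready flag */

        if (fork() == 0) {                                  /* the "remote" peer */
            while (*flag == 0)                              /* wait for the flag...   */
                usleep(1000);
            printf("peer read: %s\n", seg + 1);             /* ...then a plain load   */
            return 0;
        }
        strcpy(seg + 1, "hello over the data plane");       /* plain stores, no send() */
        __sync_synchronize();                               /* order stores before flag */
        *flag = 1;
        wait(NULL);
        shm_unlink(name);
        return 0;
    }

The point of the analogy is simply that once memory is shared, the “protocol” collapses to stores on one side and loads on the other.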


We spoke with Emilio Billi, CTO and founder of the San Jose-based startup, who has picked up a thing or two over the last twenty years of addressing performance bottlenecks at the storage and network levels. In addition to developing the HiDRA “personal supercomputing” system and its companion code, he helped develop the HyperTransport Consortium’s HyperShare scalable network technology and remains one of the leads behind that effort.

With his new company out of stealth and rushing headlong into a well-established storage and network ecosystem to serve the needs of both HPC and demanding big data environments, he admits that they’re up against some challenges. However, he makes the argument that A3Cube’s technology, which uses PCIe as the interconnect via an enhanced NIC, can alter the price, performance, and programmability of modern HPC and data-intensive systems.

Billi says that five years ago, when he began work on the technology behind A3Cube, he was looking for a way to combine storage and compute and fit both within the massively parallel analytics software he saw coming. As he explained, doing this demanded the creation of “a 3D torus network interconnection data plane (it’s more of a data plane than an interconnection network) that has all the characteristics of supercomputing fabric but was designed specifically to create a massively parallel storage architecture.”
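As a quick illustration of what the 3D torus topology buys you: every node has exactly six nearest neighbors, and the wrap-around at the edges keeps hop counts bounded as the machine grows. A small sketch of the neighbor arithmetic (the dimensions are made up for illustration and say nothing about how RONNIEE actually routes):

    /* Neighbor addressing in a 3D torus of X*Y*Z nodes: each node (x,y,z)
     * has six nearest neighbors, with coordinates wrapping at the edges.
     * Dimensions below are arbitrary, purely for illustration. */
    #include <stdio.h>

    #define X 4
    #define Y 4
    #define Z 4

    static int rank_of(int x, int y, int z) {       /* linearize (x,y,z), with wrap-around */
        return ((x % X + X) % X) + ((y % Y + Y) % Y) * X + ((z % Z + Z) % Z) * X * Y;
    }

    int main(void) {
        int x = 0, y = 3, z = 2;                    /* an example node */
        printf("node %d neighbors: %d %d %d %d %d %d\n",
               rank_of(x, y, z),
               rank_of(x - 1, y, z), rank_of(x + 1, y, z),   /* -/+ X */
               rank_of(x, y - 1, z), rank_of(x, y + 1, z),   /* -/+ Y */
               rank_of(x, y, z - 1), rank_of(x, y, z + 1));  /* -/+ Z */
        return 0;
    }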

The argument is that storage systems need to take the leap from a few standalone engines to thousands of individual storage devices running in parallel to address the needs of true scale-out storage. This is managed by the PCIe-based approach they call the RONNIEE in-memory network.

As he describes it, this is a completely new paradigm for networks, one that gives the whole application transparent memory-to-memory direct connections. This “in-memory network discards the protocol stack bottleneck involved in remote memory access, which cuts the latency down even for user-level software.” The key is that the TCP/UDP stack is snatched from view and replaced with their own memory-to-memory mapped TCP/UDP socket as the performance hinge. He says it’s still possible to use RDMA if desired, but the abstraction is meant to ease programmability.

In other news…

Maxeler and the Science and Technology Facilities Council (STFC) are collaborating on a project, funded by the UK Department for Business, Innovation and Skills, to install the next generation of supercomputing technology in a new facility at the Daresbury Laboratory. The facility will focus on energy-efficient supercomputing, offering orders of magnitude improvement in performance and efficiency to give UK industry an edge in using a technology designed for the move toward exascale computing.

The dataflow supercomputer will feature Maxeler-developed MPC-X nodes capable of an equivalent 8.52 TFLOPS per 1U and 8.97 GFLOPS per watt, a performance per watt that would top today’s Green500. MPC-X nodes build on the previous-generation Maxeler technology deployed at JP Morgan, where real-time risk computation equivalent to 12,000 x86 cores was achieved in 40U of dataflow engines. The new MPC-X supercomputer will be available in summer 2014 and will focus on medical imaging and healthcare data analytics, manufacturing, industrial microscopy, large-scale simulations, security, real-time operations risk, and media/entertainment.
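A quick back-of-envelope check on those two figures, assuming they describe the same 1U box (the announcement doesn’t say so explicitly): 8.52 TFLOPS at 8.97 GFLOPS per watt implies roughly 950 watts per 1U node.

    /* Back-of-envelope: implied power per 1U MPC-X node, assuming the
     * quoted throughput and efficiency figures refer to the same box. */
    #include <stdio.h>

    int main(void) {
        double tflops = 8.52;                             /* quoted TFLOPS per 1U */
        double gflops_per_watt = 8.97;                    /* quoted efficiency    */
        double watts = tflops * 1e3 / gflops_per_watt;    /* GFLOPS / (GFLOPS/W)  */
        printf("implied power per 1U: ~%.0f W\n", watts); /* ~950 W */
        return 0;
    }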

Nova Southeastern University’s (NSU) Graduate School of Computer and Information Sciences has received a multimillion-dollar IBM supercomputer that will place NSU’s research at the forefront of computational biology, data mining, graphic visualization and software engineering.

Each of the 32 nodes will sport 16 Power CPUs with 256 GB of RAM. Each CPU has two processor units that can run two threads each. The machine is water-cooled using internal chilled plates and a rear cooling door on each rack. The software stack consists of AIX, General Parallel File System (GPFS), C++, Fortran, IBM Parallel Environment Runtime (PE), Engineering and Scientific Subroutine Library (ESSL), Parallel Engineering and Scientific Subroutine Library (PESSL), and Tivoli Workload Scheduler LoadLeveler. The university says it will use the system for scientific projects as well as to help train the next generation of HPC-skilled graduates.
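Taking the quoted per-node figures at face value, the system works out to 2,048 hardware threads and 8 TB of aggregate RAM; a trivial tally:

    /* Aggregate capacity of the NSU system from the per-node figures
     * quoted above, taken at face value. */
    #include <stdio.h>

    int main(void) {
        int nodes = 32, cpus_per_node = 16, units_per_cpu = 2, threads_per_unit = 2;
        int ram_gb_per_node = 256;
        printf("hardware threads: %d\n",
               nodes * cpus_per_node * units_per_cpu * threads_per_unit);  /* 2048 */
        printf("aggregate RAM: %d GB\n", nodes * ram_gb_per_node);         /* 8192 GB = 8 TB */
        return 0;
    }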

ScaleMP updates vSMP with Version 5.5 and uses the word “configurator” in a sentence, which is awesome. In addition, new pricing has been announced.

vSMP Foundation Version 5.5 is currently available for download or purchase. In addition to that open, online configurator, highlights of the latest version include:

  • Enhanced hardware support
  • Broader IO support options with the AnyIO subsystem. With AnyIO, customers can enable aggregation with almost any device:
      • Any network device, such as 10GigE cards as well as InfiniBand cards
      • PCI flash devices such as Fusion-io
      • GPUs/accelerators such as Intel Xeon Phi or NVIDIA GPUs
  • Enhanced private interconnect options:
      • Mellanox Connect-IB
      • Improved performance for Intel True Scale
  • Support for recent Intel Xeon processors: Ivy Bridge – E5-2600 v2 and E7-x8xx v2
  • Enhanced performance for IO-intensive and large-memory workloads
  • New flexible pricing model allowing lower price points

For the Presenter in You…

Just a couple of reminders: Student Cluster Competition applications are now being accepted, and the PRACE 8th Call for Proposals closes with larger allocations on all systems.

Thanks again for tuning in this week—back again Monday with more podcasts, announcements and in-depth features.
