This Week in HPC News

By Nicole Hemsoth

March 28, 2014

This week’s HPC news wrap comes from high above the U.S. during a fly-back from the GPU Technology Conference, which took place in San Jose, California. NVIDIA touted record attendance for the event, which is in its fifth year. Eye candy aside (there’s always plenty to be found at a graphics-centric show), we were thrilled with the level of high-end computing investment in terms of talks, poster sessions, and new product sneak peeks.

As we discussed in some detail around a few key announcements on the interconnect and future roadmap fronts, there was no shortage of HPC to be found. In fact, the supercomputing sessions had been stepped up considerably, in part due to the efforts of Jack Wells of Oak Ridge, who chaired the extreme-scale GPU computing series at GTC this year. More on that in the context of GPU usage on Titan in particular can be found here.

While a great deal of content at GTC focused on HPC, there was also quite a bit of conversation around large-scale data analysis, graph analytics, and the future of platforms designed to tackle massive datasets with high-performance approaches. Though not GPU-specific, just before leaving for GTC we spoke on the podcast with Dr. Geoffrey Fox of Indiana University about how the two worlds of HPC and “big data” are blending (and also at odds), a great overview for those interested.

We’ll be better equipped to put GPU and accelerator/coprocessor momentum in HPC (not to mention all of the work being done to address data-intensive computing needs) into a more focused spotlight in early April, when we catch up with IDC at its User Forum event in Santa Fe.

While we were in GPU land, the news cycle refreshed with a few important system upgrade and new build announcements. Without further delay…

This Week’s Top News Items

Fujitsu put the finishing touches on a supercomputer ordered by the SPring-8 Center, a part of RIKEN, Japan’s largest comprehensive research institution.

At the core of the new system is the FUJITSU Supercomputer PRIMEHPC FX10. Due to commence operation in April 2014, it will have a theoretical peak performance of 90.8 teraflops (TFLOPS).
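As a rough illustration (ours, not Fujitsu’s), a theoretical peak figure like 90.8 TFLOPS is just the product of node count, cores per node, clock rate, and flops per core per cycle. Using the FX10’s published per-node specs (16-core SPARC64 IXfx at 1.848 GHz, 8 flops per core per cycle, about 236.5 GFLOPS per node), a back-of-the-envelope check suggests a system of roughly 384 nodes; the node count is inferred, not stated in the announcement.

```python
def peak_tflops(nodes, cores_per_node=16, clock_ghz=1.848, flops_per_cycle=8):
    """Theoretical peak in TFLOPS: nodes x cores x clock x flops/cycle.

    Per-node defaults are Fujitsu's published PRIMEHPC FX10 (SPARC64 IXfx)
    figures; the node count below is an inference, not from the announcement.
    """
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# ~384 FX10 nodes would account for the quoted 90.8 TFLOPS peak
print(round(peak_tflops(384), 1))  # 90.8
```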

The RIKEN SPring-8 Center currently plans to use the K computer to analyze the enormous volumes of data being generated by the SACLA X-ray free-electron laser, with the goal of understanding the structures and functions of nanomaterials.

A £43 million supercomputer has been announced at the University of Edinburgh. The system will provide high performance computing support for research and industry projects in the UK.

ARCHER (an acronym for Advanced Research Computing High End Resource) will help researchers carry out sophisticated, complex calculations in diverse areas such as simulating the Earth’s climate, calculating the airflow around aircraft, and designing novel materials.

The French Alternative Energies and Atomic Energy Commission (CEA) – working on behalf of F4E to implement and run the datacenter for nuclear fusion at Rokkasho in Japan – is expanding the power of the Helios supercomputer by equipping it with additional bullx nodes featuring Intel Xeon Phi coprocessors.

Helios, which is designed and operated by Bull, supports research work aimed at controlling nuclear fusion, with the goal of refining a sustainable energy source that produces no carbon dioxide emissions or other greenhouse gases. The system provides modeling and simulation capacity which is open to all European and Japanese researchers under the ‘Broader Approach’, a research program that complements the international cooperative ITER program.

Super Micro has debuted the first server of its new Ultra Architecture SuperServer series, the 2U 2-Node UltraTwin. This new 2U SuperServer features two hot-swappable 1U nodes, each supporting dual Intel Xeon E7-2880 v2 processors, up to 1TB of memory in 32x DIMM slots, 2x 2.5″ NVMe SSDs, 8x 12Gb/s SAS 3.0 2.5″ HDD/SSDs, PCI-E 3.0 expansion in 2x full-height, half-length slots plus 1x MicroLP card, and onboard support for 2x 10GBase-T ports.

UltraTwin supports redundant 1280W (1+1) Platinum Level high-efficiency (95%) digital switching power supplies powering new proprietary serverboards designed to maximize compute/memory density and eliminate CPU pre-heat. High core counts, large memory capacity, and accelerated storage technologies, combined with wide I/O bandwidth, make this new system well suited for virtualization and memory-bandwidth-intensive applications in datacenter and HPC clusters.

The NSF has awarded a $500,000 grant to researchers at Texas Tech University to develop a new supercomputer prototype that could lead to more efficient data-intensive computing and speed up the scientific discovery cycle.

The team’s goal is to create a supercomputer that will enable academic departments, cross-disciplinary units, and collaborators to analyze their data and put it to use with accuracy, speed, and efficiency. In data-intensive workloads, researchers often spend the majority of their time moving and manipulating data rather than doing actual computing; the time spent on computation is frequently far smaller than the time spent on data access and movement.
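To see why data movement can dominate, consider a toy cost model (ours, not the Texas Tech team's): a job streams a dataset from storage, then computes on it. With plausible but entirely illustrative numbers for bandwidth, arithmetic intensity, and sustained compute rate, I/O time dwarfs compute time:

```python
# Toy cost model with illustrative numbers (not from the NSF project):
# compare time spent moving a dataset vs. time spent computing on it.
dataset_gb = 1000.0        # 1 TB of input data
io_bw_gbps = 1.0           # sustained storage/network bandwidth, GB/s
flops_per_byte = 1.0       # arithmetic intensity of the analysis
compute_gflops = 500.0     # sustained (not peak) compute rate

t_io = dataset_gb / io_bw_gbps                            # seconds moving data
t_compute = dataset_gb * flops_per_byte / compute_gflops  # seconds computing

print(f"I/O: {t_io:.0f}s  compute: {t_compute:.0f}s")  # I/O: 1000s  compute: 2s
```

Under these assumptions the machine computes for two seconds out of a thousand, which is the imbalance a data-intensive system design tries to correct.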

On the Road

We are booked and ready to roll for a few upcoming events, including the IDC User Forum in Santa Fe and of course, the International Supercomputing Conference in Leipzig, Germany. In between, there are a few other events and happenings on the horizon:

PRACE Announces Summer of HPC

Speakers Announced for Upcoming Leverage Big Data 2014 Summit

A Final Note…Our Sympathies

Ricky Kendall, former Group Leader for Scientific Computing and NCCS Chief Computational Scientist, passed away on Tuesday, 18 March 2014, following a heart attack. He was 53 years old. Ricky was critical to building the Oak Ridge Leadership Computing Facility, and building our Scientific Computing Group in particular. His ‘whatever it takes’ attitude clearly helped set the tone for the success of what has been a very ambitious Leadership Computing initiative. Indeed, Ricky was formally recognized for his leadership at the ORNL 2011 Honors and Awards ceremonies.
