Filippo Mantovani on What’s Next for Mont-Blanc and ARM

By John Russell

July 6, 2015

Firing up the Mont-Blanc prototype in mid-June at the Barcelona Supercomputing Center (BSC) was a significant milestone in the European effort to base HPC systems on energy-efficient architecture. Mont-Blanc program coordinator Filippo Mantovani was quoted in the release announcing the prototype as saying, “Now the challenge starts because with this platform we can foresee how inexpensive technologies from the mobile market can be leveraged for traditional scientific high-performance workloads.”

Begun in 2011, the Mont-Blanc Project is a European effort to explore new ways of achieving energy-efficient architecture for supercomputing (see the 2013 Mont-Blanc paper, “Supercomputing with Commodity CPUs: Are Mobile SoCs Ready for HPC?”). Recently, the project received a three-year extension covering further development of the OmpSs parallel programming model to automatically exploit multiple cluster nodes, transparent application checkpointing for fault tolerance, support for ARMv8 64-bit processors, and the initial design of the Mont-Blanc exascale architecture.
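The OmpSs model the project is extending expresses parallelism as tasks annotated with data dependencies, which a runtime turns into a task graph and schedules across the available cores (and, with the appropriate target clauses, accelerators). The following is only a minimal sketch of that style, assuming the Mercurium compiler and Nanos++ runtime that implement OmpSs; the helper functions and array size are purely illustrative, and with a plain C compiler the pragmas are ignored and the code simply runs serially.

```c
#include <stdio.h>

#define N 1024

/* Illustrative helpers, not taken from any Mont-Blanc code. */
static void init(float *v, int n)           { for (int i = 0; i < n; i++) v[i] = (float)i; }
static void scale(float *v, int n, float a) { for (int i = 0; i < n; i++) v[i] *= a; }

int main(void) {
    static float v[N];

    /* Each pragma declares a task and its data dependencies over the
     * array section v[0;N] (start;length); the runtime orders the two
     * tasks because the second reads what the first writes. */
    #pragma omp task out(v[0;N])
    init(v, N);

    #pragma omp task inout(v[0;N])
    scale(v, N, 2.0f);

    /* Block until all outstanding tasks have completed. */
    #pragma omp taskwait

    printf("v[0] = %.1f, v[N-1] = %.1f\n", v[0], v[N - 1]);
    return 0;
}
```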

The prototype installed in the Torre Girona chapel comprises two racks containing 8 standard BullX chassis and 72 compute blades fitting 1080 compute cards, for a total of 2160 CPUs and 1080 GPUs. The heterogeneous architecture of the Mont-Blanc prototype takes advantage of computing elements (CPUs and GPUs) developed by ARM and integrated by BULL under the design guidance of all Mont-Blanc partners.
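Those figures hang together if each chassis holds 9 blades, each blade carries 15 compute cards, and each card pairs a dual-core CPU with a single GPU; note that this per-chassis and per-card breakdown is inferred from the published totals rather than quoted from the announcement. A quick sanity check:

```c
#include <assert.h>
#include <stdio.h>

int main(void) {
    /* Per-chassis, per-blade and per-card counts are inferred from the
     * published totals (72/8 = 9, 1080/72 = 15, 2160/1080 = 2). */
    const int chassis = 8;
    const int blades  = chassis * 9;   /* 9 blades per chassis    */
    const int cards   = blades * 15;   /* 15 compute cards/blade  */
    const int cpus    = cards * 2;     /* dual-core SoC per card  */
    const int gpus    = cards * 1;     /* one GPU per card        */

    assert(blades == 72 && cards == 1080 && cpus == 2160 && gpus == 1080);
    printf("%d blades, %d cards, %d CPUs, %d GPUs\n", blades, cards, cpus, gpus);
    return 0;
}
```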

This use of the ARM architecture is an early demonstration that it may have applicability at the high end of computing. HPCwire talked with Mantovani about some of the challenges and promise that lie ahead.

What further enhancements to the ARM architecture are needed to maintain progress towards higher performance and what changes do you expect over the next few years?

It depends on which ARM processors we are looking at. Enhancements of mobile Systems on Chip (SoCs) are driven by the big producers of mobile devices (Apple, Samsung, Huawei, etc.). From this market we will see surprisingly good and increasingly powerful SoCs, but I consider it unlikely that one of them will be integrated as-is in a high-end HPC system, unless some of these big players want to enter the HPC market. Due to its cost effectiveness, I [still] consider [that] mobile technology is extremely interesting for compute-intensive embedded applications as well as for small labs and companies looking for cheap/mobile/easy scientific computation, not necessarily in the HPC area.

If we are looking at ARM processors in the server market, then things are slightly different. The ARM-based chips for servers, in fact, seem to evolve fast and [are becoming] more popular (X-Gene, Cavium ThunderX). Strangely enough, I consider it more urgent to have reliable and unified software support for the ARM platforms appearing on the market than to add specific features to the silicon. This support would allow ARM technology to be “better socially accepted” within the HPC community. In this sense, Mont-Blanc is going to contribute with its system software stack and programming model, but in terms of compilers a strong contribution from IP designers and SoC producers is [still] required.

What are the missing or weaker parts of the HPC ecosystem required to support continued progress of the ARM-based architecture approach? How are those pieces likely to be developed or strengthened?

Decoupling the production of HPC solutions among IP providers, SoC producers and system integrators can increase competitiveness, with benefits for the diversification of solutions and prices; but it can also lead to fragmentation. HPC system integrators are mostly conservative: they are definitely not used to working with mobile technology, and ARM-based server solutions are still not 100% in the production lines of the big HPC players. We saw some interesting movement during the last SC in New Orleans, and I really hope to see even more activity in this direction soon at ISC in Frankfurt.

I think that the real difference could now be made by good, large-scale, stable and, most importantly, open-source software support for the ARM architecture, especially for HPC. I am thinking of compilers, support for hardware counters, parallel debuggers, performance analysis tools, etc., but also programming models that can handle the proliferation of threads, the heterogeneity, and the different ARMv8 implementations appearing on the market. In this sense, Mont-Blanc is making a huge effort to port and promote not only the development tools, but also the OmpSs programming model.
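To make “support for hardware counters” concrete: on Linux, performance analysis tools of the kind Mantovani mentions ultimately sit on top of the kernel’s perf_event interface, which in turn needs a PMU driver for each ARM implementation. The sketch below is illustrative only, not part of the Mont-Blanc stack; it counts retired instructions around a region of code using that interface.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Thin wrapper: glibc provides no perf_event_open() symbol of its own. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags) {
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type           = PERF_TYPE_HARDWARE;
    attr.size           = sizeof(attr);
    attr.config         = PERF_COUNT_HW_INSTRUCTIONS;  /* retired instructions */
    attr.disabled       = 1;
    attr.exclude_kernel = 1;

    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile double x = 0.0;                     /* region of interest */
    for (int i = 0; i < 1000000; i++) x += i * 0.5;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) != sizeof(count)) perror("read");
    printf("instructions retired: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```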

Given the prospect of reduced cost – power and hardware – do you expect ARM-based HPC to further ‘democratize’ HPC and spur adoption by industry sectors and smaller companies previously unable to afford advanced compute resources?

HPC remains mostly an “elite” market. I think, however, that there are several companies and small labs that have HPC-like problems and are looking for accessible compute solutions. In this sense, yes, I believe that ARM-based scientific computation has great potential. You ask about adopters? I do not have a crystal ball, but I see automotive as a potentially growing market. Another field that could take advantage of cost-effective solutions could be personalized medicine. As I said, I see the potential, but I do not know how fast each of these communities will react to new technologies appearing on the market.

Maybe less directly profitable, but I think we should not ignore the educational impact of parallel ARM-based platforms. Parallella is a worldwide example, but the fact that a team of six students will take part in the “Student Cluster Competition” at ISC’15 with an ARM-based cluster (part of the Mont-Blanc prototype), for the first time in the history of the contest, must also be taken into account. Parallel, accessible and powerful platforms will help a new generation of students grow from day zero thinking in parallel and taking power limitations into account.

What do you see as the most significant technical problems the Mont-Blanc project must solve now to achieve the next level of performance? Will new technologies be needed to solve some of these issues?

I think that we can still extract a significant amount of information from our “large” prototype: performance evaluation at the level of the compute node, the full system, the applications, fault tolerance, energy to solution, and programmability. We will certainly continue studying our unique platform.

We will approach the next level of performance by exploring the ARM 64-bit instruction set, mostly with platforms available in both markets, server and mobile. On the software side we will continue the exploration using a larger and more complete set of performance analysis tools and boosting our task-based programming model, OmpSs.

Considering the hurdles ahead, do you think an exascale system based on the ARM/GPU architecture will be built, and roughly when do you think we might expect it? Will we ever see such a system in the Top500?

In general, for classical HPC, I consider [the] exascale target still too blurry to give a clear prediction. Even less, unfortunately, can I foresee concerning ARM/GPU-based solutions. For sure the exascale race is wider than simply finding the right technology for floating point computations: it involves memory technology, the interconnection network, distributed I/O, fault tolerance and many other hardware and software aspects. In this wider approach to next-generation HPC systems, I consider ARM one of the players with great potential.

What were the important lessons learned from the End-User Group – Rolls Royce, for example – and how will they inform Mont-Blanc development going forward? Can you identify specific issues that will need to be addressed?

The End-User Group (EUG) is an extremely valuable dissemination tool for the project, but most importantly a virtual gate for letting companies enter the development of the project. The fixed appointments are a yearly meeting with the end-users, plus the training sessions that the project opens to the partners and to the EUG as well.

You mentioned Rolls Royce: we had a very fruitful interaction during the first year of collaboration, so we decided to invite a representative to present Rolls Royce’s work on one of the Mont-Blanc mini-clusters at the satellite event of PRACEdays in Dublin. The title of the workshop was emblematic, “Enabling Exascale in Europe for Industry”, and we really wanted to leave space to one of our end-users, to understand the tests performed and listen to their requirements.

I think it has been a really productive interaction, and I hope that from now on, with 1000 nodes of the Mont-Blanc prototype up and running, this can evolve further, involving several other companies interested in testing the Mont-Blanc platforms.
