GPUs Add Up For ARM Chips In HPC

By Timothy Prickett Morgan

June 23, 2014

The first wave of credible 64-bit ARM processors is coming to market late this year or early next, and as is usually the case, the high-performance computing community is getting first crack at figuring out how these chips might be deployed to run various kinds of simulations more efficiently or more cost effectively.

Applied Micro, which has first mover status in the 64-bit ARM server chip race with its X-Gene 1, is teaming up with Nvidia, maker of the Tesla GPU accelerators, at the International Supercomputing Conference (ISC) in Leipzig, Germany, to promote X-Gene and Tesla as the first of several dynamic duos. Three vendors – Cirrascale, E4 Computer Engineering, and Eurotech – are previewing hybrid ARM-Tesla systems at the conference, and others will no doubt follow as more ARM chips come to market towards the end of this year and into early next year.

Given the ubiquity of Xeon processors in the supercomputing space, Nvidia has to integrate well with rival Intel’s chips and has to compete against the Xeon Phi parallel X86 coprocessors, too. But Nvidia, like many system buyers, wants a second or third option when it comes to processors, and that is why Nvidia was a founding member of the OpenPower Foundation, which seeks to establish multiple sources of IBM’s Power8 and follow-on processors and to link accelerators tightly to them. Nvidia is waving the ARM banner high as well, and wants to be the accelerator of choice for ARMv8 platforms.

“GPUs make 64-bit ARM competitive in HPC on day one,” explains Ian Buck, general manager of GPU computing software at Nvidia. “We are clearly seeing viable and compelling ARM64 platforms coming online. It is obvious that there is excitement around ARM, and there are two reasons for that. One is that we haven’t had new, innovative CPUs for a while. Some of the ARM architectures are going up to 24 cores, and they are playing with what is on die, what is off, and Broadcom and Cavium come from the networking world and there are lots of networking angles they can play. The second reason for the excitement is choice. ARM represents choice, and a very diverse one.”

[Image: nvidia-arm-hpc]

While network devices like to have plenty of threads, the chips used in such gear are not generally equipped with lots of floating point math processing capability, says Buck. Nvidia, you can quickly guess, wants its Tesla to be the coprocessor of choice for 64-bit ARM platforms. Having created the CUDA programming environment, which supports 64-bit ARM chips starting with the 6.5 release, and having built up a library of hundreds of third-party simulation and analytics workloads already ported to hybrid processor-GPU systems, Nvidia thinks it is well placed to help customers port their applications to ARM-Tesla hybrids.

“Based on our experience with ARM to date, the porting seems to go fairly quickly if you have well-structured code,” says Buck. “A lot of HPC codes have been around long enough that they don’t have a lot of intrinsics in there, the X86isms, and code seems to move fairly easily. If the code is already GPU-accelerated, then the performance just carries straight over. These ARM64 chips can drive full GPU performance.”
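
To make Buck’s point about portability concrete, here is a minimal sketch of the sort of code he is describing: a plain CUDA vector add whose host code contains no X86 intrinsics, so the same source should compile with nvcc on either an X86 or a 64-bit ARM host once a CUDA 6.5 or later toolkit with ARM64 support is installed. The kernel, array size, and values here are illustrative, not drawn from any of the applications mentioned in this story.

    // Architecture-neutral CUDA sample: the host code has no X86 intrinsics,
    // so the same source builds on an x86_64 or aarch64 host (CUDA 6.5+ for ARM64).
    #include <cstdio>
    #include <cuda_runtime.h>

    // Simple element-wise vector add running on the GPU.
    __global__ void vadd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // 1M elements, illustrative size
        size_t bytes = n * sizeof(float);

        // Plain C++ host allocations -- nothing architecture specific here.
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f (expect 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }

If a source file like this builds and runs unchanged on the Mustang-based development machines described below, that is the “carries straight over” effect Buck is talking about: the heavy lifting stays on the GPU regardless of which processor hosts it.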

Applied Micro is going to have plenty of competition in the ARMv8 processor space, with AMD, Cavium, and Broadcom all putting forth very strong contenders to go up against the hegemony of Intel’s Xeon processors and its very credible defensive position with Atom chips for modest compute and low-power needs. Intel has a substantial lead in chip manufacturing processes – something between one and two nodes, depending on how you want to count it – and is behaving as if it has a bunch of AMDs on its heels. Never before in its history has Intel been so willing to tweak its processor designs to make them better fit the workloads of supercomputing and hyperscale customers alike, from adding special instructions to Xeons to baking special versions of the Xeons that run hotter or clock higher to actually welding an FPGA into a Xeon chip, as Intel last week announced it was going to do.

This newfound openness is one way Intel is going to counter the onslaught of different 64-bit ARM processors and the various ways their makers will accelerate workloads using GPUs, DSPs, FPGAs, and other specialized circuits. In effect, Intel is adopting the malleable approach of the ARM community to defend against ARM processors.

The initial X-Gene 1 processor from Applied Micro has been sampling since early 2013; production wafers for the chip were started at the end of March, and production chips are due around now. The X-Gene 1 chip is implemented in a 40 nanometer process at Taiwan Semiconductor Manufacturing Co. (TSMC); it has eight custom ARMv8 cores, designed by Applied Micro itself, on each system-on-chip. The cores on the X-Gene 1 run at 2.4 GHz, and Sanchayan Sinha, senior product manager at Applied Micro, tells HPCwire that in terms of single-threaded performance, the X-Gene 1 has about the same level of oomph as a four-core “Haswell” Xeon E3 and about the same memory bandwidth as a “Sandy Bridge” Xeon E5.

Sinha stressed that these were very rough comparisons and that real benchmarks would eventually result in harder figures than these approximations. That is, in fact, what the development systems being shown off at ISC’14 are all about. The company is working with server partners to run the High Performance Conjugate Gradients (HPCG) benchmark, which is being proposed as a follow-on to the more widely used Linpack parallel Fortran matrix math test, on X-Gene 1 systems. Sinha says that Applied Micro and Nvidia will be able to show that an X-Gene 1 plus a Tesla K20 coprocessor will be equivalent to an X86 processor plus the same Tesla K20 floating point motor.
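
For those who have not followed the benchmark debate, Linpack stresses dense matrix math that keeps floating point units busy, while HPCG is built around a preconditioned conjugate gradient solver whose run time is dominated by sparse matrix-vector products and other memory-bound operations. The sketch below is a hedged illustration rather than code from either benchmark: it shows that dominant sparse kernel written as a CUDA kernel working on a tiny matrix stored in compressed sparse row (CSR) format. On an X-Gene plus Tesla pairing, work like this runs on the GPU, which is why the accelerator rather than the host processor sets the pace.

    // Illustrative CUDA kernel for the sparse matrix-vector product (y = A*x)
    // that dominates HPCG's run time. The matrix is stored in compressed
    // sparse row (CSR) format; this is a sketch, not the benchmark's own code.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void spmv_csr(int nrows, const int *rowptr, const int *col,
                             const double *val, const double *x, double *y)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < nrows) {
            double sum = 0.0;
            for (int j = rowptr[row]; j < rowptr[row + 1]; j++)
                sum += val[j] * x[col[j]];   // one multiply-add per nonzero
            y[row] = sum;
        }
    }

    int main()
    {
        // Tiny 3x3 tridiagonal matrix in CSR form, just to exercise the kernel.
        int    h_rowptr[] = {0, 2, 5, 7};
        int    h_col[]    = {0, 1, 0, 1, 2, 1, 2};
        double h_val[]    = {2, -1, -1, 2, -1, -1, 2};
        double h_x[]      = {1, 1, 1};
        double h_y[3];

        int *d_rowptr, *d_col; double *d_val, *d_x, *d_y;
        cudaMalloc((void **)&d_rowptr, sizeof(h_rowptr));
        cudaMalloc((void **)&d_col,    sizeof(h_col));
        cudaMalloc((void **)&d_val,    sizeof(h_val));
        cudaMalloc((void **)&d_x,      sizeof(h_x));
        cudaMalloc((void **)&d_y,      sizeof(h_y));
        cudaMemcpy(d_rowptr, h_rowptr, sizeof(h_rowptr), cudaMemcpyHostToDevice);
        cudaMemcpy(d_col,    h_col,    sizeof(h_col),    cudaMemcpyHostToDevice);
        cudaMemcpy(d_val,    h_val,    sizeof(h_val),    cudaMemcpyHostToDevice);
        cudaMemcpy(d_x,      h_x,      sizeof(h_x),      cudaMemcpyHostToDevice);

        spmv_csr<<<1, 32>>>(3, d_rowptr, d_col, d_val, d_x, d_y);
        cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);

        printf("y = [%g %g %g] (expect [1 0 1])\n", h_y[0], h_y[1], h_y[2]);

        cudaFree(d_rowptr); cudaFree(d_col); cudaFree(d_val);
        cudaFree(d_x); cudaFree(d_y);
        return 0;
    }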

[Image: X-Gene 1 block diagram]

The X-Gene 2 chip is a rev on the initial design and also includes eight ARM cores, but it is implemented in a 28 nanometer process at TSMC. This process shrink will allow Applied Micro to crank up the clock speed and add more features to its SoC. One interesting feature that the company has divulged it will add to the X-Gene 2 is support for Remote Direct Memory Access (RDMA) on the network ports on the chip. Specifically, the Ethernet ports on the chip will be able to run RDMA over Converged Ethernet (RoCE), which brings the low-latency access of InfiniBand to the Ethernet protocol. This will make the X-Gene 2 chip suitable not only for HPC workloads that are latency sensitive, but also for database, storage, and transaction processing workloads in enterprise datacenters that likewise crave low latency.

Further out, Applied Micro has teamed up with TSMC to use its 16 nanometer FinFET 3D transistor process to create the X-Gene 3. Little is known about this processor except that it will have at least 16 cores on the SoC.

The early revs of the X-Gene 1 were put on development boards called “Mustang” internally by Applied Micro and known as the X-Gene XC-1 outside of the company. The ARM-based HPC systems that are being previewed by Cirrascale and E4 Computer Engineering are based on production-grade X-Gene 1 chips and the Mustang boards.

The Cirrascale development machine puts two Mustang boards and two Tesla K20 or K20X GPU accelerators in a compact 1U server chassis:

[Image: Cirrascale X-Gene development system]

This machine is called the RM1905D in the Cirrascale product catalog, and like other Mustang boards, it supports a maximum of 64 GB of memory for each X-Gene 1 chip across the processor’s two memory slots. The system has four Ethernet ports: three for data and one for system management. Two of the data ports run at 1 Gb/sec and the remaining one runs at 10 Gb/sec; the management port runs at 1 Gb/sec. The Mustang board has one PCI-Express 3.0 x8 slot, which is used to link the processor to the Tesla GPU, and the chassis has room to plug in a single SATA drive over a 6 Gb/sec link. Each node in the chassis has a 400 watt power supply.

The feeds and speeds of E4 Computer Engineering’s EK003 were not available at press time, but Nvidia tells HPCwire that the machine will include two X-Gene 1 system boards in a 3U enclosure that has two Tesla K20 GPU coprocessors, and that the development machine will be aimed at seismic, signal and image processing, video analytics, track analysis, Web applications, and MapReduce workloads.

Cirrascale and E4 Computer Engineering plan to ship their development machines in July, according to Nvidia.

Eurotech has a custom motherboard design using the X-Gene 1 chip that has main memory soldered onto the board to give it a very low profile and therefore high density for its ARM-based Aurora system. The compute elements in this new Aurora machine are based on what the company calls its “brick technology,” and will employ direct hot-water cooling of the components in the brick. It will include a combination of ARM processors and Tesla coprocessors. Further details for this Eurotech Aurora system were not yet available at press time, but we will hunt them down. The company expects to ship production machines later this year.
