Arm Yourselves for Exascale, Part 2

By Michael Wolfe

December 12, 2011

In Part 1, I advocated that we should explore using ARM-architecture mobile processors in HPC for three reasons: innovation (the marketplace will dictate that future innovation focus on mobile systems), federation (the ARM architecture is ubiquitous and available from many vendors), and customization (the mobile market has a strong history of custom parts).

In addition, an ARM processor costs an order of magnitude less than current commodity processors, and can be built to consume less than 10 Watts per multicore chip. It’s worth noting that there is already significant movement in this direction. The Mont-Blanc project, coordinated by the Barcelona Supercomputing Center, is building and experimenting with a prototype cluster using ARM processors to explore the challenges.

However, moving to any new processor architecture is not an easy decision. There are challenges and missing pieces that need to be addressed before we can make the leap, but there are opportunities as well. Here, we explore challenges and opportunities in three areas: processor and system architecture, software, and economics.

Architectural Challenges and Opportunities

If we compare the architectural features of the high-end ARM Cortex-A15 processor to the most common current HPC processors from Intel, AMD and IBM, we find many similarities. ARM processors support virtual memory (with small 4KB and large 64KB pages), a cache hierarchy, cache coherence across multiple cores, and a full set of integer and floating point registers and instructions. The high-end Cortex-A15 supports a modern superscalar, out-of-order execution pipeline. However, some instructions commonly used in performance-sensitive applications to better manage the cache when processing large datasets, such as cache prefetch or nontemporal (noncaching) loads and stores, are not available in the current ARM instruction set. Many embedded processors are used in applications where floating point is unnecessary, but for HPC we should only consider fully functional processors. The table below compares high-end IBM, Intel and AMD processors to the Cortex-A15.
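The cache-management gap matters for exactly the kind of streaming kernels HPC codes are full of. A minimal sketch, using GCC/Clang's `__builtin_prefetch` extension (not part of any ISA): the compiler lowers it to the target's prefetch instruction where one exists, and to nothing where it does not, which is precisely the portability question raised above. The prefetch distance of 64 elements here is an illustrative guess, not a tuned value.

```c
#include <stddef.h>

/* Streaming reduction with an explicit software-prefetch hint.
   __builtin_prefetch(addr, rw, locality): rw=0 requests a read,
   locality=0 marks the data as non-temporal (use once, don't cache).
   On targets without a prefetch instruction this compiles to nothing. */
double sum_with_prefetch(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64], 0, 0);
        s += a[i];
    }
    return s;
}
```

The hint is free where unsupported, so portable code can carry it unconditionally; the cost is only that an ISA without the instruction gets no benefit.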

There are question marks for the ARM Cortex-A15 because no implementations are available yet, and the numbers may depend on the vendor and fab technology. The most striking differences are the lower core count and smaller cache size for the ARM Cortex-A15. A manufacturer could produce a chip with multiple quad-core tiles, effectively increasing total core count and cache size, but the cores in different tiles would not be cache coherent. Also, the ARM NEON SIMD instructions do not currently support double-precision floating point.

Current ARM processors, including the Cortex-A15, are 32-bit processors. One of the reasons to build exascale machines is to process very large datasets, and this will benefit from, if not demand, a true 64-bit processor. ARM processors support large physical memories, but that’s not the same as true 64-bit registers and instructions. There have been rumors of 64-bit ARM processors for the past year; last month, ARM disclosed details of the ARMv8 architecture, which supports both the classical 32-bit ARM instruction set and a new 64-bit execution state with a true 64-bit instruction set, A64.

Importantly, the A64 NEON SIMD instructions support double precision, as well as full IEEE rounding modes, denormalized numbers, and NaNs. Products based on the 64-bit ARM architecture are still in the future, but Applied Micro Circuits Corp. demonstrated the first 64-bit ARM processor implemented on a Xilinx Virtex-6 FPGA. NVIDIA is also reportedly a lead partner for the 64-bit ARMv8 architecture.

ARM-based products are typically systems-on-chip, with variations in the ARM core used and in the selection of devices and interfaces included on the chip. This is both an opportunity and a challenge. One of the advantages of the ARM architecture is the wide selection of vendors supplying parts, so that’s an opportunity. However, each vendor will have a slightly different feature set. Today, when choosing between Intel and AMD, a system vendor or customer may consider the slight differences in instruction sets, cost, performance, maybe the difference in motherboard design or processor interface (QuickPath vs. HyperTransport), but otherwise the features are essentially the same. Between ARM suppliers, the features are potentially much different, making the selection process much more interesting.

ARM+GPU or (more generally) ARM+accelerator is a likely configuration for products aimed at HPC. Accelerator-based systems are becoming increasingly prevalent, and there are several efforts addressing the programming challenges. Current accelerators are NVIDIA and AMD GPUs, and the future Intel MIC will compete directly with them. Now, Texas Instruments seems to be testing the HPC waters with a new multicore DSP. These all connect to the host on the PCI Express bus, which, although a relatively fast I/O bus, is very slow relative to memory speeds. AMD is integrating stream processors (formerly known as GPUs) on the same chip as the processor; right now these are not targeting the highest performance, but the plan seems to be to move in that direction.

We should see more advantages for accelerated computing with tighter integration. But only AMD can integrate an accelerator on chip with AMD processors, and the same holds for Intel. One could integrate an accelerator more closely with the processor over AMD’s HyperTransport (which is open) or Intel’s QuickPath (which is not), but we’ve seen little movement in that direction, in spite of AMD’s short-lived Torrenza initiative. However, ARM vendors will have more opportunities for tighter accelerator integration. NVIDIA’s Project Denver chips will have ARM cores integrated at some level with NVIDIA GPUs, for instance. Adapteva has announced multicore-architecture IP that could be produced as a standalone chip, or possibly included on chip with ARM cores or other devices.

It’s hard to compete with Intel’s silicon technology; arguably no one else has the resources to support advanced process technology at the same pace. While Intel is starting production of 22nm Ivy Bridge processors, targeting delivery in the first half of 2012, most other vendors are still producing microprocessors at 32nm and 45nm feature sizes, or a 0.9 shrink of those. However, ARM is aggressively exploring future technologies, and is working with TSMC on the design of the Cortex-A15 in a 20nm process.

Using mobile processors such as ARM opens the door to new levels of innovation. IBM is building some of the world’s fastest computer systems out of relatively slow (1.6GHz) processors. The Blue Gene/Q design is a carefully managed balance of performance, power and cost, as were its predecessors. With a variety of ARM-architecture chip vendors, system architects will have even more opportunity (and challenge) to innovate and optimize system performance balanced with power, cost and features.

Software

The software story for ARM cores is both good and bad. Various operating systems are available for ARM architecture now, including several distributions of Linux and various real-time and mobile OSes; Microsoft has announced that it will support the next Windows version on the ARM architecture as well. It’s not clear what support is available for the variety of devices that we find in HPC, such as high-performance network interfaces or compute accelerators.

There are several good C and C++ compilers for ARM cores, including GCC and compilers from ARM Ltd.; however, the only Fortran options on ARM cores are GNU Fortran or Fortran-to-C preprocessors. As near as I can tell, there isn’t even an official Fortran ABI yet. Mathworks has some support for the ARM architecture already. The ARM instruction set has special support for just-in-time compiled languages, such as Java, Python, and Perl. Other tools will be needed as well; debuggers are available, and Allinea has just announced ARM support for its tools as part of the Mont-Blanc project.

Other software needs in the HPC space include optimized math (BLAS, LAPACK, more) and communication (MPI) libraries. Unoptimized versions of these can probably be generated directly from open source. At this point, there is a distinct lack of support for the ARM architecture by any third-party library or application vendor, such as ANSYS, CD-Adapco, Gaussian, LSTC, and others.
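To make concrete what an "optimized" math library buys, here is a deliberately naive DGEMM-style kernel, a sketch rather than any vendor's implementation. A tuned BLAS replaces this loop nest with cache blocking and the widest SIMD the ISA offers, which is exactly where the NEON double-precision question above comes in.

```c
/* Naive matrix multiply-accumulate, C += A*B, row-major, all n x n.
   The k-then-j loop order keeps the inner loop streaming through B and C,
   but an optimized BLAS would additionally block for cache and vectorize. */
void dgemm_naive(int n, const double *A, const double *B, double *C)
{
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++) {
            double a = A[i*n + k];
            for (int j = 0; j < n; j++)
                C[i*n + j] += a * B[k*n + j];
        }
}
```

A reference build of open-source BLAS compiles to essentially this; the performance gap between it and a hand-tuned library is routinely an order of magnitude, which is why optimized ports matter for a new architecture.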

This is a classical rock-paper-scissors problem. The software vendor won’t invest in the port until there is sufficient demand, the demand won’t be there until enough customers have these machines, and customers won’t buy the machines until the libraries and applications are available. The minisupercomputer manufacturers of the 1980s all had exactly the same problem. Current HPC suppliers benefit by standardizing on just one or two instruction sets, hence creating sufficient aggregate demand to make the application vendors take notice. Solving this problem for the ARM ecosystem may require a large customer (read government lab) to take the lead.

However, a unique advantage for HPC is that much of the software is under continual development, and is regularly reconfigured, recompiled and rebuilt to improve the model or tune the performance. Many of these codes are community applications that are available in source form, and many more are developed in the same organization where they are used. As a result, the HPC space is not as dependent on binary compatibility or on migration of a large body of proprietary licensed applications. Unlike the general server market, many HPC users are ready to experiment and explore with just the right mix of operating systems and software development tools.

Economics

This brings us to the hard reality of the economics of ARM products, and customization in particular. For the most part, the mobile industry doesn’t deal in standard parts; it thrives on mass customization, producing the right part for each specific market. If we move to adopt ARM-based processors in HPC, we really want a chip with all the parts and interfaces we will use, and without the ones we won’t. Unfortunately, the volume required for really custom HPC parts just isn’t there.

Apple announced that the new iPhone 4S sold more than four million units over its opening weekend, worldwide. If I add up all the cores of the Top500 computers from November 2011, the sum is about 9.2 million; if I add up the processor chips, the sum is about 1.7 million. To get a chip vendor interested in producing a custom part for your market, you’ve got to demonstrate that you have enough volume to support the cost, and that your part is more profitable than any other part the fab plants might produce instead. Just producing the mask set can cost upwards of a million dollars. If you can demonstrate a volume on the order of a million chips per year, you can get the interest of any of a number of vendors. But even if we replaced every processor chip in every computer on the Top500 list in a single year, we would only just reach the volumes required.
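The amortization arithmetic behind that argument can be sketched directly. The figures are the rough estimates above, not quotes from any foundry, and the function name is mine, not an industry term.

```c
/* Non-recurring engineering (NRE) cost spread over annual chip volume.
   At mobile volumes (~100M units) a $1M mask set costs a penny per chip;
   at Top500-replacement volumes (~1.7M chips) it is barely tolerable. */
double nre_per_chip(double mask_cost_usd, double annual_volume)
{
    return mask_cost_usd / annual_volume;
}
```

At one million chips per year, the mask set alone adds a dollar to each part, before design, validation, and margin, which is why the volume threshold sits roughly there.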

Given the interest by the server market for lower-power alternatives, there are likely to be several vendors supplying ARM-based parts tailored for enterprise servers, such as HP’s Redstone system, designed with Calxeda ARM architecture SoCs. HPC may end up in much the same situation it is in with x86: having to choose between two (Intel and AMD) or more (all the ARM IP licensees) vendors delivering chips with the same instruction set, but different cost/power/performance profiles. We would give up on-chip customization, but still benefit from any cost and power advantages.

The benefits of using mobile processors for HPC are power and cost. The power load of mobile processors is much lower than the high-performance Intel or AMD chips in most of the Top500 systems, typically well under ten Watts, instead of 50-100 Watts or more. Moreover, at sufficient volume, the cost of the chips themselves can be significantly lower, tens of dollars instead of hundreds or thousands of dollars. Some of this advantage is reduced if it takes multiple chips to reach the same performance as a single Intel or AMD processor, but unless that multiple is an order of magnitude, mobile processors still come out ahead. If the lower processor cost and power load results in a lower purchase price and lower cost of operation, the HPC market itself could grow.
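The break-even reasoning above reduces to a one-line inequality. This is a sketch with hypothetical figures, ignoring interconnect and memory power, which would shift the crossover in practice.

```c
/* A mobile part still wins on power if the chips needed to match one
   server processor, times the power per mobile chip, stays below the
   server chip's power draw.  All inputs are illustrative estimates. */
int mobile_wins_on_power(double server_watts, double mobile_watts,
                         double chips_needed)
{
    return chips_needed * mobile_watts < server_watts;
}
```

With the article's numbers, a 100-Watt server chip against sub-10-Watt mobile chips, the mobile side keeps winning until it takes roughly ten chips to match one, which is the "order of magnitude" threshold stated above.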

Summary

It’s time to explore alternatives to current standard processors for HPC, and the ARM architecture appears to be the best, and probably the only, viable candidate. However, there are challenges and opportunities if we choose to go this route. Even with only two x86 vendors, there are instruction set, performance and interface differences; with the ARM architecture, the number of suppliers is quite a bit larger, and the differences will be magnified. However, the opportunities for innovation and integration of accelerators are quite exciting.

Just as it took years to get all the software we needed for HPC on our large-scale Linux clusters, it will take time to port to the ARM architecture, and convince the third-party software vendors to port their software. To make this economically feasible, we need to settle on a small set of common features and operating systems.

Finally, the economics may not play fully in our favor. We benefit from commodity x86 parts because most of these are sold in personal computers or workstations or servers. If we find standard ARM-based parts that fit our needs, we can enjoy the same benefits. But standard parts don’t allow for the customization that is another important potential benefit, and customization reduces the volume to a level that is no longer economically viable. However, the potential for lower purchase price and cost of operation is quite appealing, and may draw new customers to HPC. It may also force the mainstream vendors to focus more on lower cost and lower power parts, giving essentially the same benefits as a move to mobile processors. It will be an interesting next few years, as the HPC community explores alternatives on the way to Exascale.
