Analyst Weighs In on 64-Bit ARM

By Michael Feldman

August 16, 2012

In a recent report at Real World Technologies, chip guru David Kanter dissects the new 64-bit ARM design (ARMv8) and what it might mean for the IT landscape. His take on the architecture is almost uniformly positive, noting that not only did the designers manage to develop an elegant instruction set that is backward compatible with the existing ISA, but they also took the extra step of jettisoning a few of the poorly designed features of the 32-bit architecture.

Announced in October 2011, 64-bit ARM is the biggest makeover the processor architecture has received in its 26-year history. The first implementation in 1985, ARM1, was a 32-bit chip developed for Acorn Computers (ARM = Acorn RISC Machine). Although the architecture never caught on in the PC business, its simple, low-power RISC design made it a natural for embedded/mobile SoC applications and microcontrollers.

While the server and personal computer world moved on to 64 bits, ARM was safely ensconced in the embedded/mobile space where 32 bits of addressing (basically 4 GB) was plenty.  But now that devices like tablets and other mobile gadgets are pushing up against this limit, a larger address reach will soon become necessary. Also, the expanded address reach will allow ARM chips for the first time to enter the server market and compete against the x86, the processor architecture that has dominated the datacenter for decades.

In a sense, ARM is trying to duplicate the success of the x86 when it made its own jump from 32 to 64 bits in the early 2000s. In that case, the 64-bit Intel Xeons and AMD Opterons ended up displacing much of their high-end RISC-based competition — especially SPARC and Power. If 64-bit ARM ends up cutting into the x86 share of the server market, it would be fitting revenge for the RISC faithful.

As mentioned before, the most critical enabling feature for 64-bit ARM is the extended address space. Although 64 bits could reach 16 exabytes, there’s little application demand to access data at that scale. For the time being, only 48 bits will be used to form an address, which gives software a 256 TB address reach. Presumably, additional address bits can be tacked on in the future as applications scale up.
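The address-reach arithmetic above is easy to verify. A quick sketch (the byte counts follow directly from the bit widths the article cites):

```python
# Address reach for the virtual-address widths mentioned above.
reach_48 = 2 ** 48          # addressable bytes with 48 address bits
reach_64 = 2 ** 64          # addressable bytes with the full 64 bits

TB = 2 ** 40                # one terabyte
EB = 2 ** 60                # one exabyte

print(reach_48 // TB)       # -> 256   (48 bits reach 256 TB)
print(reach_64 // EB)       # -> 16    (64 bits reach 16 exabytes)
```

Each extra address bit doubles the reach, which is why designers can grow the usable width incrementally without changing the 64-bit register format.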

With the ARMv8 design, the integer and floating point structures are also being enhanced, with all general purpose registers extended to 64 bits. The floating point design has been tweaked to support IEEE 754-2008, including additional instructions to make the architecture compliant with the standard.

For vector operations, the changes are more extensive. In the 32-bit spec, the SIMD design (known as NEON) already contained 32 64-bit registers, which could be aliased to 16 128-bit pseudo-registers. For the 64-bit design, that’s been extended to 32 128-bit registers, with the lower half used when only 64-bit values are needed. Not only does that double the capacity of the vector unit, it makes for a somewhat cleaner arrangement. The SIMD design also adds full IEEE 754 support and double precision floating point operations.
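The register-file figures above can be sanity-checked with a little arithmetic showing that the old aliased view and the new layout line up, and that total vector capacity doubles:

```python
# NEON register-file capacity, per the figures in the text.
v7_bits   = 32 * 64      # ARMv7 NEON: 32 registers x 64 bits
v7_alias  = 16 * 128     # ...the same storage viewed as 16 x 128-bit
v8_bits   = 32 * 128     # ARMv8: 32 registers x 128 bits

assert v7_bits == v7_alias      # two views of the same 2048 bits
print(v8_bits // v7_bits)       # -> 2: total vector capacity doubles
```

The aliasing in ARMv7 is why the old arrangement was awkward: a 128-bit operation and two 64-bit operations could silently touch the same physical storage. Giving every 128-bit register its own identity removes that hazard.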

Curiously missing from ARMv8 is multi-threading support, a feature common to all other major server CPUs — x86, SPARC, Power, and even HPC processors like the Blue Gene/Q ASIC (PowerPC A2). Kanter speculates that the ARM designers decided to forgo multi-threading for now since it is notoriously difficult to validate, and the new design already encapsulated a lot of changes. Although the jury is still out on the aggregate benefit of this feature, for certain classes of software the lack of multi-threading support could turn out to be a decided disadvantage.

Overall though, Kanter likes what the ARM developers have come up with, which he says is “clearly a sound design that was well thought out and should enable reasonable implementations.” As he notes though, there are currently no chip implementations around to judge the architecture’s performance in the field.

But within a couple of years, we should see multiple 64-bit ARM SoCs in various segments of the market — everything from high performance computers to workstations. Applied Micro already has an FPGA implementation of ARMv8, which the company unveiled in October 2011 and subsequently demonstrated running an Apache web server. Samsung, Qualcomm, Calxeda, Microsoft, Marvell and NVIDIA have either stated plans to implement a chip or have already bought licenses. At this point, NVIDIA is the only one that has specifically talked about a 64-bit ARM implementation (Project Denver) aimed at HPC, but Calxeda also has high performance computing on its radar.

Samsung is a particularly interesting entrant to the market. The Korean firm is mostly in the consumer electronics business, and its involvement in the server space is currently confined to supplying DRAM and flash components. But Samsung would make a formidable competitor against Intel in the server chip arena if the company funneled its resources there. While Intel has more than twice Samsung’s revenue today, the latter is growing at a much faster rate.

That led industry analyst firm IC Insights to project that Samsung would eclipse Intel as the world’s largest supplier of semiconductor parts by 2014. Coincidentally, that’s the same year the company plans to roll out its first 64-bit ARM server chips. As Kanter concluded: “Certainly, the next few years should be very interesting.”
