HPE’s Memory-centric The Machine Coming into View, Opens ARMs to 3rd-party Developers

By Doug Black

May 16, 2017

Announced three years ago, HPE’s The Machine is said to be the largest R&D program in the venerable company’s history, one that could be progressing toward the epic grandeur envisioned by HP (now HPE) starting in 2014. Certainly, senior HPE managers have high ambitions for the new architecture: nothing less than a new paradigm, called Memory-Driven Computing (MDC), that puts memory, not processing, at the center of the computing platform.

HPE positions The Machine as the architecture for exascale-class performance by the time it’s commercially available in 2019 or 2020, roughly the timeframe the U.S. Department of Energy’s Exascale Computing Project has established for delivering an exascale machine. Alongside completion of the new platform, HPE hopes a broad ecosystem of complementary development will emerge. The prototype unveiled today contains an oceanic 160 terabytes (TB) of memory, capable (according to HPE) of simultaneously working with the data held in every book in the Library of Congress five times over – or approximately 160 million books.

“It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing,” the company said in its announcement.

HPE’s Kirk Bresniker

For all the promise of The Machine, today’s announcement is not startling. HPE has regularly issued updates on The Machine’s development, most recently last November, when the company said it had successfully demonstrated an MDC proof-of-concept prototype. Today’s news: the prototype operates at scale, Kirk Bresniker, Fellow/VP and chief architect of Hewlett Packard Labs, told HPCwire sister publication EnterpriseTech.

“We wanted to build a system big enough to hold really interesting problems in a way that had never been done before,” Bresniker said. “So we somewhat arbitrarily picked a scale – 160 TBs of memory on a memory fabric. Compare that to the paltry 2 GBs of memory on a typical laptop, that’s 80,000 times bigger. No one’s ever constructed a memory system that large before.”
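Both figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch (assuming SI units, i.e., 1 TB = 10^12 bytes; binary units would give 81,920 rather than 80,000) confirming Bresniker’s ratio and the roughly one megabyte per book implied by HPE’s Library of Congress comparison:

```python
# Back-of-the-envelope check of the capacity figures quoted above
# (SI units assumed: 1 TB = 10**12 bytes, 1 GB = 10**9 bytes).

PROTOTYPE_MEMORY = 160 * 10**12   # 160 TB of fabric-attached memory
LAPTOP_MEMORY = 2 * 10**9         # Bresniker's "paltry 2 GB" laptop

ratio = PROTOTYPE_MEMORY / LAPTOP_MEMORY
print(f"Prototype vs. laptop: {ratio:,.0f}x")   # 80,000x, as quoted

# HPE's Library of Congress comparison: ~160 million books in 160 TB
BOOKS = 160 * 10**6
per_book = PROTOTYPE_MEMORY / BOOKS / 10**6
print(f"Implied size per book: {per_book:.1f} MB")  # ~1 MB of text each
```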

Regarding computational power, the prototype has an optimized Linux-based operating system (OS) running across 40 32-core ThunderX2 processors – Cavium’s flagship second-generation, dual-socket-capable, ARMv8-A workload-optimized system on a chip (SoC).

In addition, The Machine has photonics/optical communication links, including the new X1 photonics module, which HPE said are online and operational. It also has software programming tools designed to take advantage of abundant persistent memory.
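The article doesn’t detail HPE’s programming tools, but the style they enable is well understood: data structures live in a large, byte-addressable, persistent address space and are accessed with ordinary loads and stores rather than file or network I/O. Below is a minimal, purely illustrative sketch of that pattern, using Python’s mmap over an ordinary file (here named fam_region.bin, a hypothetical stand-in for a fabric-attached memory region; this is not HPE’s actual toolchain):

```python
# A minimal sketch of the memory-mapped, load/store style of programming
# that persistent-memory tools encourage. An ordinary file stands in for
# fabric-attached persistent memory here; illustrative only, not HPE's
# actual toolchain (which the article does not detail).
import mmap
import os

PATH = "fam_region.bin"   # hypothetical stand-in for a fabric-attached region
SIZE = 4096

# Create and size the backing "memory region" once.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, SIZE)     # map it into the process address space

# Data structures live directly in the mapped region: no read()/write()
# calls, no serialization - just loads and stores through memory.
region[0:5] = b"hello"
region.flush()                   # persist explicitly, like a pmem flush

print(region[0:5].decode())      # survives process restarts via the file
region.close()
os.close(fd)
```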

Bresniker said Memory-Driven Computing has great potential because the architecture curtails so much of the data movement required for traditional computing.

“Rather than have a GPU hanging off of a PCI Express link – and you have to manage the data back and forth from the general purpose processor out to the GPU and back again – because I have a memory fabric that has an open interface I can place those acceleration resources directly, in direct communications, on the memory fabric,” he said.
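A small sketch makes the contrast concrete. The function names and the XOR “workload” below are hypothetical; the point is only that the copy-based offload pattern moves the payload twice, while an accelerator sitting directly on the memory fabric can address the same bytes as the processor and mutate them in place:

```python
# Illustrative contrast between the two patterns Bresniker describes.
# Names and the trivial XOR workload are hypothetical.

def offload_over_pcie(host_data: bytearray) -> bytearray:
    """Traditional pattern: copy to the device, compute, copy back."""
    device_buffer = bytes(host_data)                  # host -> device copy
    result = bytes(b ^ 0xFF for b in device_buffer)   # device-side work
    return bytearray(result)                          # device -> host copy

def compute_on_fabric(shared_region: bytearray) -> None:
    """Memory-fabric pattern: the accelerator addresses the same memory,
    so it transforms the data in place and nothing is copied."""
    for i, b in enumerate(shared_region):
        shared_region[i] = b ^ 0xFF

data = bytearray(b"example payload")
copied = offload_over_pcie(data)   # two extra copies of the payload exist
compute_on_fabric(data)            # zero copies; data transformed in place
assert bytes(copied) == bytes(data)
```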

Today’s announcement marks a transition beyond internal proof-of-concept.

“We’ve moved on from proving out that each individual piece is working,” said Bresniker, “to the point where now…we can do the handoff from the teams working on the hardware, the firmware, the operating system, to the application development teams, to begin to flex their minds and muscle around the ramifications for having this kind of a platform available to them for the first time.”

From the start of this project, Bresniker said, HPE has taken the somewhat unconventional and risky approach of sharing information about the new platform so that third parties can do their development work based on The Machine specifications.

“We always knew this had to be bigger than us, that this is a conversation that has to happen across the industry,” he said. “That’s why we started to have the communications so early. When we announced this in 2014, the prototype we’re showing off now was essentially a block diagram scrawled on my whiteboard here in Palo Alto. But we wanted to have the conversation early because we wanted to work with the open source development communities, we needed to engage with them, we needed to engage with our software partners that we’ve traditionally had. We needed to engage with our component supply chain – all the memory, communications and computation components that need to understand how they fit into this memory fabric.”

Based on the current prototype, HPE said it expects the architecture could scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory – 4,096 yottabytes. “For context, that is 250,000 times the entire digital universe today,” HPE said in its announcement. “With that amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles; and every data set from space exploration all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.”
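That multiplier checks out if one assumes the commonly cited estimate of roughly 16 zettabytes for the 2016–2017 digital universe (the announcement does not state HPE’s baseline):

```python
# Checking HPE's "250,000 times the digital universe" claim.
# The baseline is an assumption: IDC-style estimates put the 2016-2017
# digital universe at roughly 16 zettabytes; the article gives no figure.

POOL = 4096 * 10**24            # 4,096 yottabytes, in bytes (SI units)
DIGITAL_UNIVERSE = 16 * 10**21  # ~16 ZB, assumed baseline

print(f"{POOL / DIGITAL_UNIVERSE:,.0f}x")  # 256,000x - roughly the 250,000x quoted
```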

“Cavium shares HPE’s vision for Memory-Driven Computing and is proud to collaborate with HPE on The Machine program,” said Syed Ali, president and CEO of Cavium Inc. “HPE’s groundbreaking innovations in Memory-Driven Computing will enable a new compute paradigm for a variety of applications, including the next generation data center, cloud and high performance computing.”
