HP Removes Memristors from Its ‘Machine’ Roadmap Until Further Notice

By Tiffany Trader

June 11, 2015

One year after Hewlett-Packard launched its ambitious “this will change everything” project called “The Machine,” the company is scaling back parts of its initial vision, a concession it says is necessary to deliver a working prototype by next year.

Announced with great fanfare at last year’s HP Discover event, the Machine was to be a reinvention of computing for the data era. It was to be special in every way, with specialized cores, a purpose-built open source operating system optimized for non-volatile memory systems, and, as its centerpiece, memristor non-volatile memory: a resistor-like circuit element that functions as both storage and memory.
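As a brief technical aside (not part of HP’s announcement): a memristor, postulated by Leon Chua in 1971 and demonstrated in titanium-dioxide form by HP Labs in 2008, is a two-terminal element whose resistance depends on the history of charge that has flowed through it, which is what allows a single device to hold state with no power applied:

\[
M(q) = \frac{d\varphi}{dq}, \qquad v(t) = M\bigl(q(t)\bigr)\, i(t), \qquad q(t) = \int_{-\infty}^{t} i(\tau)\, d\tau
\]

Because the memristance M depends on the accumulated charge q rather than the instantaneous current, the device retains its last resistance value when the current stops, the property that makes it a candidate for serving as both memory and storage.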

Now some of that specialness is being put on hold in favor of a more conventional approach. The memristor is the main sticking point; the technology has come a long way under HP’s research arm, but still isn’t economically viable for volume production.

“We way over-associated this with the memristor,” HP CTO Martin Fink said in an interview with New York Times writer Quentin Hardy. “We’re doing what we can to keep it working within existing technology.”

In that vein, HP will use DRAM for its prototype and plans to convert the shared memory pool to non-volatile memory, such as phase-change memory, in future versions.

Memristors are still on the table and HP is aiming to have them inside the system when it makes its market debut five years from now.

[Image: one-node mechanical mockup of the Machine, 2015]

A mechanical mockup of the prototype was on display at last week’s Discover conference in Las Vegas. Next year, HP expects to reveal a working rack with 320 TB of “main memory” (240 TB of shared memory plus 80 TB local to the compute node), 2,500 CPU cores, and an optical backplane. It will run a version of Linux rather than a customized operating system.

According to Moor Insights’ Paul Teich, the compute node for the proof-of-concept will employ an off-the-shelf ARMv8-based SoC, while future prototypes will support other processor types.

Specialized processing was one of the hallmarks of the original announcement. The right compute for the right workload would make it possible to achieve a sixfold performance increase while using 80 times less energy, HP said a year ago. Since HP repositioned the Machine as a “memory-driven computer architecture” last week, the messaging has focused more on the democratization of fast memory and less on processing power. While power-efficient memory is crucial for reaching computing milestones such as exascale, it was the combination of component technologies into a single project that made the Machine such a radical departure from the status quo.

“A revolutionary new computer architecture…this changes everything,” was how company CEO Meg Whitman characterized this confluence.

Despite the scaled-down plans, HP Labs Deputy Director Andrew Wheeler insists “it’s been a great first year” full of “significant progress on all fronts.”

“The primary objective for next year is to deliver that initial working prototype of the Machine. This is important to us so we can use that platform to continue our research as well as to enable internal development teams and partners so they can advance our memory-driven computing architecture,” said Wheeler.

[Image: visual of the Machine’s system stack, 2015]

Speaking at Discover 2015, Sarah Anthony, systems research project manager at HP, addressed the Machine’s flattened memory architecture as she pointed to the mechanical mockup. “Here in this one node volume, we have terabytes of memory and we have hundreds of gigabits per second of bandwidth off the node, and that’s really important because we’ve changed what I/O is. It’s not I/O, it’s a memory pipe,” she said.

“It’s going to provide a great foundation for ultra-scale analytics, but it has a significant impact on the system software. If you think about it, the essential characteristics of the Machine are that you have this massive capacity in terms of memory, tremendous bandwidth and very low latency. This is going to cause us to make modifications in the operating system and the software system on top of that,” continued Rich Friedrich, director of Systems Software for the Machine at HP.
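To make Anthony’s “memory pipe” point concrete, here is a minimal, illustrative sketch in ordinary Python (not HP code and not the Machine’s actual API) contrasting conventional buffered I/O with memory-mapped, load/store-style access to the same bytes, roughly the access pattern a fabric-attached memory pool encourages. The temporary file is only a stand-in for pooled memory.

# Illustrative only: a scratch file plays the role of a pooled, byte-addressable
# memory region; the Machine's actual fabric and APIs are not modeled here.
import mmap
import os
import struct
import tempfile

path = os.path.join(tempfile.gettempdir(), "machine_memory_pipe_demo.bin")

# Write 1,000 64-bit integers so both access styles read identical data.
with open(path, "wb") as f:
    f.write(struct.pack("<1000q", *range(1000)))

# Conventional I/O: the data is copied through the storage stack into a buffer.
with open(path, "rb") as f:
    buf = f.read()
total_via_io = sum(struct.unpack_from("<1000q", buf))

# Memory-semantic access: the same bytes are mapped into the address space and
# read in place with ordinary loads, closer to a "memory pipe" than an I/O call.
with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
    total_via_mmap = sum(struct.unpack_from("<1000q", mm))

assert total_via_io == total_via_mmap
os.remove(path)

On the Machine itself, the mapped region would be backed by the shared, fabric-attached pool rather than a local file, which is why Friedrich expects the operating system and the software stack above it to need modification.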

For lots more on HP’s design plans, check out the Discover 2015 panel presentation “HP Labs presents a peek under the hood of the Machine, the future of computing,” available in full below:

In another HP Labs presentation, titled “Reimagining systems and application software for The Machine,” HP principal researcher Kimberly Keeton covers the defining features of the Machine and explores the implications for systems software, programming models, and applications. Also included is an overview of the Machine’s “shared something” approach, which occupies a middle ground between shared-everything and shared-nothing models.
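As a rough illustration of that middle ground (a hypothetical sketch using Python’s standard multiprocessing shared memory, not HP’s programming model), each worker below keeps private, shared-nothing state but publishes its result by storing directly into one common pool, which stands in for fabric-attached memory.

# Hypothetical illustration only: a multiprocessing.Array stands in for a
# fabric-attached memory pool; this is not HP's programming model or API.
from multiprocessing import Array, Process

def worker(pool, rank: int) -> None:
    # Shared-nothing part: a purely local computation in the worker's own memory.
    local_sum = sum(range(rank * 1000, (rank + 1) * 1000))
    # Shared-something part: publish the result by storing it directly into the
    # common pool, with no message passing and no explicit I/O call.
    pool[rank] = local_sum

if __name__ == "__main__":
    n_workers = 4
    pool = Array("q", n_workers)  # one shared 64-bit slot per worker
    procs = [Process(target=worker, args=(pool, r)) for r in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    partials = pool[:]  # snapshot the shared slots back into ordinary local memory
    print("per-worker partial sums:", partials, "grand total:", sum(partials))

In the Machine’s case, the pool would be the large shared, eventually non-volatile memory described above, while the private portion corresponds to each compute node’s local memory.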
