Is the Cell Processor Poised for HPC Stardom?

By Michael Feldman

June 2, 2006

Interest in the Cell processor within the high performance computing community appears to be building rapidly. Last week's feature article on the proposed use of the Cell for HPC, “Researchers Analyze HPC Potential of Cell Processor,” generated a large response from our readers. In fact, it was the most downloaded article in this publication's history.

That's not too surprising. With its PowerPC scalar core controlling eight SIMD cores — the synergistic processing elements (SPEs) — the Cell represents the first commodity implementation of a high-performance multi-core heterogeneous processor. In the world of HPC, heterogeneity is seen by many as the next evolutionary step in computer architecture.

However, the Cell's brand of heterogeneity is not conventional in the supercomputing sense. The processor's scalar PowerPC core is used to control the SPE cores and manage the chip's memory hierarchy, while the SPEs themselves do the computation. There's no real division of heterogeneous workloads.

That's not to suggest that the Cell architecture isn't innovative. According to the Berkeley researchers, the three-tiered memory hierarchy, which decouples memory accesses from computation and is explicitly managed by the software, provides some significant advantages over typical cache-based architectures. In fact, the Cell's software-controlled memory system may be its most compelling technological feature, offering a powerful solution to memory latency when data access has some level of predictability.
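The explicit, software-managed data movement the researchers describe typically follows a classic double-buffering pattern: while the processing element computes on one local-store buffer, a transfer fills the other, hiding memory latency whenever the next block can be predicted. A minimal sketch of the pattern in Python (the `dma_get` helper is purely illustrative; on real Cell hardware it would be an asynchronous `mfc_get` DMA intrinsic, and the fetch and compute would genuinely overlap):

```python
def dma_get(main_memory, block_index, block_size):
    """Illustrative stand-in for an async DMA transfer into local store."""
    start = block_index * block_size
    return main_memory[start:start + block_size]

def process_blocks(main_memory, block_size, compute):
    """Double buffering: fetch block i+1 while computing on block i."""
    n_blocks = len(main_memory) // block_size
    results = []
    # Prime the pipeline: fetch block 0 into the working buffer.
    current = dma_get(main_memory, 0, block_size)
    for i in range(n_blocks):
        # Issue the fetch for block i+1 (fills the second buffer) ...
        nxt = dma_get(main_memory, i + 1, block_size) if i + 1 < n_blocks else None
        # ... while computing on block i (the working buffer).
        results.append(compute(current))
        # Swap buffers: the prefetched block becomes the working buffer.
        current = nxt
    return results
```

The advantage the researchers point to comes precisely from this overlap: when access patterns are predictable enough to know block i+1 in advance, the latency of the fetch is hidden entirely behind the computation.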

Wikipedia's entry on the Cell processor offers another way to look at it: “In some ways the Cell system resembles early Seymour Cray designs in reverse.” The entry notes that the CDC 6600 used one fast processor to handle the math and ten slower systems to keep memory fed with data, while the Cell reverses the model by using the central processor to supply data to the eight math elements.

So how does this translate into an HPC solution? Overall, the impressive power and performance results the researchers obtained with the Cell do appear to indicate real potential for high performance computing. When scientific benchmark codes were run on the AMD Opteron, Intel Itanium 2 and Cray X1E processors, the Cell beat the Opteron and Itanium handily, and the X1E less so. The results show that the Cell was about 7 times faster than either the Opteron or the Itanium, 15 times more power-efficient than the Opteron, and 21 times more power-efficient than the Itanium. Pretty impressive.

The researchers went on to propose a “Cell+” architecture as a way to greatly enhance the architecture's 64-bit floating-point performance for scientific codes. Using this virtual processor, the performance and power-efficiency results more than doubled, when compared to the already blazingly fast Cell.

And, as pointed out by the authors of the research paper, the fact that the Cell will be mass-produced for the Sony PlayStation 3 platform makes it a tempting target for building affordable supercomputing systems. “Cell is particularly compelling because it will be produced at such high volumes that it will be cost-competitive with commodity CPUs,” state the authors.

For anyone in the HPC community, the idea of adopting a commodity architecture that got its start in another market segment should not be too hard to wrap your head around. When Intel introduced the x86 architecture in 1978, who thought the chip that would become the desktop PC standard would end up in supercomputers? Even the IBM Blue Gene supercomputer is based on PowerPC chips, whose original habitat was Apple desktop computers and embedded devices. In contrast, processors designed specifically for high performance computing have struggled in the marketplace. Not because they didn't perform; the economic model of developing custom chips exclusively for HPC systems is simply too tenuous. Just ask Cray or SGI.

So should HPC OEMs start building Cell systems to blow the chips off every other blade and cluster machine out there? Maybe, but it has to be for more than just bragging rights. The IBM Cell-based blade was unveiled this past February and is planned to be generally available in the third quarter of 2006. Mercury Computer Systems has sold several test systems to military and commercial customers, and plans to release its first production-quality Cell blades by the end of June. So there's certainly activity afoot.

But there is the matter of a software ecosystem to contend with. For the benchmark study, the Berkeley researchers admitted to hand-coding the algorithms with assembly-level insertions. Obviously, that is unacceptable for production development. A Cell Broadband Engine Software Development Kit, including a compiler, is available from IBM. And with the release of kernel version 2.6.16 in March 2006, Linux now officially supports the Cell processor. But this is just the start: many applications will have to be ported before the software environment can be called mature.

And some have doubts that the architecture is a useful model for next-generation supercomputing. Here are a few sobering comments from the High-End Crusader:

     “The paper by Williams et al., 'The Potential of the Cell
     Processor for Scientific Computing', is guarded in its
     conclusions and cannot really be faulted. Nonetheless, its
     unintended consequence may be regressive, further retarding the
     emergence of novel computational paradigms upon which the future
     of high-end computing so critically depends.

     The paper needs to be put in perspective.

     A general-purpose parallel computer must adapt to many
     variations in an application, including granularity,
     communication regularity, and dependence on runtime data. For
     applications with simple static communication patterns, it is
     straightforward to algorithmically schedule/overlap
     communication and computation to optimize performance. In the
     Cell microarchitecture, the programmed scalar core both 1)
     issues nonpreemptive vector threads to vector cores, and 2)
     manages the flow of data between the Cell's off-chip local DRAM
     and the local SRAMs of individual vector cores; this is ideal
     for software-controlled scheduling/overlap, assuming that the
     programming effort can be amortized.

     Yet computing is also about parallel applications with dynamic,
     unstructured parallelism. Historically, the correct solution to
     this problem has been dynamic thread creation ('spawning')
     together with dynamic scheduling. We also need hardware support
     for synchronization and scheduling. The authors of the Cell
     paper are cleverly programming a software-controlled memory
     hierarchy to stream operands to a blindingly-fast vector
     processor. By orchestrating pre-communication from local DRAM,
     they _fill_ the vector-thread closures; they tolerate the
     latency to local DRAM by using long messages.

     Fine, I suppose. Even so, the better way to avoid the
     approaching train wreck in high-end computing is more progress
     on (heterogeneous) machines with agile threads, cheap
     synchronization, and low-overhead dynamic scheduling, which
     alone can deal with dynamic, unstructured parallelism. These
     machines will be heterogeneous in the deepest sense of the word.
     Software is a major challenge (see 'Heterogeneous Processing
     Needs A Software Revolution', forthcoming).

     Finally, sparse MV multiply normally requires random-stride
     access to the source vector 'x'. Are there hidden assumptions in
     this paper (perhaps matrix preconditioning) that allow DMA
     transfer of appropriate blocks of 'x' into local stores of
     vector cores? Is the Cell processor really being touted as a
     _general_ platform for sparse linear algebra?”
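
The Crusader's sparse matrix-vector question is concrete. In the standard compressed sparse row (CSR) formulation, every nonzero forces an indexed load from the source vector, which is exactly the random-stride access that is hard to service with block DMA transfers. A minimal CSR kernel makes the pattern visible:

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """y = A @ x for a sparse matrix A stored in CSR format.

    The load x[col_idx[j]] is the random-stride access the Crusader
    asks about: unless col_idx has exploitable structure, there is no
    contiguous block of x to DMA into an SPE's local store ahead of time.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[j] * x[col_idx[j]]   # indexed gather from x
    return y
```

For example, the matrix [[1, 2], [0, 3]] is stored as row_ptr=[0, 2, 3], col_idx=[0, 1, 1], vals=[1, 2, 3]; multiplying by x=[1, 1] gives y=[3, 3]. Whether such gathers can be blocked for DMA depends entirely on the matrix structure, which is the point of his question.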

One interesting addendum to the story regards the Berkeley researchers' proposed Cell+ architecture, which is designed to enliven the processor's 64-bit floating-point performance. There actually may be an alternative approach for speeding up double-precision performance on this architecture. Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee, and his colleagues have devised software that implements 64-bit floating-point accuracy using 32-bit floating-point math. One of the processors they targeted was the Cell. The results of this work will be featured in an upcoming issue of HPCwire.
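The general technique behind this kind of software approach is classic iterative refinement: do the expensive O(n^3) solve in fast 32-bit arithmetic, then compute the cheap O(n^2) residual in 64-bit and correct, repeating until the answer reaches double-precision accuracy. A hedged NumPy sketch of the idea (not Dongarra's actual implementation, just the textbook scheme):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve A x = b to ~double accuracy using single-precision solves.

    The costly solves run in float32 (the fast path on hardware like the
    Cell's SPEs); only the residual is computed in float64.
    """
    A32 = A.astype(np.float32)
    # Initial solve entirely in single precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                            # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single
        x += d.astype(np.float64)
    return x
```

A production version would factor A32 once (LU) and reuse the factors for every correction solve; re-solving here just keeps the sketch short. The approach works when the matrix is well conditioned enough for the single-precision solve to make progress each iteration.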


As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
