Is the Cell Processor Poised for HPC Stardom?

By Michael Feldman

June 2, 2006

Interest in the Cell processor by the high performance computing community appears to be building rapidly. Last week's feature article on the proposed use of the Cell for HPC, “Researchers Analyze HPC Potential of Cell Processor,” generated a large response from our readers. In fact, it was the most downloaded article in this publication's history.

That's not too surprising. With its PowerPC scalar core controlling eight SIMD cores — the synergistic processing elements (SPEs) — the Cell represents the first commodity implementation of a high-performance multi-core heterogeneous processor. In the world of HPC, heterogeneity is seen by many as the next evolutionary step in computer architecture.

However, the heterogeneous nature of the Cell is not conventional in the supercomputing sense. The processor's scalar PowerPC core is used to control the SPE cores and manage the chip's memory hierarchy, while the SPEs themselves do the computation. There's no real division of heterogeneous workloads; the scalar core orchestrates rather than computes.

That's not to suggest that the Cell architecture isn't innovative. According to the Berkeley researchers, the three-tiered memory hierarchy, which decouples memory accesses from computation and is explicitly managed by the software, provides some significant advantages over typical cache-based architectures. In fact, the Cell's software-controlled memory system may be its most compelling technological feature, offering a powerful solution to memory latency when data access has some level of predictability.
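
To make the explicitly managed memory model concrete, here is a minimal double-buffering sketch in plain C. The dma_start() and dma_wait() helpers are hypothetical stand-ins (implemented here with memcpy) for the asynchronous DMA commands an SPE would actually issue through its memory flow controller; the point is the structure, in which the transfer of the next chunk of data overlaps computation on the current one.

    /* A minimal sketch of the Cell's software-managed memory model,
     * assuming hypothetical dma_start()/dma_wait() helpers (plain memcpy
     * here) in place of the SPE's asynchronous DMA commands. While the
     * code computes on buffer 'cur', the next chunk of the large array
     * is already being fetched into buffer 'nxt'. */
    #include <stdio.h>
    #include <string.h>

    #define CHUNK 256

    static void dma_start(float *local, const float *remote, int n)
    {
        memcpy(local, remote, n * sizeof(float)); /* stand-in for async DMA */
    }

    static void dma_wait(void)
    {
        /* stand-in: real code would wait on the current buffer's DMA tag */
    }

    static float sum_stream(const float *big, int nchunks)
    {
        static float buf[2][CHUNK]; /* two "local store" buffers */
        float total = 0.0f;
        int cur = 0;

        dma_start(buf[cur], big, CHUNK); /* prefetch the first chunk */
        for (int c = 0; c < nchunks; c++) {
            int nxt = cur ^ 1;
            if (c + 1 < nchunks) /* start fetching chunk c+1 ... */
                dma_start(buf[nxt], big + (c + 1) * CHUNK, CHUNK);
            dma_wait(); /* ... while chunk c is computed on */
            for (int i = 0; i < CHUNK; i++)
                total += buf[cur][i];
            cur = nxt;
        }
        return total;
    }

    int main(void)
    {
        static float data[4 * CHUNK];
        for (int i = 0; i < 4 * CHUNK; i++)
            data[i] = 1.0f;
        printf("sum = %f\n", sum_stream(data, 4)); /* expect 1024.0 */
        return 0;
    }

When data access has the predictability mentioned above, this pattern hides nearly all of the memory latency behind useful work; on a cache-based processor, the hardware must guess at the same schedule.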

The Wikipedia entry on the Cell processor offers another way to look at it: “In some ways the Cell system resembles early Seymour Cray designs in reverse.” The entry notes that the CDC 6600 used one fast processor to handle the math and ten slower peripheral processors to keep memory fed with data, while the Cell reverses the model by using the central processor to supply data to the eight math elements.

So how does this translate into an HPC solution? Overall, the impressive power and performance results that the researchers obtained with the Cell do appear to indicate a real potential for high performance computing. When comparing scientific benchmark codes run on the AMD Opteron, Intel Itanium 2 and Cray X1E processors, the Cell beat the Opteron and Itanium 2 rather easily, and the X1E less so. The results show that the Cell was about 7 times faster than either the Opteron or the Itanium 2, and was 15 times more power-efficient than the Opteron and 21 times more power-efficient than the Itanium 2. Pretty impressive.

The researchers went on to propose a “Cell+” architecture as a way to greatly enhance the processor's 64-bit floating-point performance for scientific codes. Using this hypothetical variant, the performance and power-efficiency results more than doubled when compared to the already blazingly fast Cell.

And, as pointed out by the authors of the research paper, the fact that the Cell will be mass-produced for the Sony PlayStation 3 platform makes it a tempting target for building affordable supercomputing systems. “Cell is particularly compelling because it will be produced at such high volumes that it will be cost-competitive with commodity CPUs,” state the authors.

For anyone in the HPC community, the idea of adopting a commodity architecture that got its start in another market segment should not be too hard to wrap your head around. When Intel introduced the x86 architecture in 1978, who thought the chip that went on to become the desktop PC standard would end up in supercomputers? Even the IBM Blue Gene supercomputer is based on PowerPC chips, whose original habitat was Apple desktop computers and embedded devices. In contrast, the processors that were specifically designed for high performance computing have struggled in the marketplace. Not because they didn't perform; it's just that the economic model of developing custom chips exclusively for HPC systems is rather tenuous. Just ask Cray or SGI.

So should HPC OEMs start building Cell systems to blow the chips off every other blade and cluster machine out there? Maybe, but it has to be for more than just bragging rights. The IBM Cell-based blade was unveiled this past February and is planned to be generally available in the third quarter of 2006. Mercury Computer Systems has sold several test systems to military and commercial customers, and plans to release its first production-quality Cell blades by the end of June. So there's certainly activity afoot.

But there is the matter of a software ecosystem to contend with. For the benchmark study, the Berkeley researchers admitted to using assembly-level insertion to hand-code the algorithms. Obviously, this is unacceptable for production development. A Cell Broadband Engine Software Development Kit, including a compiler, is available from IBM. And with the release of kernel version 2.6.16 in March 2006, the Linux kernel now officially supports the Cell processor. But this is just the start. Many applications will have to be ported to provide a mature software environment.

And some have doubts that the architecture is a useful model for next-generation supercomputing. Here are a few sobering comments from the High-End Crusader:

     “The paper by Williams et al., 'The Potential of the Cell
     Processor for Scientific Computing', is guarded in its
     conclusions and cannot really be faulted. Nonetheless, its
     unintended consequence may be regressive, further retarding the
     emergence of novel computational paradigms upon which the future
     of high-end computing so critically depends.
    
     The paper needs to be put in perspective.
    
     A general-purpose parallel computer must adapt to many
     variations in an application, including granularity,
     communication regularity, and dependence on runtime data. For
     applications with simple static communication patterns, it is
     straightforward to algorithmically schedule/overlap
     communication and computation to optimize performance. In the
     Cell microarchitecture, the programmed scalar core both 1)
     issues nonpreemptive vector threads to vector cores, and 2)
     manages the flow of data between the Cell's off-chip local DRAM
     and the local SRAMs of individual vector cores; this is ideal
     for software-controlled scheduling/overlap, assuming that the
     programming effort can be amortized.
    
     Yet computing is also about parallel applications with dynamic,
     unstructured parallelism. Historically, the correct solution to
     this problem has been dynamic thread creation ('spawning')
     together with dynamic scheduling. We also need hardware support
     for synchronization and scheduling. The authors of the Cell
     paper are cleverly programming a software-controlled memory
     hierarchy to stream operands to a blindingly-fast vector
     processor. By orchestrating pre-communication from local DRAM,
     they _fill_ the vector-thread closures; they tolerate the
     latency to local DRAM by using long messages.
    
     Fine, I suppose. Even so, the better way to avoid the
     approaching train wreck in high-end computing is more progress
     on (heterogeneous) machines with agile threads, cheap
     synchronization, and low-overhead dynamic scheduling, which
     alone can deal with dynamic, unstructured parallelism. These
     machines will be heterogeneous in the deepest sense of the word.
     Software is a major challenge (see 'Heterogeneous Processing
     Needs A Software Revolution', forthcoming).
    
     Finally, sparse MV multiply normally requires random-stride
     access to the source vector 'x'. Are there hidden assumptions in
     this paper (perhaps matrix preconditioning) that allow DMA
     transfer of appropriate blocks of 'x' into local stores of
     vector cores? Is the Cell processor really being touted as a
     _general_ platform for sparse linear algebra?”
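
The sparse matrix-vector question is worth unpacking. In the standard compressed sparse row (CSR) formulation sketched below (a generic illustration, not code from the paper), every nonzero triggers an indirect read x[col[j]] whose address depends on the matrix's sparsity pattern; unless that pattern is blocked or otherwise localized, there is no contiguous region of x to DMA into an SPE's local store ahead of time.

    /* Generic CSR sparse matrix-vector multiply (y = A*x), illustrating
     * the random-stride access the Crusader describes: the x[col[j]]
     * reads jump around x in a data-dependent way, so blocks of x cannot
     * simply be streamed into local store unless the matrix structure
     * keeps the column indices localized. */
    #include <stdio.h>

    static void spmv_csr(int nrows, const int *rowptr, const int *col,
                         const double *val, const double *x, double *y)
    {
        for (int i = 0; i < nrows; i++) {
            double sum = 0.0;
            for (int j = rowptr[i]; j < rowptr[i + 1]; j++)
                sum += val[j] * x[col[j]]; /* indirect, data-dependent read */
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* 3x3 example: [[2,0,1],[0,3,0],[4,0,5]] times x = (1,1,1) */
        int rowptr[] = {0, 2, 3, 5};
        int col[]    = {0, 2, 1, 0, 2};
        double val[] = {2, 1, 3, 4, 5};
        double x[]   = {1, 1, 1}, y[3];

        spmv_csr(3, rowptr, col, val, x, y);
        printf("y = (%g, %g, %g)\n", y[0], y[1], y[2]); /* expect (3, 3, 9) */
        return 0;
    }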

One interesting addendum to the story regards the Berkeley researchers' proposed Cell+ architecture, which is designed to enliven the processor's 64-bit floating-point performance. There actually may be an alternative approach for speeding up double-precision performance on this architecture. Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee, and his colleagues have devised software that implements 64-bit floating-point accuracy using 32-bit floating-point math. One of the processors they targeted was the Cell. The results of this work will be featured in an upcoming issue of HPCwire.
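
The article doesn't spell out the method, but the standard technique for getting double-precision results out of single-precision hardware is mixed-precision iterative refinement: factor and solve in fast 32-bit arithmetic, then compute the residual and correct the solution in 64-bit arithmetic. The toy sketch below illustrates the idea on a 2x2 system, with Cramer's rule standing in for the single-precision LU solve a real implementation would use.

    /* Mixed-precision iterative refinement, a minimal illustration (not
     * the actual University of Tennessee code): the solves run in single
     * precision, while the residual and update run in double precision,
     * recovering double-precision accuracy after a few iterations. */
    #include <stdio.h>

    /* Single-precision solve of a 2x2 system via Cramer's rule; a real
     * implementation would reuse a single-precision LU factorization. */
    static void solve_sp(const float A[2][2], const float b[2], float x[2])
    {
        float det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
        x[0] = (b[0] * A[1][1] - b[1] * A[0][1]) / det;
        x[1] = (A[0][0] * b[1] - A[1][0] * b[0]) / det;
    }

    int main(void)
    {
        double A[2][2] = {{4.0, 1.0}, {1.0, 3.0}};
        double b[2] = {1.0, 2.0};
        double x[2] = {0.0, 0.0};
        float As[2][2], rs[2], ds[2];

        for (int i = 0; i < 2; i++) /* single-precision copy of A */
            for (int j = 0; j < 2; j++)
                As[i][j] = (float)A[i][j];

        for (int iter = 0; iter < 5; iter++) {
            double r[2]; /* residual r = b - A*x, in double precision */
            for (int i = 0; i < 2; i++)
                r[i] = b[i] - (A[i][0] * x[0] + A[i][1] * x[1]);
            rs[0] = (float)r[0];
            rs[1] = (float)r[1];
            solve_sp(As, rs, ds); /* correction A*d = r, in single precision */
            x[0] += ds[0]; /* update in double precision */
            x[1] += ds[1];
        }
        printf("x = (%.15f, %.15f)\n", x[0], x[1]); /* exact: (1/11, 7/11) */
        return 0;
    }

Since the Cell's SPEs run 32-bit arithmetic far faster than 64-bit, putting the expensive factorization in single precision plays directly to the hardware's strength.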

-----

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
