Revenge of the SMP?

By Michael Feldman

April 27, 2007

Lately it seems like I've been talking with people who see the multicore phenomenon as something of a cluster-buster. One of those people is Mike Hoskins, CTO of Pervasive Software, a company that develops database software technologies. Hoskins' reading of the tea leaves suggests that the trajectory of multicore processors is on a collision course with cluster computing. Essentially, the rationale is that as cores multiply on the chip, it makes more sense to build and program scaled-up SMP machines than scaled-out clusters.

Hoskins hopes this is the case. In general, his world of data-intensive computing has never been comfortable with the cluster and grid model. The technology heritage in this arena is mostly C and Java apps running on mainframes or big servers. Clusters and MPI programming are seen as fringe technologies. The clusters themselves are hard to deploy and administer, while the programming model is primitive and not well-supported for commercial application development.

For Hoskins, the path of least resistance to bring data-intensive and compute-intensive computing into the Java universe is through SMP architectures. This week's feature article on Pervasive's Java framework looks at how cluster and multicore technologies are viewed by someone outside the traditional HPC community.

Hoskins tells a convincing story. Although the average multicore processor today is a dual-core chip, soon that will be quad-core. If we just follow a Moore's Law curve, a standard general-purpose processor will have 16 cores by the end of the decade. If you put four of those processors in an SMP box, you essentially have a machine that matches or exceeds the performance of most workgroup and departmental clusters today.
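The arithmetic behind that extrapolation is simple enough to sketch. The snippet below is illustrative only: it assumes a dual-core baseline, a doubling of cores per chip each processor generation (roughly every 18 months, per Moore's Law), and a hypothetical four-socket SMP box.

```python
# Back-of-the-envelope core-count extrapolation, assuming a dual-core
# baseline and one doubling per ~18-month generation. Illustrative
# figures only, not a product forecast.

def cores_after(doublings, base_cores=2):
    """Cores per chip after a given number of generational doublings."""
    return base_cores * 2 ** doublings

# Three doublings (~4.5 years, i.e. around the end of the decade)
# takes a dual-core chip to 16 cores.
for d in range(4):
    c = cores_after(d)
    print(f"{d} doublings: {c:2d} cores/chip, {4 * c:3d} cores in a 4-socket box")
```

Three doublings from dual-core yields 16 cores per chip, and 64 cores in a four-socket box, which is the scale of many of today's workgroup and departmental clusters.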

Since the workgroup and departmental systems are the fastest growing segment in HPC, a switch to SMP boxes would change the profile of the market fairly quickly. If multicore SMP systems cannibalize the low end of the cluster market, it will force clusters into the higher-end (but lower volume) capacity computing space.

It's no coincidence that vendors like Azul and Sun, who are pushing the multicore envelope more than most, are also big proponents of scaled-up SMP boxes. Azul's 48-core Vega 2 chip is being used in their 768-way Compute Appliance, while Sun's 8-core, 32-thread UltraSPARC processor populates their T1000 and T2000 servers. And just last week, Sun announced first silicon for their new 16-core Rock processor. Since quad-core currently represents the upper end of x86 processors, more general-purpose, scaled-up machines are still on the drawing board. But SGI's f1240 server already offers a 48-core x86 SMP, which can be expanded up to 96 cores.

Beyond 2010, we can extrapolate core doublings into a manycore future, eventually squeezing capacity clusters up against supercomputing capability systems, until … poof, they disappear, never to be heard from again.

Or maybe not. Just as scaling nodes in a cluster has its problems, so does scaling cores and processors in a machine.

The biggest impediment to scale-up is the memory wall. Since SMP systems, by definition, share a common memory space, the data bandwidth into each processor, and then each core, is limited by memory system performance. As more cores compete for memory, each one has proportionally less bandwidth available to it. Memory technology isn't standing still, but RAM has only been doubling in speed every 10 years, well behind the 18-month Moore's Law doubling rate that is driving the multicore phenomenon. Technologies on the horizon to speed up memory access include 3D chip stacking (IBM), on-chip photonics (Intel) and proximity communication (Sun Microsystems). Whether any of these proves to be a practical solution remains to be seen. But in the short term, the memory wall will act as a barrier to unconstrained SMP scale-up.
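The divergence between those two doubling rates can be sketched numerically. The starting figures below (two cores, 10 GB/s of aggregate memory bandwidth) are assumed purely for illustration; the point is how quickly per-core bandwidth erodes when core counts double every 18 months but memory speed doubles only every 10 years.

```python
# Illustrative memory-wall arithmetic: cores doubling every 1.5 years
# versus memory bandwidth doubling every 10 years. Starting values are
# hypothetical, chosen only to show the trend.

def per_core_bandwidth(years, cores0=2, bw0_gbs=10.0,
                       core_doubling_yrs=1.5, mem_doubling_yrs=10.0):
    """Aggregate bandwidth divided among cores after `years` of growth."""
    cores = cores0 * 2 ** (years / core_doubling_yrs)
    bandwidth = bw0_gbs * 2 ** (years / mem_doubling_yrs)
    return bandwidth / cores

for years in (0, 3, 6):
    print(f"after {years} yrs: {per_core_bandwidth(years):.2f} GB/s per core")
```

Under these assumptions, per-core bandwidth falls by roughly an order of magnitude in six years, even though total memory bandwidth keeps improving.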

In addition, as more cores and processors are added to a system, architects add more RAM to keep computational performance balanced with memory capacity. But once you get up into terabytes of RAM, you have to start worrying about hard errors occurring with some frequency. Technologies such as memory scrubbing can deal with this, but at increased system cost.
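The exposure grows linearly with installed memory, which is why it only becomes a practical worry at terabyte scale. The per-gigabyte error rate below is a placeholder, not a measured figure; the sketch only shows the proportional scaling.

```python
# Rough scaling of memory-error exposure with capacity, using a
# hypothetical fixed per-gigabyte rate. The rate is a placeholder;
# the linear growth with installed RAM is the point.

def expected_errors_per_year(ram_gb, errors_per_gb_year=0.05):
    """Expected hard errors per year, assuming a uniform per-GB rate."""
    return ram_gb * errors_per_gb_year

for gb in (16, 256, 4096):  # workstation, big server, terabyte-class SMP
    print(f"{gb:5d} GB: ~{expected_errors_per_year(gb):.0f} errors/year")
```

A workstation-sized memory sees errors rarely enough to shrug off; a terabyte-class SMP, under the same assumed rate, sees them routinely, which is what motivates scrubbing and similar protections.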

But the really big unknown is future HPC application demand for more performance. If applications that now run on low-end clusters don't change appreciably, the equivalent code will run on SMP workstations in a few years. But if those applications are limited by performance, they're likely to migrate to more powerful clusters as the nodes and interconnects ramp up in power.

Certainly in the bigger problem sets in HPC, like climate modeling or other types of large-scale simulations, the demand for more performance is almost insatiable. As you increase the time scales or resolutions of many models, the workloads scale relatively easily. But for commercial HPC applications, it's a mixed bag. Some problems are domain limited, for example, the genomic analysis of a bacterial pathogen. These types of applications don't scale. But many types of engineering simulations can scale as easily as climate models.

One thing did become clear to me in talking to Hoskins: There are users out there who would love to move into the high performance computing world, but are unwilling to migrate to cluster or grid computing because of the difficulty of the software model and the complexity of the system. For these people, multicore SMP systems are the answer.

——

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
