Back to Basics on System Design

By Jud Leonard

April 6, 2007

“Make everything as simple as possible, but no simpler.” – Albert Einstein

Natural science can be understood as the process of developing models that predict the behavior of the natural world, and we celebrate as great science the creation of the simplest models that give accurate predictions. Computer architecture seems, over the past decade or two, to have moved in the opposite direction, glorifying complexity at the expense of understandability and predictability, and even performance and usability. Highly speculative out-of-order superscalar microprocessors with north- and south-bridges, graphics adapters, and RAID controllers have evolved out of what was once the modest domain of hobbyists.

In and of itself, there's nothing wrong with the fact that the hardware and software of modern PCs are complex; they have adapted very successfully to the needs of home and office users, to the point of becoming nearly indispensable for civilization as we know it. But that complexity does make it next to impossible to create accurate models of their performance, and hence to design software that performs efficiently. And when your application is running for days or weeks at a time on hundreds or thousands of computers, you care about its efficiency.

To make matters worse, many of the evolutionary pressures on desktop computers are contrary to the needs of scientific and technical users. Low prices and high clock rates are real benefits, but high thermal dissipation, slow memory, sluggish I/O, high communication latencies, and limited memory access bandwidth have severely restricted the algorithmic options for parallelization of HPC codes and limited the scalability of those codes in production.

There is a real chicken-and-egg problem here that will take multiple generations of simplicity to fully resolve. Since personal computer hardware is now overkill for most users, the only people who care what is going on inside the chip are the designers, who have to ensure that the circuitry is performing correctly. As a result, chips have lots of touch points for status information, but those touch points are not designed to reveal the behavior of software algorithms. Worse yet, the individual chips that make up a contemporary cluster node have different, often contradictory, performance monitoring facilities.

As a result, today's scientific computer users have few tools that enable them to understand what their codes are doing, and hence are unable to articulate what they want their next computer to do differently. One manifestation is the Sisyphean task of creating benchmarks that fully encapsulate performance behavior. No sooner does a new benchmark come out than it is disavowed by various users as “not representative of what we do.” Until we have computers whose behavior is transparent, we will not have benchmarks that truly capture that behavior, and we won't have computer hardware that responds to that behavior because hardware designers will not have clear benchmark targets to shoot for.

Here are some of the steps that need to be taken.

It is critical that we start building computers whose node logic is designed together, under a common performance-monitoring architecture. Ideally, all the node circuitry would be on a single chip, but if that is not possible, the node should at least be built from chips with consistent monitoring facilities. Then users will be motivated to harvest the performance data.
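As a modest illustration of what “harvesting the performance data” looks like on today's commodity nodes, here is a minimal sketch that counts CPU cycles around a region of interest. It assumes a Linux node and the perf_event interface, neither of which the article specifies, and the event choice and the dummy loop are illustrative only; on a node whose logic shared one monitoring architecture, the same handful of calls could cover every unit in the system.

```c
/* Minimal sketch: count CPU cycles in a region of interest via Linux
 * perf_event.  The event choice and the dummy loop are illustrative. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct perf_event_attr pe;
    memset(&pe, 0, sizeof(pe));
    pe.type = PERF_TYPE_HARDWARE;
    pe.size = sizeof(pe);
    pe.config = PERF_COUNT_HW_CPU_CYCLES;   /* could be cache misses, etc. */
    pe.disabled = 1;
    pe.exclude_kernel = 1;

    /* open a counter on the calling thread, any CPU */
    int fd = (int)syscall(__NR_perf_event_open, &pe, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile double sum = 0.0;               /* stand-in for a real kernel */
    for (long i = 0; i < 10 * 1000 * 1000; i++) sum += i * 0.5;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t cycles = 0;
    if (read(fd, &cycles, sizeof(cycles)) != sizeof(cycles)) return 1;
    printf("cycles in region: %llu\n", (unsigned long long)cycles);
    close(fd);
    return 0;
}
```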

And we need to think more broadly about what to do with the performance data that we collect. Today the state of the art is to translate it into graphs and charts. But the data is likely to be full of patterns that do not necessarily reduce to charts. Biologists are showing us the value of using techniques like neural networks to look for these kinds of patterns. We still have much to learn in the area of “performance analysis analysis.”
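As one small example of looking for patterns rather than drawing charts, the sketch below groups per-rank cache-miss rates into “behavior classes” with a one-dimensional k-means. The data, the metric, and the choice of two clusters are all invented for illustration; a real tool would use richer data and richer methods, the neural networks the biologists favor among them.

```c
/* Minimal sketch of pattern-finding in performance data: 1-D k-means over
 * per-rank cache-miss rates.  The sample data and k = 2 are illustrative. */
#include <stdio.h>
#include <math.h>

#define NRANKS 8
#define K 2

int main(void) {
    /* hypothetical per-rank miss rates (misses per kilo-instruction) */
    double rate[NRANKS] = { 1.2, 1.3, 1.1, 9.8, 1.2, 10.3, 9.9, 1.4 };
    double center[K] = { rate[0], rate[3] };   /* crude initial centers */
    int label[NRANKS];

    for (int iter = 0; iter < 20; iter++) {
        /* assignment step: each rank joins its nearest center */
        for (int i = 0; i < NRANKS; i++) {
            int best = 0;
            for (int c = 1; c < K; c++)
                if (fabs(rate[i] - center[c]) < fabs(rate[i] - center[best]))
                    best = c;
            label[i] = best;
        }
        /* update step: recompute each center as the mean of its members */
        for (int c = 0; c < K; c++) {
            double sum = 0.0; int n = 0;
            for (int i = 0; i < NRANKS; i++)
                if (label[i] == c) { sum += rate[i]; n++; }
            if (n > 0) center[c] = sum / n;
        }
    }
    for (int i = 0; i < NRANKS; i++)
        printf("rank %d: %.1f MPKI -> class %d\n", i, rate[i], label[i]);
    return 0;
}
```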

While we wait for HPC hardware that users can more directly understand and critique, there are several architectural simplifications that are sure to be fruitful.

We need to simplify communications performance. Too many algorithms jump through too many hoops trying to avoid communication between processors that are “distant,” where distant means that the hardware takes a long time to get a message back and forth. This complexity is perhaps most visible in applications that use adaptive dynamic grids, which force the optimization to be attempted on the fly. Where the performance curve of the communication network is flat, simplicity rules.
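To make the cost of “distance” concrete, here is a minimal MPI sketch in which rank 0 ping-pongs a tiny message with every other rank and reports the round-trip time; the message size and repetition count are arbitrary. On a network with a flat performance curve the numbers come out nearly uniform, and the locality gymnastics described above buy little.

```c
/* Minimal MPI ping-pong: rank 0 measures round-trip time to every peer.
 * Uniform numbers indicate a "flat" network; wide spreads mark distance. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int reps = 1000;
    char byte = 0;

    for (int peer = 1; peer < size; peer++) {
        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0) {
            double t0 = MPI_Wtime();
            for (int r = 0; r < reps; r++) {
                MPI_Send(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
            printf("rank 0 <-> rank %d: %.2f us round trip\n",
                   peer, 1e6 * (MPI_Wtime() - t0) / reps);
        } else if (rank == peer) {
            for (int r = 0; r < reps; r++) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
    }
    MPI_Finalize();
    return 0;
}
```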

A second valuable simplification comes from clock coherence. When all the processor clocks in a system are synchronized within a fraction of a microsecond and, more importantly, are locked to a common reference, time references are consistent and monotonic throughout the system. This coherence promises a straightforward path to the elimination of major sources of OS noise, which saps the performance of so many clusters today.
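One common way to see the OS noise referred to here is a fixed-work “detour” loop, sketched below under the assumption of a POSIX monotonic clock; each iteration timestamps itself, and iterations that take far longer than the typical one mark an interruption. With node clocks locked to a common reference, detour records from different nodes can be merged onto a single system-wide timeline, which is exactly what incoherent clocks make painful. The iteration count and the 10x threshold are illustrative.

```c
/* Minimal "detour" detector: time a tight loop and report iterations that
 * take far longer than typical ones.  Counts and threshold are illustrative. */
#include <stdio.h>
#include <time.h>
#include <stdint.h>

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void) {
    const long iters = 10 * 1000 * 1000;
    uint64_t prev = now_ns(), min_step = UINT64_MAX;

    /* first pass: find the typical (minimum) cost of one iteration */
    for (long i = 0; i < iters; i++) {
        uint64_t t = now_ns();
        if (t - prev < min_step) min_step = t - prev;
        prev = t;
    }

    /* second pass: report iterations stretched well beyond that cost */
    prev = now_ns();
    for (long i = 0; i < iters; i++) {
        uint64_t t = now_ns();
        if (t - prev > 10 * min_step)
            printf("detour at t=%llu ns: %llu ns\n",
                   (unsigned long long)t, (unsigned long long)(t - prev));
        prev = t;
    }
    return 0;
}
```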

A third area of simplification potentially comes in the way file systems are handled. We now have disk systems with thousands of spindles and parallel file systems that know how to use them. But the same machines also contain thousands of DIMMs that are often used in ad hoc ways to store global data. Why can't we reflect the power of a parallel file system back onto all of those DIMMs, so users can freely shift the location of their data without changing their program logic?
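The sketch below is purely hypothetical, and every name in it is invented; it only shows the shape of the interface being asked for. The application calls one put/get API, and whether the bytes land in a DIMM or in the file system (standing in here for a parallel file system) is a backend choice rather than a change to program logic.

```c
/* Hypothetical sketch: one put/get interface, two backends.  Application
 * logic is identical whether data lives in memory (DIMMs) or in a file. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct store {
    int (*put)(struct store *s, long off, const void *buf, size_t len);
    int (*get)(struct store *s, long off, void *buf, size_t len);
    void *state;                      /* backend-private handle */
} store_t;

/* memory backend: data lives in node DRAM */
static int mem_put(store_t *s, long off, const void *buf, size_t len) {
    memcpy((char *)s->state + off, buf, len); return 0;
}
static int mem_get(store_t *s, long off, void *buf, size_t len) {
    memcpy(buf, (char *)s->state + off, len); return 0;
}

/* file backend: data lives in the file system */
static int file_put(store_t *s, long off, const void *buf, size_t len) {
    FILE *f = s->state;
    return (fseek(f, off, SEEK_SET) || fwrite(buf, 1, len, f) != len) ? -1 : 0;
}
static int file_get(store_t *s, long off, void *buf, size_t len) {
    FILE *f = s->state;
    return (fseek(f, off, SEEK_SET) || fread(buf, 1, len, f) != len) ? -1 : 0;
}

int main(int argc, char **argv) {
    store_t mem  = { mem_put,  mem_get,  malloc(1 << 20) };
    store_t file = { file_put, file_get, fopen("global.dat", "w+b") };
    if (!mem.state || !file.state) return 1;

    /* pick a backend; nothing below this line changes */
    store_t *s = (argc > 1 && strcmp(argv[1], "file") == 0) ? &file : &mem;

    double x = 3.14, y = 0.0;
    s->put(s, 0, &x, sizeof x);
    s->get(s, 0, &y, sizeof y);
    printf("read back %.2f\n", y);

    free(mem.state);
    fclose(file.state);
    return 0;
}
```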

Parallel software development is neither easy nor simple, and until we come up with elegant new ways to think about concurrency, it will not be. To make progress in the meantime, we need machines that (a) run today's codes well, (b) are simple and transparent enough to permit successful debugging and tuning, and (c) provide sufficient compute, memory and communication resources to allow new algorithms to follow the natural expression of the programmer's intent.

Ultimately, parallel computing itself should simplify scientific programming since nature is inherently parallel. In order to establish the natural correspondence between parallel phenomena and parallel computation, however, we need to make sure that the computers we use are as simple as possible. But not simpler.

-----

Jud Leonard is a founder and the CTO of SiCortex, Inc., a recent entrant in the HPC market. His career in high performance computing has run from the IBM 1620 and 360 through Digital's PDP, VAX, and Alpha systems. He knows how complicated it can be keeping things simple.
