NVIDIA Works On CPU Co-Dependency Issues with Kepler GPU

By Michael Feldman

May 22, 2012

With Intel’s manycore MIC coprocessor looming on the horizon, NVIDIA is counting on its upcoming K20 Tesla to retain its dominance in the HPC accelerator marketplace. And while Intel has shared few technical details about its upcoming Knights Corner MIC, NVIDIA has conveniently provided a 24-page white paper (PDF) describing the inner workings of the GK110, the GPU that will power the K20 card for supercomputers.

If you’re a GPU programmer and like to get intimate with the silicon, or are just curious about where NVIDIA is heading with GPU computing, the GK110 paper should be on your summer reading list. It contains a nice description of the GK110 architecture and goes into some depth on the new features that this high-end Kepler brings to the HPC table.

As we described in our Kepler launch coverage last week at the GPU Technology Conference, the big new features in the architecture are Hyper-Q and Dynamic Parallelism. Both are changes that aim to relieve the CPU-GPU bottleneck, enabling the GPU to be better utilized for continuous processing, and freeing up the CPU for more mundane serial tasks. Those two features, however, are only available in the supercomputing-grade GK110, not the GK104 that powers the less powerful K10 card.

To recap, Hyper-Q allows the GPU to execute up to 32 MPI processes, CUDA streams, or threads at the same time. The Fermi GPU could only manage a single task at a time, which limited how much true parallelism the application could attain, and, in many cases, how much of the GPU could be utilized at any particular moment. Hyper-Q should automagically speed up a lot of existing CUDA applications without the need for any source code changes.
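To make the idea concrete, here is a minimal sketch of the kind of multi-stream workload Hyper-Q is meant to accelerate. The kernel and buffer names are illustrative, not from the white paper; on Fermi these 32 launches would funnel through a single hardware work queue, while GK110 can schedule them concurrently.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel standing in for 32 independent pieces of work.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int kStreams = 32;      // Hyper-Q exposes up to 32 hardware work queues
    const int n = 1 << 20;
    cudaStream_t streams[kStreams];
    float *buffers[kStreams];

    for (int s = 0; s < kStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buffers[s], n * sizeof(float));
        // Launches into distinct streams: serialized on Fermi,
        // eligible for concurrent scheduling on GK110 via Hyper-Q.
        scale<<<(n + 255) / 256, 256, 0, streams[s]>>>(buffers[s], n, 2.0f);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < kStreams; ++s) {
        cudaFree(buffers[s]);
        cudaStreamDestroy(streams[s]);
    }
    printf("launched %d independent streams\n", kStreams);
    return 0;
}
```

Note that the source code above would run unmodified on Fermi; the speed-up comes entirely from the hardware scheduling, which is why existing multi-stream and MPI codes should benefit without changes.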

Dynamic parallelism, on the other hand, will require some source tweaking for existing GPU code, since it enables programmers to explicitly place more of the application on the graphics chip. It basically allows the GPU to generate work on its own, without having to rely on the CPU to keep feeding it. With dynamic parallelism, a kernel can now launch another kernel, enabling recursive and nested execution. For codes not yet ported to GPUs, this is good news, since this style of programming is a much more natural way to write applications.
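A hedged sketch of what that source tweaking looks like: with dynamic parallelism (compute capability 3.5 and up, compiled with `-rdc=true -lcudadevrt`), a parent kernel can launch a child kernel directly on the device. The kernel names here are hypothetical.

```cuda
#include <cuda_runtime.h>

// Child grid: does the actual work on a sub-range of data.
__global__ void child(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

// Parent grid: sizes and launches the child without any CPU involvement.
__global__ void parent(float *data, int n) {
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        // Kernel launch from device code -- the defining feature
        // of dynamic parallelism on GK110.
        child<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize();  // device-side wait for the child grid
    }
}
```

Because the child grid's dimensions are computed on the GPU, irregular and recursive algorithms (adaptive mesh refinement, tree traversals) can refine their own work without round-tripping to the host.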

Along those same lines is GPUDirect, a hardware/software-enabled feature that allows GPUs to talk to one another directly as peers, bypassing the CPU entirely. GPUDirect was present in Fermi, but the new Kepler has additional support that further lessens its reliance on the CPU. Using this feature, a GPU would be able to go through the NIC and exchange data with other GPUs on the network without CPU buffering in main memory. It also enables other PCIe attached devices, like SSDs, to directly access GPU data.

The NVIDIA engineers have also included some other tweaks to support greater application complexity. One of these is quadrupling the register count per thread compared to the Fermi architecture (from 63 to 255). Routines that do a lot of register spilling to memory because they have to deal with so many variables, like those in quantum chromodynamics, could see some pretty significant speed-ups, according to NVIDIA.
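For readers who tune at this level, the register budget is something the programmer can influence directly. A minimal sketch, assuming standard nvcc tooling: `__launch_bounds__` (or the `-maxrregcount` compiler flag) trades occupancy for registers per thread, and the higher GK110 ceiling gives that trade more headroom.

```cuda
// Fermi capped each thread at 63 registers; GK110 raises that to 255.
// Bounding the block size tells the compiler it may allocate more
// registers per thread instead of spilling live values to local memory.
__global__ void __launch_bounds__(128)  // at most 128 threads per block
heavy_kernel(double *out, const double *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // A register-hungry routine (e.g., a QCD stencil) keeps its many
    // live variables in registers here rather than in spilled memory.
    out[i] = in[i] * in[i];
}
```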

The GK110 also adds a 48 KB read-only data cache per multiprocessor for local functions. The new GPU also doubles the L2 cache capacity, to 1,536 KB, which helps data-dependent codes like physics solvers, ray tracing, and sparse matrix multiplication. This is all in addition to the 64 KB of multiprocessor memory (to divide between L1 and shared data) that Kepler inherited from Fermi, but which now supports more bandwidth for large reads.
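As a sketch of how code reaches that read-only cache: marking input pointers `const __restrict__` lets the compiler route their loads through it, and the `__ldg()` intrinsic requests it explicitly. The kernel below is illustrative, not from NVIDIA's documentation.

```cuda
// AXPY-style kernel. The const __restrict__ qualifiers promise the
// compiler that x is read-only and does not alias y, making its loads
// eligible for GK110's 48 KB per-SMX read-only data cache.
__global__ void axpy(float a,
                     const float * __restrict__ x,
                     float * __restrict__ y,
                     int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * __ldg(&x[i]) + y[i];  // __ldg forces the read-only path
}
```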

All of this is geared to boost application performance in a big way. Here though, the paper hedges on the specifics, promising only “over 1 teraflop of double precision throughput.” NVIDIA is claiming the Kepler will deliver three times the performance per watt of the Fermi GPUs, but that doesn’t necessarily map to peak performance on a given chip. With regard to that metric, we should probably expect a doubling or so of the Fermi Tesla’s 665 gigaflops for the top-of-the-line Kepler hardware.

But application performance with the GK110 is expected to be much better than with the Fermi-generation GPUs. To a large degree that’s due to all the aforementioned CPU-offload features and other architectural tweaks. But a good chunk of the performance boost will be delivered via brute force, in the form of lots of cores.

The paper says the “full” GK110 implementation will have 15 streaming multiprocessor (SMX) units, each of which has 192 cores. That would make the top Kepler a 2,880-core processor, which beats out the 512-core Fermi by a wide margin. But all those cores will be running at about half the clock speed of their predecessor. As the GK110 white paper explains:

For Kepler, our priority was performance per watt. While we made many optimizations that benefitted both area and power, we chose to optimize for power even at the expense of some added area cost, with a larger number of processing cores running at the lower, less power-hungry GPU clock.

The increased core count is enabled by a transistor shrink, in this case, TSMC’s 28nm process technology. In fact, the GK110 will be the largest processor ever built, at least the largest one that is not still sitting in a research lab somewhere. At 7.1 billion transistors, the GK110 is nearly twice the size of the new 4.3 billion transistor Radeon HD 7900 GPU from AMD. For some context, the new “Sandy Bridge” Xeon E5-2600 series CPUs are made up of less than 2.3 billion transistors.

There will also be two slightly smaller GK110 GPU parts, with 13 and 14 multiprocessors, respectively. Presumably the clock frequencies could be cranked up a bit on those if faster thread performance is desired, or down if lower wattage is the goal. In any case, the three GK110 variants suggest NVIDIA will offer a range of HPC products aimed at different price/performance/power points.

The first GK110 GPUs are expected to debut in the K20 Tesla cards in Q4. NVIDIA might be initially hard-pressed to ramp up volumes, especially since TSMC has a number of customers (AMD and Qualcomm, in particular) also vying for 28nm capacity. Supposedly though, NVIDIA chips are going to be priority at the foundry. Even so, such a big chip might still be a challenge for TSMC, from a yield perspective.

In any case, most, if not all of the early GK110s will likely end up in just two systems: the DOE’s Titan supercomputer at Oak Ridge National Lab and the NSF’s Blue Waters machine at NCSA. About 15,000 of the GK110s are expected to go into the Titan super, while the more conservative Blue Waters system will be equipped with around 3,000 of the new GPUs.

NVIDIA expects to sell a lot more of them than that over the next two or three years, until the “Maxwell” GPU kicks in. That architecture is expected to encompass CPU-GPU integration, the so-called “Project Denver” work that glues a 64-bit ARM CPU onto a CUDA GPU. As such, it will represent an architectural watershed for NVIDIA, but one that Kepler laid the groundwork for.

Kepler, and the GK110 in particular, is NVIDIA’s most general-purpose processor to date. By reducing the dependency of the GPU on the CPU, and making the GPU more capable of supporting complex types of processing, NVIDIA is not just trying to make the two architectures equal peers, but to make the GPU the star of the show. If NVIDIA continues to pursue this architectural trend line, the CPU, while necessary, could be reduced to the role of an OS microcontroller: fielding interrupts, managing I/O, and scheduling jobs. The GPU, meanwhile, would be able to encompass the high-value application processing, which not only conforms to NVIDIA’s philosophical bent, but also its business strategy.
