Tesla GPU Accelerators Hit the Gas with GPU Boost

By Timothy Prickett Morgan

November 18, 2013

Nvidia is rolling out a new top-end GPU accelerator, called the Tesla K40, that has both more processing capacity and more memory than the current K20X accelerator that is popular in high-end clusters – particularly those that need double precision floating point math.

The upgrade offers a significant performance improvement through the activation of more cores on the GPU and through a new GPU Boost mode that lets the CUDA cores overclock.

It has been common practice to give the Tesla line a mid-life upgrade as yields improve in the processes that Nvidia’s chip fabrication partner, Taiwan Semiconductor Manufacturing Co. (TSMC), uses to etch the GPU chips. At some point, the yields are high enough that more of the CUDA cores on the chip can be turned on, which is usually not possible at the beginning of the product cycle. For instance, the original “Fermi” Tesla M2070 coprocessors, which debuted at SC09 in November 2009, had a GPU with 512 CUDA cores, but only 448 of them were activated, running at 1.15 GHz. In May 2011, when the yields were better and the chip-making processes at TSMC had been refined, all 512 cores were fired up in the Tesla M2090 GPU coprocessor and clock speeds were edged up to 1.3 GHz.

[Image: the Nvidia Tesla K40 GPU accelerator]

The Tesla K40 is a similar kind of mid-life upgrade for the “Kepler” family of GPUs, which is aimed at the high end where both single-precision and double-precision floating point performance matter. With the Kepler design, Nvidia added many more CUDA cores to the GPU chip and cut clock speeds roughly in half compared to the Fermi chips, which allowed the GPU to do a lot more work while staying within the 225 watt to 235 watt thermal envelope that a discrete GPU coprocessor card has to live within.

As you can see from the table below, the top-end Tesla K20X GPU accelerator that debuted in November 2012 at SC12 had 2,688 cores running at 732 MHz; it also had 6 GB of GDDR5 graphics memory for the GPU to use as it does its data crunching. It plugged into a PCI-Express 2.0 slot and delivered performance of 3.93 teraflops at single precision and 1.31 teraflops at double precision.

[Table: feeds and speeds of the Tesla K20X versus the Tesla K40]

With the K40 GPU accelerator, the number of CUDA cores rises to 2,880 (up 7.1 percent), the clock speed is nudged up to 745 MHz (up 1.8 percent), and the GDDR5 memory is 12 GB (double that of the K20X card). The memory bandwidth of the K40, at 288 GB/sec, is 15.2 percent higher than that of the K20X. The end result is that the base Tesla K40 can hit 4.29 teraflops at single precision and 1.43 teraflops at double precision, a 9.2 percent performance bump on both counts.
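Those peak numbers fall straight out of the feeds and speeds: each CUDA core can retire one fused multiply-add (two floating point operations) per clock at single precision, and the Kepler GK110 chip runs double precision at one-third the single precision rate, where Fermi ran it at one-half. The same arithmetic also shows the Fermi-to-Kepler tradeoff of many more cores at roughly half the clock. Here is a quick back-of-the-envelope sketch of the calculation in Python; it is illustrative arithmetic only, not anything Nvidia ships:

    # Peak flops = cores x clock (GHz) x 2 ops per fused multiply-add
    def peak_tflops(cores, clock_ghz, dp_ratio):
        sp = cores * clock_ghz * 2 / 1000.0  # single precision, teraflops
        return round(sp, 2), round(sp * dp_ratio, 2)

    print(peak_tflops(512, 1.30, 1 / 2))    # Fermi M2090:  (1.33, 0.67)
    print(peak_tflops(2688, 0.732, 1 / 3))  # Kepler K20X: (3.94, 1.31); Nvidia quotes 3.93, truncating
    print(peak_tflops(2880, 0.745, 1 / 3))  # Kepler K40:  (4.29, 1.43)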

The K40 card plugs into a PCI-Express 3.0 x16 slot, which can handle roughly twice the I/O bandwidth of the PCI-Express 2.0 x16 slot used with the K20X, as the arithmetic below shows. PCI-Express 3.0 slots are supported on the previous “Sandy Bridge” and current “Ivy Bridge” Xeon E5 processors from Intel; AMD has not yet delivered an Opteron processor that supports PCI-Express 3.0, and this is one reason why its prospects in high performance computing have dimmed in recent years.
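The “roughly twice” figure comes from the link arithmetic. PCI-Express 2.0 signals at 5 GT/sec per lane with 8b/10b encoding, while PCI-Express 3.0 moves to 8 GT/sec per lane with the much leaner 128b/130b encoding, taking an x16 slot from 8 GB/sec to about 15.75 GB/sec in each direction. A small sketch of that calculation:

    # Per-direction bandwidth of a PCIe x16 slot, in GB/sec
    def pcie_x16_gbytes(gt_per_sec, encoding_efficiency, lanes=16):
        return gt_per_sec * encoding_efficiency * lanes / 8  # 8 bits per byte

    print(pcie_x16_gbytes(5.0, 8 / 10))     # PCIe 2.0 x16: 8.0
    print(pcie_x16_gbytes(8.0, 128 / 130))  # PCIe 3.0 x16: ~15.75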

But that is not all you get. With the new GPU Boost mode, all of the cores can ratchet up their speed to either 810 MHz or 875 MHz, pushing floating point performance higher at times when the server enclosure in which the Tesla cards are slotted has the thermal headroom to let them run a little hotter.

Sumit Gupta, general manager of the Tesla Accelerated Computing business unit at Nvidia, tells HPCwire that the Tesla GPUs have very sophisticated mechanisms to keep the GPU from overheating, but the algorithms behind these throttles assume a worst-case scenario, even when the GPU is not actually drawing that much power or generating that much heat. Unlike Turbo Boost on X86 server processors, which only lets one core accelerate to a much higher speed when the other cores are relatively idle, the GPU Boost feature ramps up the clocks on all of the CUDA cores at once. And while Turbo Boost is automatic, GPU Boost has to be invoked explicitly, which is deliberate. “In a cluster, you need uniform performance across the nodes, and doing it this way is better than having each GPU invoke boosting itself,” says Gupta.
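In practice, the boost levels are exposed as application clocks that an administrator selects explicitly, typically with the nvidia-smi utility that ships with the driver. A minimal sketch of the sequence follows; the 745, 810, and 875 MHz graphics clocks are the K40’s published levels, but the 3,004 MHz memory clock pairing and exact flag behavior should be verified against your driver version:

    nvidia-smi -q -d SUPPORTED_CLOCKS   # list the memory/graphics clock pairs the GPU supports
    nvidia-smi -ac 3004,875             # set application clocks: 3,004 MHz memory, 875 MHz graphics
    nvidia-smi -rac                     # reset application clocks to the defaults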

It is not clear how much of the performance boost with the Tesla K40 card is due to the doubling of the GDDR5 memory. But what is clear is that this expanded memory makes the Tesla K40 more applicable to certain workloads than its predecessors. Larger memories are needed for fluid dynamics, seismic analysis, and rendering workloads, just to name three.

“The datasets on some applications are so large that we have actually been limited in many ways in some markets because of the size of the memory,” explains Gupta. “This opens us up to most of the market now.”

So how much extra performance does the K40 provide compared to the K20X? The answer, as usual, is that it depends on the code. But here are some results from tests that Nvidia ran on popular applications to give you an idea:

[Chart: relative application performance of the Tesla K20X versus the Tesla K40, with and without GPU Boost]

The incremental gains in moving from the K20X to the K40 are what you would expect from the feeds and speeds above, but what is immediately obvious is that GPU Boost really gooses application performance: anywhere from 20 to 40 percent over the baseline K20X GPU, according to that chart.

Nvidia is shipping the Tesla K40 GPU coprocessors now, and expects its server partners to embed them inside their machines in the coming months. ASUS, Bull, Cray, Dell, Eurotech, Hewlett-Packard, IBM, Inspur, SGI, Sugon, Supermicro, and Tyan are all planning to use the K40 in their systems, and the zippy Tesla cards will also be available through Nvidia resellers. Nvidia does not provide pricing for any of its Tesla coprocessors because it does not sell them directly to consumers.

A number of supercomputer facilities are already getting their hands on the new Tesla K40 cards, including CSC Finland, the Texas Advanced Computing Center, CEA France, and Swinburne University of Technology. Gupta says that TACC will be deploying the K40 coprocessors in its “Maverick” visualization and data analytics system and expects to have it operational by January of next year.
