GPGPU Finds Its Groove in HPC

By Michael Feldman

September 21, 2010

The NVIDIA GPU Technology Conference (GTC) kicked off on Tuesday amid a flurry of news that suggests the GPGPU HPC business is quickly moving into the mainstream. Just four years after the introduction of commercial-grade GPU computing, the technology has become firmly established and is poised to spill out across every application domain that has a need for data-parallel computing.

At this stage, GPU computing technology is most visible in the high performance computing arena. Nearly all the major and minor OEMs that serve this market have announced NVIDIA GPU-equipped systems, including IBM, Cray, HP, SGI, Dell, Appro, T-Platforms, Bull, Supermicro and Tyan, among others. NVIDIA, which used to offer its own standalone Tesla GPU 1U box (the S-series products), has exited the server business, apparently passing that task off to server maker NextIO. As of today, NVIDIA provides only Tesla cards (C-series) and modules (M-series) to the market.

Actually, that’s not quite accurate. One new Tesla product that was indirectly announced this week is the X2070, an M-series variant designed specifically for extra-dense blade form factors. The new module takes up less than half the real estate of the M2070 board and, like its predecessor, has PCIe connectivity and uses a passive heat sink for cooling. The X2070 uses the same graphics chip as the M2070, so it has the same performance characteristics (515 DP gigaflops) and memory capacity (6 GB GDDR5). NVIDIA has made no formal announcement of the X2070. The only reason we know about it at all is that Cray and T-Platforms this week announced future blades based on the new Tesla.

Cray will add the X2070 as an option on its XE6 (“Baker”) supercomputer line. “This is something we feel is mature enough to be in a scalable production supercomputer system,” said Barry Bolding, vice president of Cray’s products division. At this point, the company is not releasing any information about the new blade design or even the availability date for the new offering, although Bolding did say that they’re aligning their shipping dates very closely with the release of the X2070. In other words, they’ll be ready when NVIDIA comes through with the hardware.

Russian HPC cluster vendor T-Platforms had a lot more to say about its upcoming Tesla X2070-based blade, which it is calling the TB2-TL. Known for designing extra-dense blades, T-Platforms has managed to stuff 16 blades, consisting of 32 X2070 GPUs and 32 Intel Xeon CPUs (low-voltage L5600 “Westmere” processors), into a 7U chassis. To maximize bandwidth, each X2070 is routed through an Intel 5520 North Bridge chip and has a dedicated single-port QDR InfiniBand chip. A single enclosure delivers 17.5 peak teraflops. Like the Cray XE6, the TB2-TL is aimed at large clusters and petascale supercomputers.

According to Alexey Nechuyatov, director of product marketing for T-Platforms, the company is looking into the possibility of offering the TB2-TL in the US, most likely through a system integrator. Despite the presence of established US-based vendors with GPU-equipped blades, like Cray, IBM, and Dell, Nechuyatov believes the unique design of the company’s new GPU offering (not to mention an aggressive price point of around $300K per enclosure) could find an audience in the States. “We might be outnumbered,” he said, “but never outgunned.” T-Platforms is planning to make the TB2-TL available for the Russian market in Q4 2010, and for Europe in Q1 2011.

Adding to the GPU blade rush is IBM, which will be adding Tesla M2070 GPUs to its popular BladeCenter line. NVIDIA is especially happy to have IBM sign on for another Tesla-based product; IBM added the Tesla-equipped iDataPlex dx360 M3 back in May. That product paired two Intel CPUs with two Tesla M2050 GPUs in a rackmount server. The new BladeCenter variant uses the HS22 as the base blade, to which up to four M2070 expansion blades can be added. In its maximum configuration, up to 7 GPUs can be placed in a 7U enclosure. It is expected to be available in Q4 2010.

On the software side, the developer community seems to be as enamored with GPU acceleration as the OEMs. NVIDIA estimates there are about 100,000 active NVIDIA GPU developers today, up from a standing start in 2007. Much of this activity is directed at HPC codes. Whether it’s in astrophysics, molecular dynamics, bioinformatics, or climate modeling, the technology’s impact in those communities continues to grow. Developers in these areas, and others, are porting their existing CPU-based codes or doing ground-up application development specifically targeting GPU platforms.

In climate and weather modeling, in particular, a range of models is being targeted or retargeted to GPU platforms via CUDA. They include the Weather Research and Forecasting (WRF) model being developed at NCAR and elsewhere; the ASUCA weather model developed by Tokyo Tech and the Japan Meteorological Agency; and the Non-hydrostatic Icosahedral Model (NIM) at NOAA. There are also major efforts for tsunami simulations, CO2 modeling, and ocean circulation codes being conducted on GPU platforms.

The CUDA development tools have been the key enabler for the whole ecosystem. Thanks to NVIDIA’s early dominance in GPGPU, CUDA C/C++ has emerged as the most widely used GPU programming environment for developers. There’s even talk now of targeting CUDA to CPUs, given that the language is inherently suited to multicore and manycore architectures. “To some extent, CUDA is becoming the most widely used parallel programming model,” said Sumit Gupta, senior product manager with NVIDIA’s Tesla GPU Computing Group. “So if a university wants to teach parallel programming, they often end up doing GPU programming.”
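For readers who haven’t seen the programming model, here is a minimal, purely illustrative CUDA C sketch: a generic SAXPY kernel, not taken from any of the codes mentioned in this article, showing how a data-parallel loop maps onto GPU threads, with each thread handling one array element.

```
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// SAXPY kernel: each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the last partial block
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host arrays
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device arrays
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);  // copy back (synchronizes)
    printf("y[0] = %f\n", hy[0]);                       // expect 5.000000

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Compiled with nvcc, the same source runs unchanged on any CUDA-capable GPU, which is a large part of the model’s appeal for the HPC codes mentioned above.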

Today, there are a number of attempts to create CPU ports of CUDA. There are two academic projects: one out of the University of Illinois at Urbana-Champaign called MCUDA, and another out of Georgia Tech called Ocelot. Now The Portland Group (aka PGI) has stepped up with a commercial CUDA CPU compiler. At GTC this week, PGI announced its intention to offer a CUDA C for x86 development platform, which it hopes to demonstrate at SC10 in November.
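Conceptually, these projects replace the implicit GPU thread grid with explicit loops over block and thread indices, which can then be distributed across CPU cores. The sketch below, a hand translation of the SAXPY kernel shown earlier into plain C, is purely illustrative and assumes nothing about how MCUDA, Ocelot, or PGI’s compiler actually generates code.

```
/* Illustrative only: a hand translation of the earlier SAXPY kernel,
 * showing the principle behind CUDA-to-CPU tools. This is not the actual
 * output of MCUDA, Ocelot, or PGI's CUDA C for x86 compiler. */
#include <stdio.h>
#include <stdlib.h>

/* The kernel body, parameterized by its position in the (1-D) grid. */
static void saxpy_body(int blockIdxX, int blockDimX, int threadIdxX,
                       int n, float a, const float *x, float *y)
{
    int i = blockIdxX * blockDimX + threadIdxX;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

/* The CPU "launch": what the GPU runs as thousands of hardware threads
 * becomes nested loops, which a translator can hand to OpenMP or pthreads. */
static void saxpy_cpu(int gridDimX, int blockDimX,
                      int n, float a, const float *x, float *y)
{
    for (int b = 0; b < gridDimX; ++b)        /* one iteration per block  */
        for (int t = 0; t < blockDimX; ++t)   /* one iteration per thread */
            saxpy_body(b, blockDimX, t, n, a, x, y);
}

int main(void)
{
    const int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_cpu(blocks, threads, n, 3.0f, x, y);

    printf("y[0] = %f\n", y[0]);   /* expect 5.000000 */
    free(x); free(y);
    return 0;
}
```

In practice, the hard part is handling features like shared memory and __syncthreads() efficiently on a CPU, which is presumably where most of the compiler work in these projects lies.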

If the effort is successful, developers will be able to write CUDA applications that run on either GPUs or CPUs. This, of course, was the whole idea behind OpenCL, the open standard language for multicore/manycore architectures. But since NVIDIA publishes the CUDA APIs, for all practical purposes it too is an open standard. Anyone — including AMD, by the way — could create a CUDA port for any processor with parallel hardware features. NVIDIA officially maintains it is agnostic regarding what people use to program their hardware, but the company’s enthusiasm for its home-grown CUDA software is abundantly clear.

CPU support aside, the GPGPU ISV community continues to gain momentum, as is evident if you peruse the exhibit hall and session list at GTC. Besides scientific computing, the technology has also expanded into business intelligence (Jedox Palo, Empulse Parstream and Milabra Display Ads), factory automation (Dalsa and MvTech), electronic design automation (Rocketick and Agilent), and ray tracing/rendering (Autodesk 3ds Max, Bunkspeed and Lightworks).

On Tuesday, ANSYS announced it had implemented GPU acceleration for ANSYS Mechanical, a widely used software package for industrial design. With the GPU, the company has realized a 2X speedup over its CPU-only implementation. That’s a fairly modest gain compared to the 10X to 500X speedups some people claim for more science-heavy codes. But for industrial design, cutting simulation times in half is a big deal.

NVIDIA itself is using Agilent software for chip design, running the app on a small in-house GPU cluster. The company is also evaluating the GPU-accelerated Rocketick chip verification tool, and early results look promising, according to NVIDIA’s Gupta. “We also use ANSYS Mechanical for our designs, and we’ll definitely use the GPU version of that,” he said. “So we’re eating our own dog food.”
