AMD Searches for an HPC Strategy

By Michael Feldman

February 15, 2008

Both Intel and NVIDIA have laid out pretty clear strategies for how they’re going to attack the HPC market over the next year or so. Intel will continue to push out their 45nm Xeons, then the Nehalem processors, then Larrabee. NVIDIA will introduce 64-bit floating point support into their Tesla GPU computing line and continue to refine the CUDA programming environment. AMD, however, which is still reeling from a disastrous 2007, has seemed less focused on the high end of the market than ever before. The company’s stated mission to regain profitability in 2008 means it will refocus its energies on the volume market — desktop, laptop, mobile and embedded computing.

With that as a backdrop, I thought this might be a good time to talk with David Rich, director of marketing for HPC at AMD, and get a sense of the company’s strategy for high-end computing over the next year or two.

One of the first things that AMD would like to rectify this year is its visibility in the HPC community — or lack thereof. Rich admitted that they’ve been getting questions from people in the community wondering if they really care about HPC anymore. Not a good thing — especially when Intel is making a big push with its high performance Xeon processors, experimenting with 80-core teraflop processors and on-chip lasers, and just generally dominating the high-end computing conversation. “We recognize that we have not been as visible as we should have been [in the past], so we’re going to make an effort to be present at more HPC-oriented events,” says Rich.

Rich says they intend to make an extra effort to reconnect with HPC users this year, especially at the big conferences like the International Supercomputing Conference (ISC’08) in Dresden and the Supercomputing Expo and Conference (SC08) in Austin, Texas. AMD actually lucked out this year: two of the company’s big fabs are in Dresden, and Austin is a major business operations site. AMD is likely to use the home court advantage to field a larger-than-average contingent at the two biggest HPC events of the year.

One thing AMD still has going for it is the good will it has built up in the high performance computing community over the past few years due to the superior attributes of its Opteron processor. No doubt some of that good will has been eroded by missteps in 2007, especially the failed Barcelona launch that left HPC OEMs looking longingly at Intel parts. Overall, though, since 2005 the Opteron’s better memory performance and scalability have allowed it to shine in the HPC realm compared to its Xeon counterparts. Since most supercomputers, especially the top 10 variety, get planned two to four years in advance, AMD will still be able to ride this momentum at least until the end of the decade.

With the exception of SGI, every major HPC system vendor uses AMD chips today. Most vendors offer both Intel- and AMD-based systems, although Cray is AMD-only, at least until 2010. And despite Sun Microsystems’ embrace of Intel in 2007, Sun’s largest machines, like the TSUBAME supercomputer in Japan and the 500 teraflop system just deployed at TACC, are Opteron-based.

As I’ve written about before, though, the Opteron’s HPC advantage is about a year away from disappearing. In truth, the latest 45nm Xeons with the souped-up front-side bus are already faster than the current 65nm Opterons on a range of technical computing applications. With Intel’s upcoming Nehalem processor family, scheduled to start rolling out in late 2008/early 2009, the company will be adding integrated memory controllers and QuickPath, a HyperTransport-like point-to-point interconnect that will replace Intel’s antiquated front-side bus. At that point, you have to ask how AMD intends to compete at the high end.

In the short term, AMD expects Nehalem to be delivered initially in the 2P flavor for dual-socket systems. In that configuration, Intel will match up very well against the 2P Opterons. If AMD is still a process generation behind its rival, as it is now, Intel will almost certainly have the performance edge. In 2009, AMD plans to implement HyperTransport 3.0 on its processors, which, according to Rich, will allow them to retain a memory bandwidth advantage, at least in 4-socket servers.

“Then we’ll be in a situation that we’re actually already in,” says Rich. “Some applications will perform better on Intel [processors] and some will perform better on ours. People are really going to have to look at their applications to see where they get better performance. It’s going to be a neck and neck race.”

Although most people point to the Opteron as the area where AMD lost the high ground last year, the company’s ATI-derived GPU computing products for HPC also got blind-sided by NVIDIA when it rolled out the Tesla product line and the associated CUDA GPU programming environment. AMD’s 64-bit FireStream stream processor was announced in November 2007 and is expected to go to market sometime this year. Rich says the FireStream hardware is a very competitive product, but admits they have been behind on the software front. According to him, though, they’re catching up quickly. For high-level development, AMD has developed Brook+, a tool that provides C extensions for stream computing on GPU hardware, and which is based on the Brook project at Stanford. Rich notes there are similarities with NVIDIA’s CUDA environment, but stops short of saying that Brook+ will be CUDA compatible. When announced at the end of last year, AMD said the FireStream product would launch in Q1 of 2008, which is the same time frame targeted by NVIDIA for its 64-bit Tesla offering.
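For readers who haven’t seen the Brook model, the sketch below shows what stream programming looks like in the style of the Stanford Brook project that Brook+ builds on. It’s a minimal, illustrative example using the published Stanford syntax — kernels applied element-wise to streams, with streamRead/streamWrite moving data to and from the device — and the exact keywords, host API, and compiler behavior in AMD’s shipping Brook+ toolchain may differ.

```c
// Illustrative Brook-style stream code (Stanford Brook syntax).
// Brook source is C with stream extensions, translated by the Brook
// compiler before regular compilation; AMD's Brook+ specifics may vary.

// A kernel runs once per element of its input streams, in parallel on the GPU.
kernel void saxpy(float alpha, float x<>, float y<>, out float result<>) {
    result = alpha * x + y;
}

int main(void) {
    float a[1024], b[1024], c[1024];      // ordinary host arrays
    float x<1024>, y<1024>, r<1024>;      // streams resident on the GPU

    for (int i = 0; i < 1024; i++) { a[i] = (float)i; b[i] = 1.0f; }

    streamRead(x, a);                     // copy host data into streams
    streamRead(y, b);
    saxpy(2.0f, x, y, r);                 // runtime maps the kernel across the GPU
    streamWrite(r, c);                    // copy the result stream back to the host
    return 0;
}
```

The appeal of the model, and the obvious point of comparison with CUDA, is that the data-parallel loop disappears into the kernel and the runtime handles the data movement; how far AMD takes Brook+ beyond this baseline is part of the software gap Rich says the company is working to close.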

Going head-to-head against Intel and NVIDIA with CPU and GPU offerings, respectively, is a conservative strategy and maybe a problematic one, considering the economies of scale in play in the chip design and manufacturing business. But because AMD now owns both kinds of architectures, it should be able to use that advantage against its much larger rivals. That was certainly part of the rationale behind the ATI-AMD merger in 2006. If AMD ever intends to extract some synergy out of the CPU-GPU combination, now would be a good time to take the pole position. AMD’s CPU-GPU Fusion hybrid processor, now referred to as an accelerated processing unit (APU), is due out in the second half of 2009. But by that date, Intel may have its own version of a CPU-GPU processor.

Layered on top of the CPU, GPU, and APU product set is AMD’s Accelerated Computing strategy, which is intended to create a software stack for a heterogeneous computing environment. This will include elements such as drivers, APIs, compilers, and OS support. According to Rich, a lot of work has been going into this behind the scenes and AMD should be ready to elaborate on the strategy this spring, but it’s clear they intend to build on top of Torrenza, the first phase of AMD’s accelerated computing platform.

The big picture with accelerated computing is to create a system environment where different species of processors (CPUs, GPUs, FPGAs, and custom ASICs) can be brought together to provide a rich set of computational engines for application software. The acceleration effect is the result of mapping the different software components onto the most appropriate hardware. The software stack running on top of the hardware will provide a standard and, presumably, high-level way to access the underlying processors. More than anything, this sounds like a mainstream version of Cray’s Adaptive Supercomputing vision, the supercomputer maker’s strategy to take HPC to the next level.
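To make the mapping idea concrete, here is a deliberately toy C sketch of an application-level “component to device” table. It does not reflect AMD’s Accelerated Computing APIs, which have not been detailed publicly; the device names and components are invented purely to illustrate the concept of steering each piece of software to the hardware that suits it.

```c
/* Toy illustration only -- not AMD's (as yet undisclosed) Accelerated
 * Computing API. It sketches the core idea: each software component is
 * mapped onto the processor type where it runs best. */
#include <stdio.h>

typedef enum { DEV_CPU, DEV_GPU, DEV_FPGA } device_t;

typedef struct {
    const char *component;   /* piece of the application   */
    device_t    target;      /* hardware it is mapped onto */
} mapping_t;

static const mapping_t app_map[] = {
    { "dense matrix math",       DEV_GPU  },  /* regular, data-parallel     */
    { "bit-level stream filter", DEV_FPGA },  /* fixed-function, streaming  */
    { "control and I/O logic",   DEV_CPU  },  /* branchy, latency-sensitive */
};

int main(void) {
    static const char *names[] = { "CPU", "GPU", "FPGA" };
    for (size_t i = 0; i < sizeof app_map / sizeof app_map[0]; i++)
        printf("%-24s -> %s\n", app_map[i].component, names[app_map[i].target]);
    return 0;
}
```

In a real stack, of course, this mapping would be discovered and scheduled by the drivers, compilers, and runtime Rich alludes to, rather than hard-coded by the application.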

If AMD manages to deliver this new paradigm to the mass market, the company will have once again succeeded in making an end-run around its larger rivals. It wouldn’t be the first time.

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at editor@hpcwire.com.
