AMD Refreshes Roadmap, Transitions Back to HPC

By Tiffany Trader

May 7, 2015

AMD revealed key elements of its multi-year strategy as part of its 2015 Financial Analyst Day event in New York on Wednesday. Out of the gate, CEO Lisa Su acknowledged the company’s recent challenges, pointing to a weak PC market and market share losses, before turning her attention to the game plan that AMD is counting on to turn its earnings statement from red to green. It’s a game plan that has AMD returning to the high-end server space as it seeks to diversify its revenue base and grow into new markets.

“We are focused on areas that require high performance compute, high performance graphics, visualization technologies, and complex system on chips,” said Su, who kicked off the proceedings, “those are the areas that are uniquely suited to AMD…and we think this represents about a $60+ billion TAM.”

[Slide: AMD FAD 2015 three-year game plan]

“Datacenter is probably the single biggest bet that we are making as a company,” she declared. “We have not been competitive the last few years, we will be competitive in the datacenter market.”

Su also spoke about the decision to exit the SeaMicro dense server system business line “for one because microservers were not growing as fast as originally thought and two we really aren’t a systems company, however on the silicon side, very very clearly we are an x86 company, we have tremendous x86 heritage and are absolutely going to invest in high-performance x86.”

With regard to technology, Su said that AMD portfolio decisions will be focused on high-performance cores, immersive technologies, 2.5/3D packaging and software/APIs. This will align with increased x86 investment, focused ARM investment and a simplified CPU roadmap.

Su also offered an appeasement for those wondering why AMD didn’t make these changes sooner.

“On the platform side, to those of you that ask what have you guys been doing for a couple of years. The truth is, it takes a while to really transform both the R&D capabilities, the technologies and the modularity,” she said.

And now…Getting Zen with simplified roadmaps

The highlight of AMD’s revamped technology roadmap is a brand new x86 processor core codenamed “Zen,” touting an improvement in instructions per clock of up to 40 percent over “Excavator” cores. Absent from the lineup, however, is the Skybridge project. Announced last year, the plan to join x86 and ARM together on a common platform was dropped, according to Su, due to customer feedback indicating a desire for x86 and ARM, but not necessarily in socket-compatible form factors.

[Slide: AMD FAD 2015 “Zen” core]

Mark Papermaster, AMD chief technology officer and senior vice president, addressed AMD’s x86 positioning and laid out some of Zen’s specs in anticipation of its 2016 debut. AMD is counting on its new Zen core to drive its re-entry into high-performance desktop and server markets and put it back on a competitive track against arch rival Intel.

“It’s got high-throughput, very efficient design, and a new cache and memory subsystem design to feed this core,” he said, the throughput claim a nod to Zen’s support for simultaneous multithreading (SMT). The performance is the result of doubling down on the previous-generation core, Excavator, which is due out this summer, said Papermaster.

“This wasn’t one silver bullet,” Papermaster continued, “but a number of elements combining to drive the microarchitecture improvement and deliver what I’ve not seen in the industry before, a 40 percent improvement in instructions per clock.”

It’s the core design for the workload of the future and it’s available next year, he added. It’s also a commitment to sustainable innovation, according to Papermaster, who says the company has leapfrogging design teams and is already working on the successor to Zen as it works to establish a family of cores over time.
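
To put the IPC claim in concrete terms, here is a quick back-of-envelope sketch of what a 40 percent IPC gain means for runtime at a fixed frequency. Every figure in it (instruction count, clock speed, baseline IPC) is hypothetical, chosen only for illustration, not an AMD-published number:

```python
# Illustrative only: what a 40% IPC uplift means at a fixed clock speed.
# The instruction count, clock, and baseline IPC below are hypothetical,
# not AMD-published figures.

def runtime_seconds(instructions, ipc, clock_hz):
    """Time to retire a fixed instruction stream: instructions / (IPC * clock)."""
    return instructions / (ipc * clock_hz)

instructions = 1e12           # hypothetical workload: 1 trillion instructions
clock_hz = 3.0e9              # hypothetical 3 GHz clock, same for both cores
baseline_ipc = 1.0            # hypothetical "Excavator-class" baseline
zen_ipc = baseline_ipc * 1.4  # the claimed 40 percent IPC improvement

t_old = runtime_seconds(instructions, baseline_ipc, clock_hz)
t_new = runtime_seconds(instructions, zen_ipc, clock_hz)

print(f"baseline runtime: {t_old:.1f} s")
print(f"Zen-class runtime: {t_new:.1f} s")
print(f"speedup: {t_old / t_new:.2f}x, runtime cut by {(1 - t_new / t_old):.0%}")
```

At the same clock, the uplift works out to a 1.4x throughput gain, or roughly 29 percent less time to retire the same instruction stream.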

Papermaster also revealed that AMD’s first custom 64-bit ARM core, “K12,” is on track to sample in 2017. These enterprise-class ARM cores are designed for efficiency and are intended for server and embedded workloads.

[Slide: AMD FAD 2015 CPU and graphics core roadmap detail]

On the graphics side, AMD is preparing to launch its high-performance graphics processing unit (GPU) with die-stacked High Bandwidth Memory (HBM) using a 2.5D silicon interposer design. This core is optimized for graphics and parallel compute and includes a number of other enhancements (depicted in the slide below). AMD reported that future generations of its high-performance GPUs will be based on FinFET process technology, which will contribute to a doubling of performance-per-watt.

[Slide: AMD FAD 2015 graphics leadership]

These three essential chip technologies will be the building blocks of AMD’s Enterprise, Embedded and Semi-Custom Business Group (EESC). A new group launched in 2014 as part of AMD’s business unit reorganization, EESC is focused on high-priority markets that will leverage high-performance CPU and GPU cores inside differentiated solutions.

Forrest Norrod, senior vice president and general manager of the business group, referred to the EESC segment as “a principal driver of growth for the last few years and one we think is central to the growth story of AMD going forward.”

Norrod added that these three businesses (enterprise, embedded and semi-custom) share a perspective around the best way to showcase AMD technology.

“In all of these businesses our customers are building products around the technology ingredients that we give to them and bringing differentiated solutions to the end customer that leverage AMD unique IP,” he stated.

“So we really think now of EESC as being a continuum leveraging technology, customer relationships and the modular design approach at both the chip as well as the systems level.”

Norrod went on to share in broad strokes AMD’s datacenter roadmap for the 2016-2017 timeframe, which includes its next-gen x86 Opteron, next-gen ARM, and an APU that we will be tracking closely.

[Slide: AMD FAD 2015 EESC roadmap]

The upcoming next-generation AMD Opteron processors are based on the x86 “Zen” core and target mainstream servers. These x86 Opterons tout high core counts with full multi-threading, disruptive memory bandwidth and high native I/O capacity. Norrod also introduced “the highest performance ARM server CPUs,” powered by AMD’s upcoming “K12” core.

Most relevant for HPC, though, is the new high-performance server APU, a multi-teraflops chip targeting HPC and workstation markets.

“We’re bringing the APU concept fully into the server realm,” Norrod stated. “These are high performance server APUs offering not just high-performance CPU cores and memory but multi-teraflops GPU-capability, providing a level of performance for machine learning, a level of performance for finite element analysis, and a level of performance for memory bandwidth for reverse time migration algorithms that the oil companies use to do reservoir simulation.”

The APU (accelerated processing unit) line is an outcome of the Fusion project, which started in 2006 with AMD’s acquisition of the graphics chipset manufacturer ATI. AMD has talked up the potential benefits of tight CPU-GPU integration for HPC workloads in the past, but until now AMD’s APU efforts have primarily been relegated to the desktop space.

In January 2012, AMD rebranded the Fusion platform as the Heterogeneous Systems Architecture (HSA) and has in recent months begun championing the CPU+accelerator architecture for a wide range of workloads, including HPC.

AMD says the next-gen server APU stands to deliver massive improvements to vector applications with scale-up graphics performance, HSA enablement, and optimized memory architecture. “We think we’ve got unique and compelling technology that is only possible by wedding together the CPU and world-class GPU and combining them with an open standard HSA software interface,” said Norrod.
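
The case for the server APU is easiest to see with a rough model of the CPU-to-GPU data handoff. The sketch below contrasts a discrete accelerator that must stage its working set over PCIe with a unified-memory design that does not; every bandwidth, data size, and kernel time in it is an assumption for illustration, not an AMD figure:

```python
# Rough model of why tight CPU-GPU integration can matter: an illustrative
# comparison of a discrete GPU that must copy its working set over PCIe
# versus an APU/HSA-style design where CPU and GPU share the same memory.
# All figures below are assumptions for illustration, not AMD specifications.

working_set_gb = 8.0   # hypothetical data handed from CPU code to GPU code
pcie_gbps = 16.0       # assumed ~PCIe 3.0 x16 effective bandwidth, GB/s
kernel_time_s = 0.5    # assumed GPU compute time once the data is resident

copy_time_s = working_set_gb / pcie_gbps   # discrete GPU pays this per handoff
discrete_total = copy_time_s + kernel_time_s
apu_total = kernel_time_s                  # shared memory: no staging copy

print(f"discrete GPU: {copy_time_s:.2f} s copy + {kernel_time_s:.2f} s compute "
      f"= {discrete_total:.2f} s")
print(f"unified-memory APU: {apu_total:.2f} s")
print(f"overhead avoided: {copy_time_s / discrete_total:.0%} of total time")
```

In practice, discrete-GPU transfers can be partly hidden with overlap and pinned memory, so this is only the simplest form of the zero-copy argument, but it captures why AMD keeps pointing at HSA’s shared memory model for data-heavy workloads.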

What’s not clear at this point is whether the APU’s multi-teraflops will be of the half-, single-, or double-precision variety, and the workloads that Norrod lists are a mixed bag in that respect (FEA and machine learning, for example). Of course, there is no reason AMD can’t launch variants for each, but it would be hard to claim HPC cred without an FP64-heavy version.
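
For readers who want to see why the precision mix matters, the arithmetic behind peak-FLOPS claims is straightforward. The parameters in the sketch below are entirely hypothetical, picked to keep the math simple rather than to mirror any AMD part:

```python
# Why the precision question matters: theoretical peak FLOPS for a GPU is
# roughly (compute units) x (SIMD lanes per unit) x 2 (fused multiply-add)
# x clock, and the FP64 rate is typically a fixed fraction of the FP32 rate.
# Every parameter below is hypothetical, not a leaked AMD spec.

compute_units = 64      # hypothetical
lanes_per_cu = 64       # hypothetical SIMD lanes per compute unit
clock_ghz = 1.0         # hypothetical engine clock
fp64_ratio = 1.0 / 2.0  # assumed FP64:FP32 rate; consumer parts are often far lower

peak_fp32_tflops = compute_units * lanes_per_cu * 2 * clock_ghz / 1000.0
peak_fp64_tflops = peak_fp32_tflops * fp64_ratio

print(f"peak FP32: {peak_fp32_tflops:.1f} TFLOPS")
print(f"peak FP64: {peak_fp64_tflops:.1f} TFLOPS")
# The same silicon can be "multi-teraflops" in single precision yet far less
# compelling for FP64-heavy codes like FEA, which is exactly the ambiguity
# flagged above.
```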
