AMD Fusion Developer Summit Fuels HPC Conversations

By Nicole Hemsoth

June 16, 2011

The AMD Fusion Developer Summit came to a close this afternoon following a three-day run in Bellevue, Washington. With over 700 developers in attendance and a surprisingly large selection of sessions with direct appeal to the HPC crowd, the event offered a broad range of use cases and academic arguments supporting the idea that OpenCL (and, of course, GPU computing) is set to play a role in the future of high performance computing and beyond.

While there were plenty of opportunities to explore the graphics eye candy and more general-purpose uses of the Fusion APU, most of this reporter’s time was spent delving into sub-topics in the HPC category, including crash courses on GPUs in the context of Hadoop, Mathematica, and of course, OpenCL for large enterprise and research.

Below are a few noteworthy video clips and other featured items collected during the event. If you haven’t seen it, check out the keynote ARM delivered on AMD’s stage. An ARM executive was an interesting choice of speaker, but he managed to draw the connection through the two companies’ shared emphasis on open standards, energy efficiency and, of course, heterogeneous computing.

First, we’ll let AMD’s Margaret Lewis tell us a little bit about where OpenCL stands for the HPC community in contrast to CUDA. She says the openness is part of what should make OpenCL an attractive programming model for HPC shops, especially as they tend to use a range of architectures. As she told us, “AMD sees the maturation of OpenCL as a capability not only to download to the GPU but to start utilizing the CPU and GPU as complementary computing engines. Really, that’s what heterogeneous computing is about.”
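
Lewis’s point about complementary engines is easy to see at the API level: an OpenCL host program can enumerate the CPU and the GPU from the same platform and dispatch work to either. The sketch below is a hypothetical minimal example, not AMD sample code; it assumes an OpenCL 1.1 SDK (such as the AMD APP SDK of the era) and trims error handling for brevity.

```c
/* Minimal sketch: enumerate CPU and GPU devices from one OpenCL
 * platform, illustrating the "complementary computing engines" idea.
 * Hypothetical example code; error handling abbreviated. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_uint num_platforms;
    clGetPlatformIDs(1, &platform, &num_platforms);

    /* Query both device classes from the same platform. */
    cl_device_type types[] = { CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU };
    const char *labels[]   = { "CPU", "GPU" };

    for (int i = 0; i < 2; ++i) {
        cl_device_id dev;
        cl_uint count = 0;
        if (clGetDeviceIDs(platform, types[i], 1, &dev, &count) != CL_SUCCESS
            || count == 0) {
            printf("%s: no device found\n", labels[i]);
            continue;
        }
        char name[256];
        cl_uint units;
        clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof name, name, NULL);
        clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof units, &units, NULL);
        printf("%s: %s (%u compute units)\n", labels[i], name, units);
        /* A context and queue per device would let the host split work
         * between them: latency-sensitive stages on the CPU,
         * data-parallel stages on the GPU. */
    }
    return 0;
}
```

Because the same kernel source can be compiled for either device type, the host is free to rebalance work between CPU and GPU, which is the essence of the heterogeneous model Lewis describes.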

We also hit on the concept of the scale-out cluster and the role of manycore, and took a broader look at the upcoming Bulldozer architecture.

AMD Corporate Fellow Charles Moore presented a short session on Fusion Processors for HPC, noting that while it was not a product announcement, we can expect to see Fusion APUs with very high single-precision and double-precision performance in the future. Moore stepped back into history to trace the events leading up to the heterogeneous compute era. He claims that for the first time, “the GPU is now a first-class citizen; at the same level as the CPU.” In addition to talking about the role of HPC in saving economies and boosting healthcare, education and other areas, he spent considerable time on the challenges inherent to the pending exascale era. In Moore’s view, reaching exascale will require a 10x efficiency improvement, but he claims AMD is on a trajectory to intersect exascale requirements by 2018-2020.

Dylan Roeh and Abdul Dakkak are kernel developers at Wolfram Research. Dakkak leads efforts to exploit GPU capabilities in Mathematica 8, and Roeh was one of the developers behind the recently added OpenCL support in Mathematica. The two presented a session called “Heterogeneous Computing for Finance Using Mathematica and OpenCL,” in which they discussed how the addition of GPU support has opened new possibilities within the Mathematica language. They looked at ways OpenCL can be applied to pricing a variety of financial derivatives from inside Mathematica, focusing on the ease of use provided by the OpenCL/Mathematica combination and highlighting the performance advantages of GPU computing for these applications in general.
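
To give a flavor of what such a kernel can look like, here is an illustrative OpenCL kernel (a hypothetical sketch, not Wolfram’s actual code) that prices European call options with the Black-Scholes formula, one option per work-item, using the Abramowitz-Stegun polynomial approximation of the cumulative normal distribution. In Mathematica 8, a kernel string along these lines can be loaded with OpenCLFunctionLoad from the OpenCLLink` package, which is much of what makes the combination so approachable.

```c
/* Hypothetical sketch, not Wolfram's code: Black-Scholes pricing of
 * European call options, one option per OpenCL work-item. */

/* Cumulative normal distribution via the Abramowitz-Stegun
 * polynomial approximation. */
float cnd(float x) {
    const float a1 = 0.319381530f, a2 = -0.356563782f,
                a3 = 1.781477937f, a4 = -1.821255978f,
                a5 = 1.330274429f;
    float k = 1.0f / (1.0f + 0.2316419f * fabs(x));
    float w = 1.0f - 0.39894228f * exp(-0.5f * x * x) *
              k * (a1 + k * (a2 + k * (a3 + k * (a4 + k * a5))));
    return (x < 0.0f) ? 1.0f - w : w;
}

__kernel void black_scholes_call(__global const float *spot,
                                 __global const float *strike,
                                 __global const float *expiry,
                                 __global float *price,
                                 const float rate,
                                 const float volatility) {
    int i = get_global_id(0);
    float sqrt_t = sqrt(expiry[i]);
    float d1 = (log(spot[i] / strike[i]) +
                (rate + 0.5f * volatility * volatility) * expiry[i]) /
               (volatility * sqrt_t);
    float d2 = d1 - volatility * sqrt_t;
    /* Call price: S*N(d1) - K*exp(-rT)*N(d2). */
    price[i] = spot[i] * cnd(d1) -
               strike[i] * exp(-rate * expiry[i]) * cnd(d2);
}
```

Since every option prices independently, the problem is embarrassingly parallel, which is why workloads like this show the GPU advantages the speakers highlighted.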

I caught up with Abdul and Dylan (who speaks first in the video below) about the value of GPU computing for finance and some of the challenges that hardware vendors unwittingly lay on developers.

Among the other notable items on the agenda at the AMD Fusion Developer Summit were sessions like the one presented by Jim Falgout, Chief Technologist for Pervasive DataRush. Falgout’s session, “Leveraging Multicore Systems for Hadoop and HPC Workloads,” demonstrated how developers can harness multicore servers and clusters, particularly Hadoop clusters, to tackle some of the problems hidden under mounds of big data. Falgout argued that even though developers are being handed the promise of scaled-out hardware, many still wait far longer than they’d like to build and run MapReduce jobs. He claims the root of the problem is software design that leaves cores sitting idle, driving up energy costs and squandering potential productivity. While something of a deep dive into the specifics of Hadoop, the session was revealing in showing how modest changes on the software side, along the lines of the pattern sketched below, can yield substantial and scalable performance gains for Hadoop and HPC workloads.
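
Falgout’s core point, that parallel software must be structured so every core has work, is the same pattern that underlies MapReduce itself. As a toy illustration (a hypothetical C sketch, not DataRush or Hadoop code), the program below “maps” a word count over chunks of text on separate threads and “reduces” the partial results; NTHREADS is an assumption standing in for the machine’s core count.

```c
/* Hypothetical MapReduce-flavored sketch: the "map" counts words in a
 * chunk on its own thread so every core stays busy; the "reduce" sums
 * the partial counts. Real frameworks add scheduling, I/O overlap,
 * and fault tolerance on top of this pattern. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NTHREADS 4  /* assumption: stand-in for the core count */

typedef struct { const char *text; size_t len; long count; } Chunk;

static void *map_count(void *arg) {
    Chunk *c = (Chunk *)arg;
    int in_word = 0;
    for (size_t i = 0; i < c->len; ++i) {
        if (c->text[i] == ' ' || c->text[i] == '\n') in_word = 0;
        else if (!in_word) { in_word = 1; c->count++; }
    }
    return NULL;
}

int main(void) {
    const char *doc = "the quick brown fox jumps over the lazy dog "
                      "pack my box with five dozen liquor jugs";
    size_t len = strlen(doc), step = len / NTHREADS;
    pthread_t tid[NTHREADS];
    Chunk chunk[NTHREADS];

    /* Map phase: one chunk per thread. (Naive splitting can miscount
     * words at chunk boundaries; a real splitter aligns on whitespace.) */
    for (int t = 0; t < NTHREADS; ++t) {
        chunk[t].text  = doc + t * step;
        chunk[t].len   = (t == NTHREADS - 1) ? len - t * step : step;
        chunk[t].count = 0;
        pthread_create(&tid[t], NULL, map_count, &chunk[t]);
    }

    /* Reduce phase: combine the partial counts. */
    long total = 0;
    for (int t = 0; t < NTHREADS; ++t) {
        pthread_join(tid[t], NULL);
        total += chunk[t].count;
    }
    printf("approximate word count: %ld\n", total);
    return 0;
}
```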

While this was not an HPC-geared conference by any means, AMD made sure to stock the agenda with sessions that reeled in some rather sizable crowds. As the OpenCL ecosystem matures over the next year, it will be interesting to see how many more OpenCL use cases emerge in HPC and, furthermore, whether similar uses of CUDA show any decline. Something says a shakeout between the two won’t occur anytime soon, but as AMD touted all week, we shouldn’t underestimate the value proposition of open architectures and open standards.
