October 06, 2006
Last Friday's announcement of ATI's intent to build a "stream processing ecosystem" was the last piece in the why-AMD-bought-ATI puzzle. Though AMD's initial plans for ATI's graphics processing units (GPUs) may be for the desktop/laptop segment, the company appears to view the GPU as a fundamental technology across all of its markets: desktop, mobile, enterprise server and high performance computing. For all platforms, the strategy is to use the GPU to do what the CPU cannot -- data-level parallelism (DLP).
Sharing some of the characteristics of proprietary vector processors, graphics engines can process data arrays much more efficiently than a standard microprocessor. Using GPUs, DLP-friendly workloads can achieve performance boosts on the order of 10X to 50X when compared to a CPU. Applications that can take advantage of this type of parallelism include seismic modeling, financial risk assessment, protein folding, climate modeling, "physics processing" and image/speech recognition -- virtually any high performance computing workload.
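To make the idea concrete, here is a minimal sketch (illustrative only, not vendor code) of what data-level parallelism means: the same operation is applied independently to every element of an array, so a GPU or vector unit can work on many elements at once, while a scalar CPU walks through them one at a time.

```python
def saxpy_scalar(a, x, y):
    """Scalar CPU view of a classic data-parallel kernel (y = a*x + y):
    one multiply-add per loop iteration, elements processed in sequence."""
    return [a * xi + yi for xi, yi in zip(x, y)]

# On a GPU, each index i would be handled by its own pipeline/thread,
# so the whole array finishes in roughly the time of a single element --
# which is where the 10X-50X speedups for DLP-friendly codes come from.
result = saxpy_scalar(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
# result == [12.0, 24.0, 36.0]
```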
What has pushed GPUs into the limelight now? Many (certainly AMD) believe that the graphics processing unit has finally grown up. The hardware has become both more powerful and more general-purpose (supporting both MIMD and SIMD pipelines); floating-point precision has been enhanced toward IEEE compliance; and high-level languages that target GPUs are starting to emerge (Cg: C for Graphics), allowing for easier programming. In addition, the widening scope of data-intensive applications has created an opening for more data-centric architectures.
In a 2004 paper titled "GPU Cluster for High Performance Computing," the authors state:
"Driven by the game industry, GPU performance has approximately doubled every 6 months since the mid-1990s, which is much faster than the growth rate of CPU performance that doubles every 18 months on average (Moore's law), and this trend is expected to continue. This is made possible by the explicit parallelism exposed in the graphics hardware. As the semiconductor fabrication technology advances, GPUs can use additional transistors much more efficiently for computation than CPUs by increasing the number of pipelines."
At last Friday's announcement, ATI CEO David Orton, without revealing a specific roadmap, suggested that the company's graphics engine architecture would be further enhanced to benefit both traditional graphics workloads and general stream processing. ATI's current GPUs achieve about a third of a teraflop; the next generation is expected to reach half a teraflop.
High energy consumption is a drawback. The ATI X1900 XT, the device currently being used for some stream processing demos, tops out at over 100 watts. That's not a big problem for desktops or non-mobile game machines, but if you want to deploy hundreds or thousands of them in a supercomputer, that level of power usage is sure to be a concern.
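A quick aggregate-power calculation shows why. Taking the roughly 100 watts per card cited above as the working assumption (and ignoring host systems and cooling overhead, which would push the totals higher):

```python
def cluster_power_kw(n_gpus, watts_per_gpu=100):
    """Aggregate GPU power draw in kilowatts, assuming ~100 W per
    X1900 XT-class card (the figure cited above)."""
    return n_gpus * watts_per_gpu / 1000

for n in (100, 1000, 10000):
    print(f"{n:>6} GPUs -> {cluster_power_kw(n):,.0f} kW for the cards alone")
# A thousand such cards already draw 100 kW before counting
# the host nodes or the cooling needed to remove that heat.
```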
In the near term, ATI plans to use coherent HyperTransport so that its chips can take advantage of AMD's native interconnect. In a couple of years (at 45nm process technology), ATI GPUs may end up on the same die as AMD CPUs, perhaps creating a Cell-processor-like device -- but with the advantage of a commodity software base. AMD has hinted that it might eventually make sense to transfer some silicon between the CPU and the GPU to optimize each unit's functionality; jettisoning the SIMD 3DNow! instructions on the AMD processors comes to mind.
If GPUs are destined to achieve parity with CPUs, it will be interesting to see what happens with Nvidia and Intel. Being late to the GPU party could have devastating effects for the procrastinators, since building a software base for your graphics engine will be critical in establishing product momentum. So far Intel has not made a move, but as I write this, rumors of Intel acquiring Nvidia are circulating around the Web. Stay tuned ...
Ten Petaflops or Bust
This week's feature article comes from Herbert Wenk, a new contributing author for HPCwire. During a recent scientific conference at NEC's research facility in Germany, Wenk was able to gather information on Japan's plans for a ten petaflop system. Dr. Mitsuyasu Hanamura, who heads the applications software group within the RIKEN Next-Generation Supercomputer R&D Center, took part in a press briefing organized by the NEC Europe Computing & Communication Research Lab in St. Augustin, Germany. According to Wenk, Dr. Hanamura believes a heterogeneous architecture can meet Japan's ten petaflop goal by the end of 2011.
The developing controversy between interconnect models -- RDMA versus Send/Receive -- is being played out here at HPCwire. The original RDMA critique from Patrick Geoffray at Myricom generated a rebuttal by Renato Recio, chief engineer at IBM eSystem Networks.
This week, Gilad Shainer at Mellanox Technologies weighs in with the view that you don't have to exclusively choose between RDMA or Socket Send/Receive; you can use either one depending on what's best for your application. See his "Why Compromise?" article in this week's issue.
Christopher Aycock of Oxford University counters that the main trouble with the RDMA model of communication is its memory registration requirements, which drag down the performance of most applications on commodity networks like InfiniBand. Read the "Why Pretend?" counter-rebuttal to get his take on the problems with RDMA.
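The registration argument comes down to a fixed per-transfer cost. Here is a simple illustrative cost model (the numbers below are assumptions chosen for illustration, not measurements, and this is not InfiniBand verbs code): when a buffer must be registered before each transfer, that fixed overhead dwarfs the wire time for small messages.

```python
def transfer_time_us(n_bytes, bw_gbps, reg_overhead_us):
    """Total transfer time in microseconds: a fixed memory-registration
    cost plus the time the payload spends on the wire."""
    wire_us = n_bytes * 8 / (bw_gbps * 1e3)  # bits / (Gbit/s) -> microseconds
    return reg_overhead_us + wire_us

# Assumed 10 Gbps link with a 50 us registration cost per transfer:
small = transfer_time_us(4 * 1024, 10, 50)         # 4 KB message
large = transfer_time_us(4 * 1024 * 1024, 10, 50)  # 4 MB message
# For the 4 KB message, registration is ~94% of the total time;
# for the 4 MB message, it is noise. Since most application traffic
# is small messages, the fixed cost dominates in practice.
```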
Gelato ICE Anyone?
Linux-on-Itanium enthusiasts got a dose of their favorite platform this past week in Singapore at the Gelato Itanium Conference and Expo. Before heading off to the equatorial event, we got the chance to interview a couple of the conference presenters: SGI's Steve Neuner and Intel's Cameron McNairy. In Neuner's interview, we ask how Linux was modified to enable a single system image to run on 1024 processors on an SGI Altix machine (in the lab, they're now up to 1742 processors). In our other interview, McNairy, who is the principal engineer and an Intel architect for the Montecito program, talks about Itanium's role in high performance computing. The money quote from his interview:
"Hardware is certainly easier to change than software..."
Oh how conventional wisdom does change!
Whew! I think that's everything on my list this week. If I missed something, I'll just say I'm following John West's suggestion. In this week's High Performance Careers column, he actually makes a case for NOT doing everything on your list. Great advice for the over-burdened technology worker ... or editor.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - October 05, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.