June 23, 2010
AMD today announced its next-generation FireStream GPU accelerator boards for HPC and other technical computing applications. The FireStream 9350 and 9370 represent the company's attempt to match the pace NVIDIA has set with its "Fermi" Tesla-20 GPU offerings launched this spring. The new PCIe boards from AMD are based on the ATI Radeon HD 5870 processor, which was launched last year for the consumer market.
AMD has kicked up the peak performance on the new top-end FireStream, more than doubling the floating point throughput of the current-generation 9270. The dual-slot 9370 boasts 2.64 teraflops of single precision performance and 528 gigaflops of double precision performance. From a raw FLOPS standpoint, this matches up well against NVIDIA's latest "Fermi" parts, which deliver 1.03 teraflops single precision and 515 gigaflops double precision. The 9370 comes with 4 GB of local GDDR5 memory, while Tesla-20 offers 3 GB or 6 GB (in the M2050 and M2070, respectively). Both top-end AMD and NVIDIA GPU computing devices max out at 225 watts.
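As a rough sanity check, those peak numbers line up with the underlying hardware. Here is a minimal sketch in C, assuming the 9370 retains Cypress's 1,600 stream processors at an 825 MHz engine clock with double precision at one-fifth the single precision rate, and that the Fermi part pairs 448 CUDA cores at 1.15 GHz with double precision at half rate; none of these figures appear in the announcement itself:

    #include <stdio.h>

    /* Peak GFLOPS = ALUs x 2 FLOPs (one fused multiply-add per cycle) x clock in GHz. */
    static double peak_gflops(int alus, double ghz) {
        return alus * 2.0 * ghz;
    }

    int main(void) {
        /* Assumed specs, not from the announcement. */
        double fs9370_sp = peak_gflops(1600, 0.825); /* 2640 GFLOPS = 2.64 TF */
        double fs9370_dp = fs9370_sp / 5.0;          /* 528 GFLOPS */
        double fermi_sp  = peak_gflops(448, 1.15);   /* ~1030 GFLOPS */
        double fermi_dp  = fermi_sp / 2.0;           /* ~515 GFLOPS */

        printf("FireStream 9370:  %.0f SP / %.0f DP GFLOPS\n", fs9370_sp, fs9370_dp);
        printf("Tesla-20 (Fermi): %.0f SP / %.0f DP GFLOPS\n", fermi_sp, fermi_dp);
        return 0;
    }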
The lesser 9350 offering is a single-slot board that delivers 2.0 teraflops of single precision and 400 gigaflops of double precision, with just 2 GB of memory. Since the power consumption on the 9350 is a very modest 150 watts, this would be the clear choice if you wanted to max out density and performance per watt and didn't need more than a couple of gigabytes of local memory.
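Extending the same arithmetic to performance per watt, using only the single precision and power figures quoted above:

    #include <stdio.h>

    int main(void) {
        /* Single precision GFLOPS divided by board power in watts. */
        printf("FireStream 9350: %.1f GFLOPS/W\n", 2000.0 / 150.0); /* ~13.3 */
        printf("FireStream 9370: %.1f GFLOPS/W\n", 2640.0 / 225.0); /* ~11.7 */
        printf("Tesla-20:        %.1f GFLOPS/W\n", 1030.0 / 225.0); /* ~4.6  */
        return 0;
    }

On raw single precision throughput per watt, the 9350 leads the pack, though as noted below, FLOPS alone don't tell the whole HPC story.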
Setting aside the impressive performance and power metrics, the newest FireStream products are still missing some key features for HPC apps, not the least of which is error-correcting code (ECC) memory, which NVIDIA has incorporated into its latest-generation Tesla-20 products. Dropping a few bits here and there is fine for traditional graphics processing, and even for some technical computing codes, but for many HPC applications, ECC is a hard requirement.
NVIDIA also added L1 and L2 caches to its Fermi architecture, along with support for asynchronous data transfers, concurrent kernels, and C++. That makes the new NVIDIA GPUs more CPU-like than their predecessors and, at least in theory, friendlier to traditional parallel programming techniques. AMD has stuck with a more traditional graphics processor architecture, which means more silicon real estate is devoted to additional GPU cores rather than to general-purpose computing functionality.
However, AMD's more conservative strategy could make the FireStream products a prime choice in some application domains. For example, rendering server farms that harness raw GPU power to turn bits into image streams are very well-suited to traditional graphics processors. In fact, most scaled-out visual computing apps could take advantage of the high FLOPS performance on the FireStream, and not miss ECC at all. As the Web becomes more immersed in high-def 3D visualization, demand for GPU-accelerated servers could take off.
OTOY, a company that provides visualization solutions for the entertainment industry, is a big AMD fan and thinks the FireStream technology is a good fit for its emerging cloud rendering business. The FireStream press release quotes OTOY founder Jules Urbach, who says: "Rendering 3D video game graphics by way of a cloud model makes sense on so many levels, completely removing the issue of platform compatibility by potentially turning every device that can access the cloud into a gaming client. The rendering power of AMD GPU compute accelerators is phenomenal, and I believe it delivers the compute density and performance needed to make the OTOY business model a success."
The other big difference between the AMD and NVIDIA GPU accelerator offerings is the underlying software framework. After a fling with the Brook+ GPU computing language, AMD is now focused on pushing the OpenCL open standard (and DirectX) for all its GPU computing offerings. OpenCL seems like a particularly good fit for AMD: since it can target both CPU and GPU parallel codes, it should dovetail nicely with the company's heterogeneous Fusion processors.
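That CPU-plus-GPU portability shows up right at the top of the OpenCL host API, where devices of any type are enumerated through the same calls. A minimal sketch (error handling elided; the devices reported will vary by driver and hardware):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id devices[16];
        cl_uint ndev = 0;
        char name[256];

        /* One code path discovers CPUs and GPUs alike; the same kernels
           can then be compiled for whichever device type is present. */
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 16, devices, &ndev);

        for (cl_uint i = 0; i < ndev; i++) {
            cl_device_type type;
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s: %s\n",
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", name);
        }
        return 0;
    }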
NVIDIA also supports OpenCL, and publicly maintains an agnostic position with regard to software environments. But the company's GPU computing platforms have mostly relied on its home-grown CUDA programming environment, which got a two-and-a-half-year jump on the OpenCL standard. As a result, the CUDA tools and compiler have attracted an extensive ecosystem of partners: high-level language implementations, debuggers, libraries, integrated development environments, university courses, and consulting services have grown up around CUDA over the last four years. Whether OpenCL becomes a force in HPC application development remains an open question, but NVIDIA could live with either a CUDA-based or an OpenCL-based ecosystem, while AMD has bet almost everything on the open standard.
A number of OEMs have already announced plans to incorporate the new FireStreams into their workstations and servers. For the time being, these include Supermicro, AMAX and One Stop Systems, but more system vendors are expected to add the AMD GPU option in the near future. The lower-power, single-slot 9350 may be especially attractive for smaller systems with limited power envelopes and few free PCIe slots. The new FireStream boards are slated for general availability in Q3 2010, at which point systems will begin shipping.