September 23, 2010
If there's one takeaway from this week's NVIDIA GPU Technology Conference (GTC), it's that GPU computing has grown up. Having been to last year's event, it's amazing to see how many more academic researchers and companies are taking the technology seriously in 2010. The exhibition hall was twice the size of GTC in 2009, enough to accommodate the 100 or so vendors plying their GPGPU wares. As NVIDIA CEO Jen-Hsun Huang said in Thursday morning's fireside chat session, "This is the year when applications developed on GPU computing go into production."
There was so much activity centered on technical computing at this year's event that at times it seemed like a CPU-less version of November's Supercomputing Conference. That was also reflected in the exhibitor list, which included HPC stalwarts like IBM, HP, SGI, Dell, Appro, Supermicro, Microsoft, The Portland Group, Platform Computing, Mellanox, T-Platforms and at least a dozen others.
Application areas like seismic exploration, weather modeling, computer vision, and medical imaging are latching onto this technology quickly. Just slightly further behind are domains like biomolecular modeling, which appears to be ripe for the GPU. The Wednesday keynote by Dr. Klaus Schulten, a computational chemist at the University of Illinois at Urbana-Champaign, highlighted some early benefits in this area.
Schulten and his team at UI have started applying GPU acceleration to a range of molecular simulations. In his work, Schulten is employing GPGPU technology to develop the concept of a "computational microscope," which is designed for nanoscale examination of biomolecules and cells. This virtual microscope consists of basic chemistry and physics algorithms, NAMD software (which will soon offer a GPU port), and supercomputing hardware.
One application that Schulten talked about was modeling the flu drug Tamiflu to determine how the H1N1 ("swine flu") virus developed resistance to it. He's also using the technology to study such phenomena as virus infections, how proteins are synthesized, the mechanism of photosynthesis, epigenetics, and quantum chemistry. Some of the work is being accomplished on GPU workstations, but the larger models use NCSA's Lincoln supercomputer, a heterogeneous cluster constructed from Dell PowerEdge servers and Tesla S1070 servers. Speedups on applications varied, the best being the quantum chemistry application. In that case, a simulation run that took a day on a CPU took just a minute on the GPU platform.
There were a couple of sessions on the military applications of GPU computing, which looks to be a lucrative area for this technology. One presentation, hosted by EM Photonics, illustrated how GPGPU technology is being employed to accelerate compute-intensive applications in this domain. For example, an advanced image processing application was able to enhance long-distance photographs blurred by atmospheric distortion. GPU acceleration made it possible to perform this digital enhancement in real time, opening up new applications for warfare and security operations. Other apps include electromagnetics simulations and CFD -- the latter being used to simulate aircraft landings on carriers. Depending on the military scenario, the GPU platform could be a desktop machine, an embedded system, or a cluster.
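To give a sense of why image work of this kind maps so naturally onto the GPU, here is a minimal CUDA sketch of a per-pixel sharpening filter run with one thread per pixel. It is a generic illustration of the data-parallel pattern, not EM Photonics' actual enhancement algorithm; the image size, filter, and kernel name are assumptions for the example.

```
// Illustrative sketch only -- not EM Photonics' algorithm. One GPU thread
// handles one pixel, applying a simple 3x3 sharpening convolution.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void sharpen3x3(const float* in, float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;  // skip borders

    // Classic sharpening stencil: boost the center, subtract the neighbors.
    float center = in[y * width + x];
    float sum = in[(y - 1) * width + x] + in[(y + 1) * width + x] +
                in[y * width + (x - 1)] + in[y * width + (x + 1)];
    out[y * width + x] = 5.0f * center - sum;
}

int main()
{
    const int width = 1024, height = 1024;          // assumed frame size
    const size_t bytes = width * height * sizeof(float);

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);                     // stand-in for a real camera frame

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    sharpen3x3<<<grid, block>>>(d_in, d_out, width, height);
    cudaDeviceSynchronize();

    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Because every pixel can be computed independently, a megapixel frame exposes a million-way parallel workload -- exactly the shape of problem that lets a GPU deliver real-time rates where a CPU cannot.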
Other GPU computing applications that got some exposure at GTC this year are business intelligence, complex event processing, and speech recognition -- three areas that up until now would not have been associated with graphics processors. And of course there was a plethora of esoteric research applications, for example, "Using GPUs for Real-Time Brain-Computer Interfaces" -- something that would have come in handy at GTC this week, given the overload of sessions, posters, exhibits, and after-hours partying.
This also looks to be a breakout year for ISV support of GPGPU in HPC. At the event, ANSYS announced it would be incorporating GPU acceleration into its popular engineering modeling and analysis solution, ANSYS Mechanical. That product is slated for release later in the year. And although SIMULIA and Livermore Software Technology Corp. (LSTC) made no formal announcements this week, two GTC presentations on Thursday suggest they, too, will be bringing out GPGPU support for their flagship products (Abaqus FEA and LS-DYNA, respectively) within the next few months.
Even though GTC was more about developers and applications, there were a few sessions highlighting some of the larger GPU supercomputers deployed, or about to be deployed. In this latter category is TSUBAME 2.0, Tokyo Tech's next-generation 2.4 petaflop super, which will be stuffed to the gills with 4,244 Tesla M2050 GPUs. In his Tuesday presentation, Satoshi Matsuoka spotlighted some of the cutting-edge apps that will be running on the new machine. These include ASUCA, Japan's next-generation weather forecasting code, which has been completely ported to the GPU (a job that reportedly took a year). The result is a weather modeling application that runs faster than real time at a resolution of 0.5 km. According to Matsuoka, TSUBAME 2.0 is installed and undergoing stress tests, and will be formally announced in early October -- so expect more coverage to follow.
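For readers curious what porting a grid-based atmospheric code to the GPU looks like at the lowest level, here is a minimal CUDA sketch of a five-point stencil update with one thread per grid cell. It is a generic illustration under assumed grid sizes and a made-up kernel name, not the actual ASUCA implementation, which involves far more physics and a full dynamical core.

```
// Illustrative sketch only, not the ASUCA code: a 5-point stencil update of
// a 2D field, the basic pattern grid-based atmospheric models apply per cell.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void diffuse_step(const float* t_in, float* t_out,
                             int nx, int ny, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // skip boundary cells

    int idx = j * nx + i;
    float c = t_in[idx];
    // Explicit diffusion step using the four nearest neighbors.
    t_out[idx] = c + alpha * (t_in[idx - 1] + t_in[idx + 1] +
                              t_in[idx - nx] + t_in[idx + nx] - 4.0f * c);
}

int main()
{
    const int nx = 512, ny = 512;                   // assumed grid dimensions
    const size_t bytes = nx * ny * sizeof(float);
    float *d_a, *d_b;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMemset(d_a, 0, bytes);                      // stand-in for an initial field

    dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
    for (int step = 0; step < 100; ++step) {
        diffuse_step<<<grid, block>>>(d_a, d_b, nx, ny, 0.1f);
        float* tmp = d_a; d_a = d_b; d_b = tmp;     // ping-pong buffers between steps
    }
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
```

Each time step touches every cell with only local neighbor reads, which is why stencil-heavy weather codes are such good candidates for the GPU's memory bandwidth and thread count.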
If 2.4 petaflop supers don't impress you, you'll just have to wait a bit. A brief peek at NVIDIA's roadmap on Tuesday revealed that the next generation of NVIDIA GPUs, Kepler, is slated to arrive in 2011. As Jen-Hsun Huang noted, "GPU computing is just starting. It's nothing compared to what you're going to have in a couple of years."
Posted by Michael Feldman - September 23, 2010 @ 7:25 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.