February 17, 2011

The Weekly Top Five

Tiffany Trader

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Watson’s university friends, RWTH Aachen University’s new Bull supercomputer, the University of Florida’s reconfigurable supercomputer, the Puppet installation at NICS, and Web-style visualizations.

Eight Universities Contribute to Watson’s Smarts

“It takes a village,” as the saying goes, and developing the advanced level of natural language processing demonstrated by IBM’s Watson supercomputer really did require the participation of the greater research community. So it’s only natural that eight major universities worked alongside IBM researchers to cultivate the Question Answering (QA) technology behind the “Watson” computing system. The group’s efforts were rewarded this week when Watson proved its mettle against human champions, winning the Jeopardy! exhibition match handily.

The list of collaborators includes Massachusetts Institute of Technology (MIT), University of Texas at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.

Dr. David Ferrucci, leader of the IBM Watson project team, commented on the partnership:

“We are glad to be collaborating with such distinguished universities and experts in their respective fields who can contribute to the advancement of QA technologies that are the backbone of the IBM Watson system. The success of the Jeopardy! challenge will break barriers associated with computing technology’s ability to process and understand human language, and will have profound effects on science, technology and business.”

The official announcement provides a summary of each group’s accomplishments.

RWTH Aachen University Hearts Bull

On Valentine’s Day, RWTH Aachen University (Rheinisch-Westfälische Technische Hochschule Aachen) showed its love for Bull when it placed an order for one of the company’s bullx supercomputers. The university will use the additional computing power to facilitate scientific advances in a variety of fields, including engineering, the physical sciences, chemistry, biology, mathematics and computer science.

The 300-teraflop system features over 28,000 Intel cores and three petabytes of disk storage. It was designed as a two-part system to facilitate parallelization. According to the release, the massively parallel (MPI) section includes 1,350 nodes with a total of 16,200 cores, while the SMP (symmetric multiprocessing) section includes 11,456 cores grouped into 181 supernodes. Each supernode is equipped with 64 cores sharing high-capacity memory. These supernodes are in turn grouped into a large-scale cluster that can also be programmed with MPI.
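Architectures like this are typically programmed in a hybrid style, with MPI distributing work across nodes and a shared-memory model such as OpenMP spreading it across the cores within each supernode. As a rough sketch of the idea (illustrative only, not code from the RWTH system), a minimal hybrid C program might look like this:

    /* Minimal hybrid MPI + OpenMP sketch: MPI ranks span nodes,
       OpenMP threads span the cores within each (super)node.
       Illustrative only -- not code from the RWTH Aachen system. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request threaded MPI, since OpenMP threads coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each MPI rank fans out across its local cores with OpenMP. */
        #pragma omp parallel
        printf("rank %d of %d, thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper (e.g., mpicc -fopenmp), one rank per supernode and one thread per core would map naturally onto the machine’s two-tier layout.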

This level of computing power is necessary if scientists are to run realistic simulations. Professor Christian Bischof, director of the Center for Computing and Communication and holder of the chair in Scientific Computing at RWTH Aachen University, expounds on the many benefits to science and technology, which include “understanding natural phenomena more accurately, discovering new raw materials or developing new technical processes.”

The project partners have also made a commitment to “Green IT” and will be working to optimize the efficiency of supercomputer processing. The software-based approach will enable each operation to use less energy without adversely affecting performance. Considering that a system of this class consumes almost a megawatt of power, about 200 households’ worth, there is both an environmental and an economic incentive: greater energy efficiency also reduces operating costs.

If all goes according to schedule, the system will be delivered next month and will be up and running in May.

University of Florida Leads Pack in Reconfigurable Computing

The University of Florida is proclaiming itself a leader in reconfigurable supercomputing. At the center of the claim is the university’s Novo-G supercomputer, the world’s fastest of its kind according to university officials. Although it relies on a different chip design, Novo-G can process certain applications faster than China’s Tianhe-1A, the system ranked world’s fastest on the most recent TOP500 list.

The TOP500 list does not include systems like Novo-G, which rely on the power of field-programmable gate arrays (FPGAs) instead of so-called fixed-logic hardware structures like the more common CPU.

Reconfigurable machines, which rely on adaptive hardware customization, are a relatively recent innovation. FPGAs adapt to match the unique needs of each application, leading to increased speed and reduced energy requirements.

Alan George, professor of electrical and computer engineering, and director of the National Science Foundation’s Center for High-Performance Reconfigurable Computing, known as CHREC, explains that “it is very difficult to accurately rank supercomputers because it depends upon what you want them to do.”

Powered by 192 reconfigurable processors, Novo-G tackles a host of applications well-suited to the machine’s unique design. Scientists use the system to bolster research in fields such as health and life sciences, signal and image processing, and financial science.

A planned upgrade, scheduled for later this year, will double the reconfigurable capacity of Novo-G. University officials note that the upgrade requires “a modest increase in size, power, and cooling, unlike upgrades with conventional supercomputers.”

Puppet Pulls Strings on NICS Infrastructure

The National Institute for Computational Science (NICS) relies on Puppet to manage its many systems, including Kraken, the first academic petaflop supercomputer and the eighth-ranked system in the world. With Puppet, NICS can ensure the performance and security of its high-end computing resources.

Kraken, NICS’ flagship Cray XT5 system, contains 112,896 compute cores, 129 terabytes of memory, and 3.3 petabytes of raw disk space. The 1.7-petaflop supercomputer serves 2,000 active researchers and contributes more than 700 million CPU hours per year to the TeraGrid.

Puppet gives NICS administrators centralized control of their resources, letting them apply system changes consistently to uphold security measures. Puppet has also significantly reduced server deployment times. Before, administrators had to configure each server individually, a time-consuming process; with Puppet, what used to be a four-to-six-hour job now takes just an hour. The time saved can be devoted to more important tasks, like maintaining an efficient infrastructure and staying abreast of updates and advances in technology.

Stephen McNally, HPC administrator with NICS, expressed satisfaction with the management system. “Twelve months ago we had no standard for managing our infrastructure; Puppet is now the standard. Our machines don’t go up until they’re in Puppet, tested, and working,” he said.

Web-Style Visualizations Promise More Meaningful Data

Rensselaer Polytechnic Institute Web experts Peter Fox and James Hendler are asking scientists to take a page from the Web when presenting their data. The two professors have written a perspective piece titled “Changing the Equation on Scientific Data Visualization” in which they recommend a new strategy for scientific visualizations, one that relies on the World Wide Web for inspiration.

No one disputes that visualizations help unlock the mysteries of complex data, but Fox and Hendler believe they could be used more effectively.

The problem with the current use of visualization in the scientific community, according to the duo, is that when scientists do include visualizations, they are often an end product of research, used simply to illustrate the results rather than incorporated consistently throughout the scientific process. These visualizations are also static and cannot be easily updated or modified when new information arises.

The Web provides a wealth of easy-to-use visualizations that scientists could use to add meaning to their data throughout the research process. These Web-based tools also tend to be inexpensive, simple to use and easy to modify. As new information comes in, the visualizations can be updated, which is often difficult with more complex design tools.

According to the university announcement, “[s]imple Web-based visualization tool kits allow users to easily create maps, charts, graphs, word clouds, and other custom visualizations at little to no cost and with a few clicks of a mouse. In addition, Web links and RSS feeds allow visualizations on the Web to be updated with little to no involvement from the original developer of the visualization, greatly reducing the time and cost of the effort, but also keeping it dynamic.”
