The Weekly Top Five

By Tiffany Trader

February 17, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Watson’s university friends, RWTH Aachen University’s new Bull supercomputer, the University of Florida’s reconfigurable supercomputer, NICS’ Puppet installation, and Web-style visualizations.

Eight Universities Contribute to Watson’s Smarts

“It takes a village,” as the saying goes, and developing the advanced level of natural language processing demonstrated by IBM’s Watson supercomputer really did require the participation of the greater research community. So it’s only natural that eight major universities worked alongside IBM researchers to cultivate the Question Answering (QA) technology behind the “Watson” computing system. The group’s efforts were rewarded this week when Watson proved its mettle against human champions, winning the Jeopardy! exhibition match handily.

The list of collaborators includes Massachusetts Institute of Technology (MIT), University of Texas at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.

Dr. David Ferrucci, leader of the IBM Watson project team, commented on the partnership:

“We are glad to be collaborating with such distinguished universities and experts in their respective fields who can contribute to the advancement of QA technologies that are the backbone of the IBM Watson system. The success of the Jeopardy! challenge will break barriers associated with computing technology’s ability to process and understand human language, and will have profound effects on science, technology and business.”

The official announcement provides a summary of each group’s accomplishments.

RWTH Aachen University Hearts Bull

On Valentine’s Day, North Rhine-Westphalia Technical University (RWTH) showed its love for Bull when it placed an order for one of the company’s bullx supercomputers. RWTH University in Aachen will use the additional computing power to facilitate scientific advances in a variety of fields, including engineering, physical sciences, chemistry, biology, mathematics and computer science.

The 300-teraflop system features over 28,000 Intel cores and three petabytes of disk storage. It was designed as a two-part system to facilitate parallelization. According to the release, the massively parallel (MPI) section includes 1,350 nodes with a total of 16,200 cores, while the SMP (symmetrical multiprocessing) section includes 11,456 cores, grouped into 181 supernodes. Each supernode is equipped with 64 cores with high-capacity shared memory. These nodes are in turn grouped into a large-scale cluster that can be programmed in conjunction with the MPI section.

This level of computing power is necessary if scientists are to enact realistic simulations. Professor Christian Bischof, director of the Center for Computing and Communication and holder of the chair in Scientific Computing at RWTH Aachen University, expounds on the many benefits to science and technology, which include “understanding natural phenomena more accurately, discovering new raw materials or developing new technical processes.”

The project partners have also made a commitment to “Green IT” and will be working to optimize the efficiency of supercomputer processing. The software-based approach will enable each operation to use less energy without adversely affecting performance. Considering a typical system consumes almost a megawatt of power, or about 200 households’ worth, there’s an environmental incentive as well as an economic one: increasing energy efficiency has the added bonus of reducing operating costs.
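To put those figures in perspective, here is a back-of-the-envelope sketch of what the article’s numbers imply. The assumption that “almost a megawatt” means a steady 1 MW draw, and the 200-household comparison, are both taken at face value from the piece:

```python
# Back-of-the-envelope check of the power figures cited above.
# Assumes a steady 1 MW draw and the article's 200-household comparison.
system_draw_w = 1_000_000            # ~1 MW continuous draw
households = 200                     # the article's comparison point

per_household_w = system_draw_w / households   # implied draw per household
annual_mwh = (system_draw_w / 1e6) * 24 * 365  # energy consumed per year

print(f"Implied draw per household: {per_household_w / 1000:.0f} kW")
print(f"Annual energy at 1 MW: {annual_mwh:,.0f} MWh")
```

At roughly 8,760 MWh per year, even a modest software-driven efficiency gain translates into a substantial cut in the electricity bill, which is the economic incentive the partners are chasing.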

If all goes according to schedule, the system will be delivered next month and will be up and running in May.

University of Florida Leads Pack in Reconfigurable Computing

The University of Florida is proclaiming itself a leader in reconfigurable supercomputing. At the center of the claim is the university’s Novo-G supercomputer, the world’s fastest according to university officials. Although it relies on a different chip design, Novo-G can process certain applications faster than the Chinese Tianhe-1A system touted as the world’s fastest, according to the most recent TOP500 list.

The TOP500 list does not include systems like Novo-G, which rely on the power of Field-programmable Gate Arrays (FPGAs) instead of so-called fixed-logic hardware structures like the more common CPU.

Reconfigurable machines, which rely on adaptive hardware customizations, are a fairly new innovation. FPGAs adapt to match the unique needs of each application, leading to increased speed and reduced energy requirements.

Alan George, professor of electrical and computer engineering, and director of the National Science Foundation’s Center for High-Performance Reconfigurable Computing, known as CHREC, explains that “it is very difficult to accurately rank supercomputers because it depends upon what you want them to do.”

Powered by 192 reconfigurable processors, Novo-G tackles a host of applications well-suited to the machine’s unique design. Scientists use the system to bolster research in fields such as health and life sciences, signal and image processing, and financial science.

A planned upgrade, scheduled for later this year, will double the reconfigurable capacity of Novo-G. University officials note that the upgrade requires “a modest increase in size, power, and cooling, unlike upgrades with conventional supercomputers.”

Puppet Pulls Strings on NICS Infrastructure

The National Institute for Computational Science (NICS) relies on Puppet to manage its many systems, including Kraken, the first academic petaflop supercomputer and the eighth top-rated system in the world. With Puppet, NICS can ensure the performance and security of its high-end computing resources.

Kraken, NICS’ flagship Cray XT5 system, contains 112,896 compute cores, 129 terabytes of memory, and 3.3 petabytes of raw disk space. The 1.7 petaflop supercomputer is accessed by 2,000 active researchers and contributes more than 700 million CPU hours per year to the TeraGrid.
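For a rough sense of scale, the article’s own figures can be combined into a couple of derived numbers. This is an illustrative calculation only, using the 1.7 petaflops, 112,896 cores, and 700 million CPU hours cited above:

```python
# Derived figures from the Kraken statistics quoted in the article.
peak_flops = 1.7e15                  # 1.7 petaflops peak
cores = 112_896                      # compute cores
cpu_hours_per_year = 700e6           # CPU hours contributed to the TeraGrid

flops_per_core = peak_flops / cores              # per-core peak performance
core_hours_available = cores * 24 * 365          # total core-hours in a year
utilization = cpu_hours_per_year / core_hours_available

print(f"Peak per core: {flops_per_core / 1e9:.1f} gigaflops")
print(f"Delivered share of core-hours: {utilization:.0%}")
```

The numbers work out to roughly 15 gigaflops of peak per core, with the 700 million delivered CPU hours amounting to about 70 percent of the machine’s theoretical yearly core-hours, which is a healthy utilization rate for a shared academic resource.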

Puppet gives NICS administrators centralized control of their resources, which lets them apply system changes consistently to uphold security measures. Puppet has also significantly reduced server deployment times. Before, administrators had to maintain each server individually, a time-consuming process. With Puppet, what used to be a four to six hour job now takes just an hour. The saved time can be devoted to more important tasks, like maintaining an efficient infrastructure and staying abreast of updates and advances in technology.

Stephen McNally, HPC administrator with NICS, expressed satisfaction with the management system. “Twelve months ago we had no standard for managing our infrastructure; Puppet is now the standard. Our machines don’t go up until they’re in Puppet, tested, and working,” he said.

Web-Style Visualizations Promise More Meaningful Data

Rensselaer Polytechnic Institute Web experts Peter Fox and James Hendler are asking scientists to take a page from the Web when presenting their data. The two professors have written a perspective piece titled “Changing the Equation on Scientific Data Visualization” in which they recommend a new strategy for scientific visualizations, one that relies on the World Wide Web for inspiration.

That visualizations help unlock the mysteries of complex data is not being disputed, but Fox and Hendler believe they could be used more effectively.

The problem with the current use of visualization in the scientific community, according to the duo, is that when visualizations are actually included by scientists, they are often an end product of research used simply to illustrate the results, and are inconsistently incorporated into the entire scientific process. Their visualizations are also static and cannot be easily updated or modified when new information arises.

The Web provides a wealth of easy-to-use visualizations that scientists could use to add meaning to their data throughout the research process. These Web-based tools also tend to be inexpensive, simple to use and easy to modify. For example, as new information comes in, the scenarios can be updated, which is often difficult when using more complex design tools.

According to the university announcement, “[s]imple Web-based visualization tool kits allow users to easily create maps, charts, graphs, word clouds, and other custom visualizations at little to no cost and with a few clicks of a mouse. In addition, Web links and RSS feeds allow visualizations on the Web to be updated with little to no involvement from the original developer of the visualization, greatly reducing the time and cost of the effort, but also keeping it dynamic.”
