The Weekly Top Five

By Tiffany Trader

February 17, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Watson’s university friends, RWTH Aachen University’s new Bull supercomputer, the University of Florida’s reconfigurable supercomputer, the Puppet deployment at NICS, and Web-style visualizations.

Eight Universities Contribute to Watson’s Smarts

“It takes a village,” as the saying goes, and developing the advanced natural language processing demonstrated by IBM’s Watson supercomputer really did require the participation of the greater research community. So it’s only natural that eight major universities worked alongside IBM researchers to cultivate the Question Answering (QA) technology behind the “Watson” computing system. The group’s efforts were rewarded this week when Watson proved its mettle against human champions, winning the Jeopardy! exhibition match handily.

The list of collaborators includes Massachusetts Institute of Technology (MIT), University of Texas at Austin, University of Southern California (USC), Rensselaer Polytechnic Institute (RPI), University at Albany (UAlbany), University of Trento (Italy), University of Massachusetts Amherst, and Carnegie Mellon University.

Dr. David Ferrucci, leader of the IBM Watson project team, commented on the partnership:

“We are glad to be collaborating with such distinguished universities and experts in their respective fields who can contribute to the advancement of QA technologies that are the backbone of the IBM Watson system. The success of the Jeopardy! challenge will break barriers associated with computing technology’s ability to process and understand human language, and will have profound effects on science, technology and business.”

The official announcement provides a summary of each group’s accomplishments.

RWTH Aachen University Hearts Bull

On Valentine’s Day, RWTH Aachen University (Rheinisch-Westfälische Technische Hochschule Aachen) showed its love for Bull when it placed an order for one of the company’s bullx supercomputers. The university will use the additional computing power to facilitate scientific advances in a variety of fields, including engineering, the physical sciences, chemistry, biology, mathematics and computer science.

The 300-teraflop system features over 28,000 Intel processor cores and three petabytes of disk storage. It was designed as a two-part system to facilitate parallelization. According to the release, the massively parallel (MPI) section comprises 1,350 nodes with a total of 16,200 cores, while the SMP (symmetric multiprocessing) section comprises 11,456 cores grouped into 181 supernodes. Each supernode is equipped with 64 cores sharing high-capacity memory, and the supernodes are in turn grouped into a large-scale cluster that can be programmed together with the MPI section.
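
A two-part design like this lends itself to a hybrid programming style: message passing across nodes, with shared-memory parallelism inside each supernode. The sketch below is our own illustration rather than anything from the Bull release; it shows the idea with mpi4py and NumPy, where the per-rank kernel stands in for work that, on an SMP supernode, would itself be spread across the 64 shared-memory cores (threaded BLAS, OpenMP and the like). The problem size and rank count are made up.

```python
# Minimal hybrid-style sketch (illustrative only, not from the Bull release).
# Assumes mpi4py and NumPy are installed; run with, e.g.:
#   mpirun -np 4 python hybrid_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 16_200_000                        # total problem size (made-up number)
chunk = N // size                     # each MPI rank owns one slice of the domain
start = rank * chunk
stop = N if rank == size - 1 else start + chunk

# Per-rank work: on an SMP supernode this inner kernel would itself run
# across the 64 shared-memory cores (threaded BLAS, OpenMP, etc.).
local = np.arange(start, stop, dtype=np.float64)
local_sum = float(np.sum(local * local))

# Message passing stitches the per-node partial results back together.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares over {N} elements from {size} ranks: {total:.3e}")
```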

This level of computing power is necessary if scientists are to run realistic simulations. Professor Christian Bischof, director of the Center for Computing and Communication and holder of the chair in Scientific Computing at RWTH Aachen University, expounds on the many benefits to science and technology, which include “understanding natural phenomena more accurately, discovering new raw materials or developing new technical processes.”

The project partners have also made a commitment to “Green IT” and will be working to optimize the efficiency of supercomputer processing. The software-based approach will enable each operation to use less energy without adversely affecting performance. Considering that a system of this class consumes almost a megawatt of power, roughly 200 households’ worth, the incentive is both environmental and economic: greater energy efficiency carries the added bonus of lower operating costs.

If all goes according to schedule, the system will be delivered next month and will be up and running in May.

University of Florida Leads Pack in Reconfigurable Computing

The University of Florida is proclaiming itself a leader in reconfigurable supercomputing. At the center of the claim is the university’s Novo-G supercomputer, which university officials call the world’s fastest reconfigurable machine. Although it relies on a different chip design, Novo-G can process certain applications faster than China’s Tianhe-1A, the system ranked fastest in the world on the most recent TOP500 list.

The TOP500 list does not include systems like Novo-G, which rely on the power of field-programmable gate arrays (FPGAs) rather than fixed-logic hardware structures such as the more common CPU.

Reconfigurable machines, which rely on adaptive hardware customizations, are a fairly new innovation. FPGAs adapt to match the unique needs of each application, leading to increased speed and reduced energy requirements.

Alan George, professor of electrical and computer engineering, and director of the National Science Foundation’s Center for High-Performance Reconfigurable Computing, known as CHREC, explains that “it is very difficult to accurately rank supercomputers because it depends upon what you want them to do.”

Powered by 192 reconfigurable processors, Novo-G tackles a host of applications well-suited to the machine’s unique design. Scientists use the system to bolster research in fields such as health and life sciences, signal and image processing, and financial science.

A planned upgrade, scheduled for later this year, will double the reconfigurable capacity of Novo-G. University officials note that the upgrade requires “a modest increase in size, power, and cooling, unlike upgrades with conventional supercomputers.”

Puppet Pulls Strings on NICS Infrastructure

The National Institute for Computational Science (NICS) relies on Puppet to manage its many systems, including Kraken, the first academic petaflop supercomputer and the eighth top-rated system in the world. With Puppet, NICS can ensure the performance and security of its high-end computing resources.

Kraken, NICS’ flagship Cray XT5 system, contains 112,896 compute cores, 129 terabytes of memory, and 3.3 petabytes of raw disk space. The 1.7 petaflop supercomputer is used by 2,000 active researchers and contributes more than 700 million CPU hours per year to the TeraGrid.

Puppet gives NICS administrators centralized control of their resources, which lets them apply system changes consistently and uphold security measures. Puppet has also significantly reduced server deployment times. Previously, administrators had to configure each server individually, a time-consuming process; with Puppet, what used to be a four-to-six-hour job now takes about an hour. The time saved can be devoted to more important tasks, such as maintaining an efficient infrastructure and staying abreast of updates and advances in technology.
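
For readers unfamiliar with the approach, the core idea is declarative: the desired state of a machine is written down once, and every host is repeatedly converged toward it. The toy Python sketch below illustrates that idea only; it is not Puppet’s manifest language, and the hosts, package names and versions are invented.

```python
# Toy sketch of the declarative idea behind configuration management tools
# like Puppet (illustrative Python, not Puppet's actual manifest language):
# the desired state is written down once, and every host is converged toward
# it, so changes are applied consistently across machines.
desired_packages = {"openssh-server": "5.3", "ntp": "4.2"}  # invented versions

def converge(host, installed):
    """Report what would change to bring one host to the desired state."""
    actions = []
    for pkg, version in desired_packages.items():
        if installed.get(pkg) != version:
            actions.append(f"{host}: install {pkg}-{version}")
    return actions

# Pretend inventory of two compute nodes whose configurations have drifted.
inventory = {
    "node001": {"openssh-server": "5.3"},
    "node002": {"openssh-server": "5.1", "ntp": "4.2"},
}
for host, installed in inventory.items():
    for action in converge(host, installed):
        print(action)
```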

Stephen McNally, HPC administrator with NICS, expressed satisfaction with the management system. “Twelve months ago we had no standard for managing our infrastructure; Puppet is now the standard. Our machines don’t go up until they’re in Puppet, tested, and working,” he said.

Web-Style Visualizations Promise More Meaningful Data

Rensselaer Polytechnic Institute Web experts Peter Fox and James Hendler are asking scientists to take a page from the Web when presenting their data. The two professors have written a perspective piece titled “Changing the Equation on Scientific Data Visualization” in which they recommend a new strategy for scientific visualizations, one that relies on the World Wide Web for inspiration.

That visualizations help unlock the mysteries of complex data is not being disputed, but Fox and Hendler believe they could be used more effectively.

The problem with the current use of visualization in the scientific community, according to the duo, is that when scientists include visualizations at all, they are often an end product of research, used simply to illustrate results rather than incorporated consistently throughout the scientific process. These visualizations are also static and cannot be easily updated or modified when new information arises.

The Web provides a wealth of easy-to-use visualizations that scientists could use to add meaning to their data throughout the research process. These Web-based tools also tend to be inexpensive, simple to use and easy to modify: as new information comes in, the scenarios can be updated, which is often difficult with more complex design tools.
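
As a rough illustration of that “cheap to build, cheap to regenerate” quality (our own toy example, not one of the tool kits Fox and Hendler discuss), the short Python script below writes a standalone HTML bar chart from a small, invented results table; when new numbers arrive, re-running the script refreshes the page.

```python
# Toy example (not one of the tool kits discussed in the paper): render a
# small, made-up results table as a standalone HTML bar chart. Re-running
# the script with updated numbers regenerates the visualization.
data = {"2008": 12.4, "2009": 18.9, "2010": 27.1, "2011": 33.6}  # invented values

max_val = max(data.values())
rows = []
for label, value in data.items():
    width = int(300 * value / max_val)            # scale each bar to at most 300 px
    rows.append(
        f"<div>{label}: "
        f"<span style='display:inline-block;background:#4a90d9;"
        f"height:12px;width:{width}px'></span> {value}</div>"
    )

html = "<html><body><h3>Sample results</h3>" + "\n".join(rows) + "</body></html>"
with open("chart.html", "w", encoding="utf-8") as f:
    f.write(html)
print("Wrote chart.html -- open it in a browser.")
```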

According to the university announcement, “[s]imple Web-based visualization tool kits allow users to easily create maps, charts, graphs, word clouds, and other custom visualizations at little to no cost and with a few clicks of a mouse. In addition, Web links and RSS feeds allow visualizations on the Web to be updated with little to no involvement from the original developer of the visualization, greatly reducing the time and cost of the effort, but also keeping it dynamic.”
