Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

March 14, 2013

The Week in HPC Research

Tiffany Trader

The top research stories of the week have been hand-selected from leading scientific centers, prominent journals and relevant conference proceedings. Here’s another diverse set of items, including the just-announced 2012 Turing Award winners; an examination of MIC acceleration in short-range molecular dynamics simulations; a new computer model to help predict the best HIV treatment; the role of atmospheric clouds in climate change models; and more reliable cloud computing.

Security Researchers Win Turing Award

The Association for Computing Machinery (ACM) has named the winners of the 2012 A.M. Turing Award. The esteemed award goes to Shafi Goldwasser of the Massachusetts Institute of Technology (MIT) and the Weizmann Institute of Science and Silvio Micali of MIT for their ground-breaking work in cryptography and complexity theory.

Goldwasser and Micali carried out pioneering research in the field of provable security. Their work laid the mathematical foundations that made modern cryptography possible. The ACM observes that “by formalizing the concept that cryptographic security had to be computational rather than absolute, they created mathematical structures that turned cryptography from an art into a science.”

ACM President Vint Cerf provided additional details in a prepared statement. “The encryption schemes running in today’s browsers meet their notions of security,” he said of the duo. “The method of encrypting credit card numbers when shopping on the Internet also meets their test. We are indebted to these recipients for their innovative approaches to ensuring security in the digital age.”

So many of our daily activities are possible because of their research. According to Alfred Spector, vice president of Research and Special Initiatives at Google Inc., these achievements have changed how we work and live. Applications extend to ATM cards, computer passwords, electronic commerce and even electronic voting.

The Turing Award has been called the “Nobel Prize in Computing.” It carries a $250,000 prize, funded by Intel Corporation and Google Inc.


MIC Acceleration for Molecular Dynamics

A team of researchers from the National University of Defense Technology in Changsha, China, is investigating the use of MIC acceleration in short-range molecular dynamics simulations.

Their paper in the Proceedings of the First International Workshop on Code OptimiSation for MultI and many Cores (COSMIC’13) begins with the observation that heterogeneous systems built with accelerators (like GPUs) or coprocessors (like Intel MIC) are increasing in popularity. Such architectures are used for their ability to exploit large-scale parallelism.

In response to this evolving paradigm, the authors present a hierarchical parallelization scheme for molecular dynamics simulations on heterogeneous systems that combine CPU and MIC acceleration, specifically one 2.60GHz eight-core Intel Xeon E5-2670 CPU and one 57-core Intel Knights Corner coprocessor.

They propose to exploit multi-level parallelism by combining

(1) Task-level parallelism using a tightly-coupled division method

(2) Thread-level parallelism employing spatial-decomposition through dynamically scheduled multi-threading, and

(3) Data-level parallelism via SIMD technology.

The team reports strong performance on the hybrid CPU-MIC system. They write: “by employing a hierarchy of parallelism with several optimization methods such as memory latency hiding and data pre-fetching, our MD code running on a CPU-MIC heterogeneous system…achieves (1) multi-thread parallel efficiency of 72.4% for 57 threads on the co-processor with up to 7.62 times SIMD speedup on each core for the force computation task, and (2) up to 2.25 times speedup on the CPU-MIC system over the pure CPU system, which outperforms our previous work on a CPU-GPU (one NVIDIA Tesla M2050) platform.”
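The thread- and data-level layers of such a hierarchy can be illustrated with a simplified short-range force kernel. The sketch below is a generic Lennard-Jones loop, not the authors' code: the OpenMP `parallel for` and `simd` pragmas stand in for the dynamically scheduled multi-threading and SIMD vectorization the paper describes, and the cutoff and potential parameters are illustrative.

```c
#include <math.h>
#include <stddef.h>

/* Simplified short-range Lennard-Jones force kernel (epsilon = sigma = 1).
 * Thread-level parallelism: one OpenMP thread handles a particle row at a
 * time (a real spatial decomposition would assign cells instead).
 * Data-level parallelism: the inner loop is vectorized with SIMD, as in
 * the paper's force-computation task. */
void compute_forces(size_t n, const double *x, const double *y,
                    const double *z, double *fx, double *fy, double *fz,
                    double cutoff)
{
    double rc2 = cutoff * cutoff;
    #pragma omp parallel for schedule(dynamic)
    for (size_t i = 0; i < n; i++) {
        double fxi = 0.0, fyi = 0.0, fzi = 0.0;
        #pragma omp simd reduction(+:fxi,fyi,fzi)
        for (size_t j = 0; j < n; j++) {
            if (j == i) continue;
            double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
            double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > rc2) continue;
            double inv2 = 1.0 / r2;
            double inv6 = inv2 * inv2 * inv2;
            /* d(LJ potential)/dr divided by r, so f*dx is a force component */
            double f = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
            fxi += f * dx; fyi += f * dy; fzi += f * dz;
        }
        fx[i] = fxi; fy[i] = fyi; fz[i] = fzi;
    }
}
```

The all-pairs inner loop keeps the sketch short; production MD codes instead iterate over neighbor lists or cells, which is where the paper's task-level division between CPU and coprocessor comes in.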


Computer Models Help Predict Response to HIV Drugs

New research published in the latest issue of the Journal of Antimicrobial Chemotherapy could improve the treatment of HIV patients in resource-limited settings.

According to the study, the models can predict how HIV patients whose drug therapy is failing will respond to combination antiretroviral therapy (ART). Most notably for resource-constrained regions, the models do not require the expensive genotyping tests that are normally used to predict drug resistance. In effect, the researchers were able to create a model that predicted response to ART without a genotype with comparable accuracy to a genotyping-based assessment.

Julio Montaner, former President of the International AIDS Society, Director of the BC Centre for Excellence in HIV & AIDS, based in Vancouver, Canada, and an author on the paper, commented: “This is the first time this approach has been tried with real cases of treatment failure from resource-limited settings.”

He added: “the results show that using sophisticated computer based algorithms we can effectively put the experience of treating thousands of patients into the hands of the under-resourced physician with potentially huge benefits.”

The models are available for free on the RDI website.
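To make the idea concrete, a genotype-free response predictor is a classifier trained on baseline clinical markers rather than resistance mutations. The sketch below is purely illustrative and is not the RDI's model: the features (viral load, CD4 count, number of prior regimens), the synthetic data, and the plain logistic-regression fit are all stand-ins chosen to show the shape of the approach.

```python
import math
import random

def make_patient(rng):
    """Generate one synthetic patient record (illustrative only)."""
    log_viral_load = rng.uniform(2.0, 6.0)   # log10 copies/mL
    cd4 = rng.uniform(50.0, 800.0)           # cells/mm^3
    prior_regimens = rng.randint(0, 5)       # failed prior regimens
    # Synthetic ground truth: response is more likely with lower viral
    # load, higher CD4, and fewer failed regimens (assumed rule).
    score = -log_viral_load + cd4 / 400.0 - 0.5 * prior_regimens + 4.0
    responded = 1 if score + rng.gauss(0.0, 0.5) > 0 else 0
    return [log_viral_load, cd4 / 100.0, float(prior_regimens)], responded

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))             # guard against overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, epochs=200, lr=0.05):
    """Plain logistic regression by stochastic gradient descent."""
    w = [0.0] * (len(data[0][0]) + 1)        # weights plus bias term
    for _ in range(epochs):
        for features, label in data:
            xs = features + [1.0]
            p = _sigmoid(sum(wi * xi for wi, xi in zip(w, xs)))
            for k in range(len(w)):
                w[k] += lr * (label - p) * xs[k]
    return w

def predict(w, features):
    """Probability of treatment response for one patient."""
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, features + [1.0])))

rng = random.Random(0)
data = [make_patient(rng) for _ in range(400)]
w = train_logistic(data[:300])
correct = sum((predict(w, f) > 0.5) == bool(y) for f, y in data[300:])
accuracy = correct / 100
```

The point of the study is that a model of this general kind, trained on enough real treatment histories, can approach the accuracy of genotype-based predictions without the cost of the genotyping test itself.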


The Science of Clouds – Real Clouds

Climate models continue to improve, and scientists are producing realistic representations of the oceans, ice, land surfaces and atmospheric conditions. However, a model will always have some degree of uncertainty, and when it comes to climate models, clouds pose the greatest challenge to accuracy.

As an article at Berkeley Lab News Center explains, “clouds can both cool the planet, by acting as a shield against the sun, and warm the planet, by trapping heat.”

Lawrence Berkeley National Laboratory scientist David Romps is investigating the behavior of clouds. He hopes to explain why they behave as they do and how their cover affects the planet's temperature.

“We don’t understand many basic things about clouds,” he says. “We don’t know why clouds rise at the speeds they do. We don’t know why they are the sizes they are. We lack a fundamental theory for what is a very peculiar case of fluid flow. There’s a lot of theory that remains to be done.”

The earth’s response to atmospheric levels of CO2 is studied using global climate models (GCMs) on lab supercomputers. At current computational limits, GCMs cannot resolve atmospheric features smaller than roughly 100 kilometers. However, convective clouds have sizes closer to 1 km, placing them below the resolution of GCMs. In response to this dilemma, climate scientists use submodels to represent cloud behavior. The approach gets the job done, but comes with its own set of limitations, which Romps is chipping away at.
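The scale gap, and the kind of submodel used to bridge it, can be sketched in a few lines. The cloud-fraction formula below is a textbook-style Sundqvist-type diagnostic, shown only to illustrate what a subgrid cloud scheme looks like; it is not the scheme of any particular GCM, and the critical relative humidity is an assumed value.

```python
# Scale gap: one GCM grid cell vs. the convective-cloud scale.
GRID_KM = 100.0    # typical GCM horizontal resolution (per the article)
CLOUD_KM = 1.0     # typical convective cloud scale (per the article)
columns_per_cell = (GRID_KM / CLOUD_KM) ** 2  # 1-km columns inside one cell

def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Diagnose subgrid cloud cover from grid-mean relative humidity.

    Sundqvist-type form: cover rises from 0 at rh_crit to 1 at
    saturation. Illustrative only; rh_crit = 0.8 is an assumed value.
    """
    if rel_humidity <= rh_crit:
        return 0.0
    if rel_humidity >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rel_humidity) / (1.0 - rh_crit)) ** 0.5
```

With 10,000 cloud-scale columns hidden inside every grid cell, the GCM never sees individual clouds; it sees only a diagnosed fraction like this one, which is exactly where the limitations Romps studies enter.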

He’s already had some early successes. His theory that climate change, or rising temperatures, will result in fewer clouds was confirmed with a high-resolution model.


Making HPC Cloud Computing More Reliable

A team of computer scientists from Louisiana Tech University has contributed to the growing body of HPC cloud research, specifically as it relates to the reliability of cloud computing resources. Their paper, “A Reliability Model for Cloud Computing for High Performance Computing Applications,” was published in Euro-Par 2012: Parallel Processing Workshops.

Cloud computing and virtualization allow resources to be used more efficiently. Public cloud resources are available on demand and avoid a large up-front capital expenditure. But with an increase in both software and hardware components comes a corresponding rise in server failures. The researchers assert that it is important for service providers to understand the failure behavior of a cloud system so they can better manage its resources. Much of their research applies specifically to running HPC applications in the cloud.

In the paper, the researchers “propose a reliability model for a cloud computing system that considers software, application, virtual machine, hypervisor, and hardware failures as well as correlation of failures within the software and hardware.”

They conclude that failures caused by dependencies make the system less reliable, and that as the failure rate of the system increases, the mean time to failure decreases. Not surprisingly, they also find that increasing the number of nodes decreases the reliability of the system.
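The node-count conclusion follows from standard series-system reliability, which the sketch below illustrates. This is a generic model assuming independent, constant-rate failures; it is not the authors' full model, which additionally accounts for software, VM, hypervisor and hardware layers and for correlated failures. The failure rate used is an assumed value.

```python
import math

def system_reliability(t, node_rate, n_nodes):
    """Series system of n nodes, each failing independently at constant
    rate node_rate: the system survives to time t with probability
    R(t) = exp(-n * lambda * t)."""
    return math.exp(-n_nodes * node_rate * t)

def mttf(node_rate, n_nodes):
    """Mean time to failure of the series system: 1 / (n * lambda)."""
    return 1.0 / (n_nodes * node_rate)

# More nodes -> lower reliability and shorter MTTF, as the paper finds.
lam = 1e-4  # assumed failures per hour per node
r_16 = system_reliability(24.0, lam, 16)    # 16-node job, 24 hours
r_256 = system_reliability(24.0, lam, 256)  # 256-node job, 24 hours
```

Correlated failures, which the paper models explicitly, make matters worse than this independent-failure baseline, which is why dependency-induced failures dominate their reliability results.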
