The Week in HPC Research – 03/14/2013

By Tiffany Trader

March 14, 2013

The top research stories of the week have been hand-selected from leading scientific centers, prominent journals and relevant conference proceedings. Here's another diverse set of items, including the just-announced 2012 Turing Award winners; an examination of MIC acceleration in short-range molecular dynamics simulations; a new computer model to help predict the best HIV treatment; the role of atmospheric clouds in climate change models; and more reliable cloud computing.

Security Researchers Win Turing Award

The Association for Computing Machinery (ACM) has named the 2012 Turing Award winners. The esteemed award goes to Shafi Goldwasser of the Massachusetts Institute of Technology (MIT) and the Weizmann Institute of Science and Silvio Micali of MIT for their groundbreaking work in cryptography and complexity theory.

Goldwasser and Micali carried out pioneering research in the field of provable security. Their work laid the mathematical foundations that made modern cryptography possible. The ACM observes that "by formalizing the concept that cryptographic security had to be computational rather than absolute, they created mathematical structures that turned cryptography from an art into a science."

ACM President Vint Cerf provided additional details in a prepared statement. “The encryption schemes running in today’s browsers meet their notions of security,” he said of the duo. “The method of encrypting credit card numbers when shopping on the Internet also meets their test. We are indebted to these recipients for their innovative approaches to ensuring security in the digital age.”

So many of our daily activities are possible because of their research. According to Alfred Spector, vice president of Research and Special Initiatives at Google Inc., these achievements have changed how we work and live. Applications extend to ATM cards, computer passwords, electronic commerce and even electronic voting.

The Turing Award has been called the "Nobel Prize in Computing." It carries a $250,000 prize, funded by Intel Corporation and Google Inc.

Next >> MIC Acceleration

MIC Acceleration for Molecular Dynamics

A team of researchers from the National University of Defense Technology in Changsha, China, is investigating the use of MIC acceleration in short-range molecular dynamics simulations.

Their paper in the Proceedings of the First International Workshop on Code OptimiSation for MultI and many Cores (COSMIC’13) begins with the observation that heterogeneous systems built with accelerators (like GPUs) or coprocessors (like Intel MIC) are increasing in popularity. Such architectures are used for their ability to exploit large-scale parallelism.

In response to this evolving paradigm, the authors present a hierarchical parallelization scheme for molecular dynamics simulations on heterogeneous systems that combine CPUs and MIC coprocessors, specifically one 2.60GHz eight-core Intel Xeon E5-2670 CPU and one 57-core Intel Knights Corner coprocessor.

They propose to exploit multi-level parallelism by combining

(1) Task-level parallelism using a tightly-coupled division method,

(2) Thread-level parallelism employing spatial-decomposition through dynamically scheduled multi-threading, and

(3) Data-level parallelism via SIMD technology.

The team reports strong performance on the hybrid CPU-MIC system. They write: "by employing a hierarchy of parallelism with several optimization methods such as memory latency hiding and data pre-fetching, our MD code running on a CPU-MIC heterogeneous system…achieves (1) multi-thread parallel efficiency of 72.4% for 57 threads on the co-processor with up to 7.62 times SIMD speedup on each core for the force computation task, and (2) up to 2.25 times speedup on the CPU-MIC system over the pure CPU system, which outperforms our previous work on a CPU-GPU (one NVIDIA Tesla M2050) platform."
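The combination of thread- and data-level parallelism the authors describe is easiest to see in code. Below is a minimal sketch in C with OpenMP; it is not the authors' implementation, and the cell-list layout, cutoff handling and Lennard-Jones-style pair force are illustrative assumptions (the MIC offload step is omitted). It shows the same general pattern: dynamically scheduled multi-threading over spatial cells, with SIMD vectorization of the inner force loop.

```c
#include <stddef.h>

/* Illustrative short-range force kernel (not the paper's code):
 * threads are scheduled dynamically over spatial cells (thread-level
 * parallelism) and the inner loop over particle pairs is vectorized
 * (data-level parallelism). Data layout and names are assumptions. */
void compute_forces(size_t ncells, const size_t *cell_start, const size_t *cell_count,
                    const double *x, const double *y, const double *z,
                    double *fx, double *fy, double *fz, double cutoff2)
{
    #pragma omp parallel for schedule(dynamic)
    for (size_t c = 0; c < ncells; ++c) {
        for (size_t i = cell_start[c]; i < cell_start[c] + cell_count[c]; ++i) {
            double fxi = 0.0, fyi = 0.0, fzi = 0.0;
            /* A real MD code would loop over a neighbor list spanning
             * adjacent cells; scanning one cell keeps the sketch short. */
            #pragma omp simd reduction(+:fxi,fyi,fzi)
            for (size_t j = cell_start[c]; j < cell_start[c] + cell_count[c]; ++j) {
                double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
                double r2 = dx*dx + dy*dy + dz*dz;
                if (r2 < cutoff2 && r2 > 0.0) {   /* r2 == 0 skips the i == j pair */
                    /* Lennard-Jones-style pair force in reduced units. */
                    double inv2 = 1.0 / r2, inv6 = inv2 * inv2 * inv2;
                    double fpair = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
                    fxi += fpair * dx; fyi += fpair * dy; fzi += fpair * dz;
                }
            }
            fx[i] += fxi; fy[i] += fyi; fz[i] += fzi;
        }
    }
}
```

Dynamic scheduling matters here because cells can hold different numbers of particles, so static work division would leave some threads idle while others finish their heavier cells.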

Next >> Computer Modeling Benefits HIV Treatment

Computer Models Help Predict Response to HIV Drugs

New research published in the latest issue of the Journal of Antimicrobial Chemotherapy could improve the treatment of HIV patients in resource-limited settings.

According to the study, the models can predict how HIV patients whose drug therapy is failing will respond to combination antiretroviral therapy (ART). Most notably for resource-constrained regions, the models do not require the expensive genotyping tests that are normally used to predict drug resistance. In effect, the researchers created models that predict response to ART without a genotype, with accuracy comparable to a genotype-based assessment.

Julio Montaner, former President of the International AIDS Society, commented: “This is the first time this approach has been tried with real cases of treatment failure from resource-limited settings.”

Montaner, who directs the BC Centre for Excellence in HIV & AIDS in Vancouver, Canada, and is an author on the paper, added: "The results show that using sophisticated computer-based algorithms we can effectively put the experience of treating thousands of patients into the hands of the under-resourced physician with potentially huge benefits."

The models are available for free on the RDI website at http://www.hivrdi.org.

Next >> The Science of Clouds

The Science of Clouds – Real Clouds

Climate models continue to improve, and scientists are producing realistic representations of the oceans, ice, land surfaces and atmospheric conditions. However, a model will always have some degree of uncertainty, and when it comes to climate models, clouds pose the greatest challenge to accuracy.

As an article at Berkeley Lab News Center explains, “clouds can both cool the planet, by acting as a shield against the sun, and warm the planet, by trapping heat.”

Lawrence Berkeley National Laboratory scientist David Romps is investigating the behavior of clouds. He hopes to explain why clouds behave the way they do and how their cover affects the planet's temperature.

“We don’t understand many basic things about clouds,” he says. “We don’t know why clouds rise at the speeds they do. We don’t know why they are the sizes they are. We lack a fundamental theory for what is a very peculiar case of fluid flow. There’s a lot of theory that remains to be done.”

The earth's response to atmospheric levels of CO2 is studied using global climate models (GCMs) on lab supercomputers. At current computational limits, GCMs cannot resolve atmospheric features smaller than roughly 100 kilometers. Convective clouds, however, are closer to 1 km in size, placing them well below the resolution of GCMs. In response to this dilemma, climate scientists use submodels to approximate cloud behavior. That approach gets the job done, but it comes with its own set of limitations, which Romps is chipping away at.

He’s already had some early successes. His theory that climate change, or rising temperatures, will result in fewer clouds was confirmed with a high-resolution model.
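To give a sense of why 1-kilometer convective clouds remain out of reach for global models, here is a rough back-of-the-envelope scaling in C. The numbers are illustrative assumptions, not figures from the article: refining the horizontal grid from 100 km to 1 km multiplies the number of grid columns by about 10,000, and a CFL-type timestep constraint adds roughly another factor of 100.

```c
#include <stdio.h>

/* Back-of-the-envelope cost scaling for refining a global model's
 * horizontal grid from ~100 km to ~1 km. Factors are illustrative;
 * vertical refinement and physics costs would push the number higher. */
int main(void)
{
    double coarse_dx_km = 100.0;  /* typical GCM grid spacing   */
    double fine_dx_km   = 1.0;    /* scale of convective clouds */

    double refinement   = coarse_dx_km / fine_dx_km;   /* 100x        */
    double more_columns = refinement * refinement;     /* ~10^4       */
    double more_steps   = refinement;                  /* ~10^2 (CFL) */
    double cost_factor  = more_columns * more_steps;   /* ~10^6       */

    printf("Grid columns increase by ~%.0fx\n", more_columns);
    printf("Timesteps increase by    ~%.0fx\n", more_steps);
    printf("Total cost increases by  ~%.0e x\n", cost_factor);
    return 0;
}
```

Under these assumptions, resolving convective clouds globally costs on the order of a million times more computation than today's GCM grids, which is why submodels remain the practical workaround.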

Next >> Reliable Cloud Computing

Making HPC Cloud Computing More Reliable

A team of computer scientists from Louisiana Tech University has contributed to the growing body of HPC cloud research, specifically as it relates to the reliability of cloud computing resources. Their paper, "A Reliability Model for Cloud Computing for High Performance Computing Applications," was published in Euro-Par 2012: Parallel Processing Workshops.

Cloud computing and virtualization allow resources to be used more efficiently. Public cloud resources are available on demand and don't require a large capital expenditure. But as the number of software and hardware components grows, so does the chance of server failure. The researchers assert that it's important for service providers to understand the failure behavior of a cloud system so they can better manage its resources. Much of their research applies specifically to running HPC applications in the cloud.

In the paper, the researchers “propose a reliability model for a cloud computing system that considers software, application, virtual machine, hypervisor, and hardware failures as well as correlation of failures within the software and hardware.”

They conclude that failures caused by dependencies make the system less reliable, and that as the system's failure rate increases, its mean time to failure decreases. Not surprisingly, they also find that increasing the number of nodes decreases the reliability of the system.
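That last finding follows from elementary reliability math: treating a cluster as a series system of nodes with independent, exponentially distributed failures, the system failure rate is the sum of the node rates, so the mean time to failure shrinks as nodes are added. The sketch below illustrates only that generic relationship; it is not the authors' model, which also accounts for correlated software, application, VM, hypervisor and hardware failures, and the failure rate used is an assumed value.

```c
#include <stdio.h>

/* Generic series-system illustration (not the paper's model): if each of
 * n identical nodes fails independently at constant rate lambda, the
 * system fails when any node fails, so the system rate is n*lambda and
 * the mean time to failure (MTTF) is 1/(n*lambda). */
static double system_mttf_hours(int nodes, double node_failure_rate_per_hour)
{
    return 1.0 / (nodes * node_failure_rate_per_hour);
}

int main(void)
{
    double lambda = 1.0 / 10000.0;        /* assumed: one node failure per 10,000 hours */
    int sizes[] = {1, 16, 256, 4096};

    for (int i = 0; i < 4; ++i)
        printf("%5d nodes -> system MTTF ~ %8.1f hours\n",
               sizes[i], system_mttf_hours(sizes[i], lambda));
    return 0;
}
```

Correlated failures, as modeled in the paper, make matters worse than this independent-failure baseline suggests, which is the point of the authors' more detailed treatment.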
