The Week in HPC Research

By Tiffany Trader

April 25, 2013

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes advancements in petascale-era development environments; the challenges of energy-efficiency in HPC; optimizing computer science instruction; and a possible path to extreme heterogeneity.

A Scalable Development Environment for Petascale Era

The Juelich Supercomputing Centre (JSC) at Forschungszentrum Juelich GmbH, in Germany, has released the final scientific report detailing its efforts to develop “A scalable Development Environment for Peta-Scale Computing.” The goal of the project was to extend the Parallel Tools Platform (PTP) – an integrated development environment for parallel applications – to meet the needs of current-era petascale systems. PTP covers code analysis, performance tuning, parallel debugging and system monitoring.

The role of the Juelich Supercomputing Centre (JSC) was to provide a scalable system modeling solution for today’s supercomputers. This meant developing a new communication protocol for status data to be exchanged between the target remote system and the client running PTP. Remote support was essential as PTP provides transparent access to multiple remote systems via a unified interface.
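
The report itself defines that protocol; the article does not reproduce it, but the general idea, a lightweight agent on the remote system that turns scheduler output into compact status messages for the client, can be sketched as follows. The message format, the JobStatus record, and the use of Slurm's squeue command are illustrative assumptions, not PTP's actual implementation.

```python
import json
import subprocess
from dataclasses import dataclass, asdict

@dataclass
class JobStatus:
    """One compact status record sent from the remote system to the client."""
    job_id: str
    user: str
    state: str   # e.g. PENDING, RUNNING, COMPLETED
    nodes: int

def poll_scheduler():
    """Turn the remote scheduler's text output into status records.

    Slurm's squeue is used here purely as an illustration; the actual
    command and fields depend on the scheduler running on the system.
    """
    out = subprocess.run(
        ["squeue", "--noheader", "--format=%i %u %T %D"],
        capture_output=True, text=True, check=True,
    ).stdout
    statuses = []
    for line in out.splitlines():
        job_id, user, state, nodes = line.split()
        statuses.append(JobStatus(job_id, user, state, int(nodes)))
    return statuses

def encode_update(statuses):
    """Serialize a status snapshot as one JSON message for the client."""
    return json.dumps([asdict(s) for s in statuses])
```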

The report describes the nature of the challenge as follows:

“The common requirement for all PTP components is that they have to interact with the remote supercomputer, e.g., applications are built remotely and performance tools are attached to job submissions and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real-time.”

The remainder of the paper describes the process by which JSC developed the new monitoring component and successfully integrated it into PTP. The solution is now being used on JSC’s BlueGene/Q system JUQUEEN, as well as its general-purpose cluster JUROPA and its GPU cluster JUDGE. It has also been applied successfully to Jaguar, the Cray supercomputer maintained by Oak Ridge National Laboratory (now part of Titan), and to various XSEDE machines, including the Kraken and Keeneland systems at the National Institute for Computational Sciences, the Lonestar and Ranger systems at the Texas Advanced Computing Center, and Argonne National Laboratory’s Blue Gene/P and Blue Gene/Q.

Energy-Efficiency: A Balancing Act

Another research paper released this week demonstrates novel energy-saving strategies for parallel applications based on their point-to-point communication phases.

“Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale,” writes the four-person research team from Iowa State University and Old Dominion University in Norfolk, Va.

Fig. 2 (from the paper): state diagram for the runtime procedure that applies energy savings efficiently. Transitions are labeled At through Kt, and a transition of a state into itself (At, Et, Ft, It) indicates an ongoing state action.

The authors explore several frequency scaling strategies aimed at saving energy:

“In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This paper advocates for a runtime assessment of such overheads by means of characterizing point-to-point communications into phases followed by analyzing the time gaps between the communication calls.”
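
The runtime procedure itself is detailed in the paper rather than the article, but the core decision, only downclocking when the gap between two point-to-point calls is long enough to amortize the frequency-switching overhead, can be sketched roughly as below. The threshold, the placeholder set_cpu_frequency call, and the gap values are illustrative assumptions, not the authors' code.

```python
import time

SWITCH_OVERHEAD_S = 0.001              # assumed cost of one frequency switch
MIN_GAP_S = 10 * SWITCH_OVERHEAD_S     # only downclock if the gap amortizes it

def set_cpu_frequency(khz):
    """Placeholder for a DVFS call (e.g. writing to the cpufreq sysfs interface)."""
    pass

def apply_energy_savings(gaps, low_khz=1_200_000, high_khz=2_400_000):
    """Walk through measured inter-call gaps and decide when to downclock.

    `gaps` holds the times (in seconds) between consecutive point-to-point
    communication calls, as characterized in an earlier iteration.
    """
    for gap in gaps:
        if gap >= MIN_GAP_S:
            set_cpu_frequency(low_khz)    # CPU mostly waits here: save energy
            time.sleep(gap)               # stand-in for the real communication wait
            set_cpu_frequency(high_khz)   # restore full speed before compute resumes
        else:
            time.sleep(gap)               # gap too short: switching would cost more

apply_energy_savings([0.0005, 0.02, 0.0008, 0.05])
```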

The tests employ NAS parallel benchmark problems and calculations performed with the quantum chemistry software package GAMESS. In the final analysis, the team achieved close to the maximum possible energy savings at the cost of a small performance loss of about 2 percent.

Their work appears in the latest edition of the Journal of Parallel and Distributed Computing.

A Power Efficient General Purpose Supercomputer

The power wall is one of the biggest challenges facing the HPC community. While these mega-machines are essential to research and business, they are also big energy consumers. The issue, however, is getting a lot of attention, and optimizing performance-per-watt has become a key goal of the computing industry at all levels.

A team of UK researchers has written about the advances that will be needed over the coming years, observing that achieving a “pervasively energy-efficient” supercomputing architecture will require improvements across multiple fields. They believe the LOEWE-CSC supercomputer at the University of Frankfurt, Germany, has already made significant headway toward these goals. That system, they write, “is setting new standards in environmental compatibility as well as energy and cooling efficiency for high-performance and general-purpose computing.”

The team notes that GPUs provide more compute performance per watt versus standard processors, while “a balanced hardware configuration ensures that most of the compute power is available to the user when he employs optimized applications.” As well: “clever algorithms enable the user to fully exploit the computational potential and avoids to waste power when the processors idles, which is often a cause of inefficient programming.”

The LOEWE-CSC supercomputer achieved 740 MFlops-per-watt on a Linpack benchmark run, earning it an eighth-place finish on the Green500 list of November 2010. An impressive figure at the time, it has since been surpassed by more energy-efficient systems and has fallen to 109th position on the most recent Green500 list (November 2012).
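
The Green500 figure quoted here is simply Linpack performance divided by average power draw during the run. A quick back-of-the-envelope sketch (the performance and power numbers below are invented for illustration, not LOEWE-CSC's measured values):

```python
def mflops_per_watt(rmax_gflops, power_kw):
    """Green500-style efficiency: Linpack Rmax divided by average power draw."""
    return (rmax_gflops * 1_000) / (power_kw * 1_000)

# Hypothetical numbers only: a 100 TFlops Linpack run drawing 135 kW
# works out to roughly 740 MFlops per watt.
print(mflops_per_watt(rmax_gflops=100_000, power_kw=135))  # ~740.7
```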

The work appears in proceedings from the 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2013.

Evaluating Student Understanding of Core Concepts in Computer Architecture

Four researchers from across the country have written a paper that is sure to resonate with anyone who’s ever taken or taught a computer science course. In “Evaluating Student Understanding of Core Concepts in Computer Architecture,” the authors begin with the assertion: “Many studies have demonstrated that students tend to learn less than instructors expect in CS1.”

The researchers wondered whether these findings would hold true for subsequent, upper-level computer science courses, and set out to test their hypothesis.

Multiple computer architecture instructors developed basic concept questions for upper-division computer architecture courses. The questions were designed to test students’ minimum post-course proficiency, with the expectation that every student would be able to answer them. The tests were used to assess four separate computer architecture courses (taught by four different instructors) at two institutions: a large public university and a small liberal arts college.

The results in the authors’ words: “Our results show that students in these courses were indeed not learning as much as the instructors expected, performing poorly overall: the per-question average was only 56%, with many questions showing no statistically significant improvement from precourse to post-course. While these results follow the trend from CS1 courses, they are still somewhat surprising given that the courses studied were taught using research-based pedagogy that is known to be effective across the CS curriculum.”
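
The article does not say which statistical test the authors applied, but the kind of pre/post comparison described can be illustrated with a paired test over per-student scores on a single question. The scores below are invented for illustration and are not data from the paper.

```python
from scipy import stats

# Invented per-student scores (fraction correct) on one concept question,
# before and after the course; not data from the paper.
pre  = [0.4, 0.6, 0.3, 0.7, 0.5, 0.4, 0.8, 0.5]
post = [0.5, 0.5, 0.4, 0.6, 0.6, 0.4, 0.7, 0.6]

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean post-course score: {sum(post) / len(post):.0%}")
print(f"paired t-test p-value:  {p_value:.2f}")  # > 0.05: no significant gain here
```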

The paper includes a discussion of the findings as well as recommendations for further study. While this may come as “bad news,” pinpointing the most difficult subject matter will help course instructors refine their lessons (see the Recommendations section for more on this topic).

There is no question these findings are significant; one wonders how surprising they will be to the HPC and broader computer science communities.

This paper opens the door for further discourse on this important subject.

Extreme Heterogeneity

The last decade has seen a continuing push toward heterogeneous architectures, but is there a more extreme form of heterogeneity still to come? There is, according to one group of computer scientists. The diverse research team, with affiliations that include Microsoft as well as US, Mexican, European and Asian universities, presented a paper on the subject at the International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN 2012) in San Marcos, Texas, December 13–15, 2012.

In “Introducing the Extreme Heterogeneous Architecture,” they write:

“The computer industry is moving towards two extremes: extremely high-performance high-throughput cloud computing, and low-power mobile computing. Cloud computing, while providing high performance, is very costly. Google and Microsoft Bing spend billions of dollars each year to maintain their server farms, mainly due to the high power bills. On the other hand, mobile computing is under a very tight energy budget, but yet the end users demand ever increasing performance on these devices.”

Conventional architectures have diverged to meet the needs of multiple user groups. But wouldn’t it be ideal if there were a way to deliver high performance and low power consumption at the same time? The authors set out to explore a novel architecture model that addresses both extremes, setting the stage for the Extremely Heterogeneous Architecture (EHA) project.

“EHA is a novel architecture that incorporates both general-purpose and specialized cores on the same chip,” the authors explain. “The general-purpose cores take care of generic control and computation. On the other hand, the specialized cores, including GPU, hard accelerators (ASIC accelerators), and soft accelerators (FPGAs), are designed for accelerating frequently used or heavy weight applications. When acceleration is not needed, the specialized cores are turned off to reduce power consumption. We demonstrate that EHA is able to improve performance through acceleration, and at the same time reduce power consumption.”

As a heterogeneous architecture, EHA is capable of accelerating heterogeneous workloads on the same chip. This is useful because it is often the case that datacenters (either in-house or in “the cloud”) provide many services – media streaming, searching, indexing, scientific computations, and so on.
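
The article's excerpt does not include code, but the scheduling idea, route a request to a matching specialized core when one exists and power it down when it goes idle, otherwise fall back to the general-purpose cores, can be sketched roughly as follows. The workload-to-accelerator table and the power-gating calls are illustrative assumptions, not part of the EHA design.

```python
# Names, the workload table, and the power-gating calls are assumptions
# used only to illustrate the dispatch idea described above.
ACCELERATORS = {
    "media_streaming": "gpu",
    "search_indexing": "fpga",
    "crypto":          "asic",
}

powered_on = set()

def power_on(unit):
    powered_on.add(unit)        # stand-in for turning a specialized core on

def power_off(unit):
    powered_on.discard(unit)    # idle specialized cores are switched off

def run_on_cpu(workload, payload):
    return f"general-purpose cores handled {workload}"

def run_on_accelerator(unit, workload, payload):
    return f"{unit} handled {workload}"

def dispatch(workload, payload):
    """Send a request to a matching specialized core if one exists,
    otherwise fall back to the general-purpose cores."""
    unit = ACCELERATORS.get(workload)
    if unit is None:
        return run_on_cpu(workload, payload)
    if unit not in powered_on:
        power_on(unit)          # wake the accelerator only on demand
    try:
        return run_on_accelerator(unit, workload, payload)
    finally:
        power_off(unit)         # power it back down once the request is served

print(dispatch("media_streaming", b"frame-data"))
print(dispatch("scientific_compute", b"matrix"))
```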

The EHA project has two main goals. The first is to design a chip suitable for many different cloud services, thereby greatly reducing both the recurring and non-recurring costs of datacenters and clouds. The second is to implement a lightweight EHA for mobile devices, with the aim of optimizing the user experience under tight power constraints.
