The Week in HPC Research

By Tiffany Trader

April 25, 2013

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes advancements in petascale-era development environments; the challenges of energy-efficiency in HPC; optimizing computer science instruction; and a possible path to extreme heterogeneity.

A Scalable Development Environment for the Petascale Era

The Juelich Supercomputing Centre (JSC) at Forschungszentrum Juelich GmbH, in Germany, has released the final scientific report detailing its efforts to develop “A scalable Development Environment for Peta-Scale Computing.” The goal of the project was to extend the Parallel Tools Platform (PTP) – an integrated development environment for parallel applications – to meet the needs of current-era petascale systems. PTP covers code analysis, performance tuning, parallel debugging and system monitoring.

The role of the Juelich Supercomputing Centre (JSC) was to provide a scalable system modeling solution for today’s supercomputers. This meant developing a new communication protocol for status data to be exchanged between the target remote system and the client running PTP. Remote support was essential as PTP provides transparent access to multiple remote systems via a unified interface.

The report describes the nature of the challenge as follows:

“The common requirement for all PTP components is that they have to interact with the remote supercomputer, e.g., applications are built remotely and performance tools are attached to job submissions and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real-time.”
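
To make the quoted requirement concrete, here is a minimal sketch (in no way PTP's actual protocol) of what collecting status data from a remote job scheduler can look like: an SSH round trip to a Slurm-style squeue query, reduced to compact records that a client-side monitor could display. The host name, scheduler choice, and record format are assumptions for illustration only.

```python
# Minimal illustrative sketch (not PTP's actual protocol): poll a remote
# scheduler over SSH and reduce its output to compact status records that a
# client-side monitoring view could consume. Host name and scheduler are
# hypothetical choices for this example.

import subprocess
from dataclasses import dataclass


@dataclass
class JobStatus:
    job_id: str
    state: str
    nodes: int


def poll_remote_scheduler(host: str) -> list[JobStatus]:
    """Query a Slurm-style scheduler on the remote system via ssh and parse
    its whitespace-separated output into status records."""
    result = subprocess.run(
        ["ssh", host, "squeue -h -o '%i %T %D'"],  # job id, state, node count
        capture_output=True, text=True, check=True,
    )
    statuses = []
    for line in result.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            statuses.append(JobStatus(fields[0], fields[1], int(fields[2])))
    return statuses


if __name__ == "__main__":
    for job in poll_remote_scheduler("remote-hpc.example.org"):  # hypothetical host
        print(job)
```

In PTP the equivalent data travels over the project's own scalable communication protocol rather than ad hoc SSH calls; the point of the sketch is simply that scheduler output on the remote side must be turned into structured status updates for the client.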

The remainder of the paper describes the process by which JSC developed the new monitoring component and successfully integrated it into PTP. The solution is now being used on JSC's BlueGene/Q system JUQUEEN, as well as on its general-purpose cluster JUROPA and its GPU cluster JUDGE. It has also been successfully applied to Jaguar, the Cray supercomputer maintained by Oak Ridge National Laboratory (now part of Titan), and to various XSEDE machines, including the Kraken and Keeneland systems at the National Institute for Computational Sciences, the Lonestar and Ranger systems at the Texas Advanced Computing Center, and Argonne National Laboratory's Blue Gene/P and Q systems.

Energy-Efficiency: A Balancing Act

Another research paper released this week demonstrates novel energy-saving strategies for parallel applications based on their point-to-point communication phases.

“Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale,” states the four-person research team (from Iowa State University and Old Dominion University in Norfolk, Va.).

[Figure 2 from the paper: a state diagram for the runtime procedure that applies energy savings efficiently. Transitions are labeled At through Kt (the first 11 letters of the alphabet), and the self-transitions (At, Et, Ft, It) indicate ongoing state action.]

The authors explore several frequency scaling strategies aimed at saving energy:

“In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This paper advocates for a runtime assessment of such overheads by means of characterizing point-to-point communications into phases followed by analyzing the time gaps between the communication calls.”
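
A rough sketch of that reasoning appears below, with hypothetical timing and power numbers (the authors' actual runtime model is more involved): scale the frequency down across an inter-communication gap only when the projected energy saving outweighs the cost of switching frequencies twice, down and back up.

```python
# Illustrative sketch of gap-based DVFS reasoning (not the authors' code):
# lower the CPU frequency across a communication gap only if the projected
# energy saving outweighs the frequency-switching overhead. All constants
# below are hypothetical placeholders, not measured values.

SWITCH_OVERHEAD_S = 50e-6        # assumed cost of one DVFS transition (seconds)
P_HIGH_W, P_LOW_W = 95.0, 60.0   # assumed package power at high/low frequency
SLOWDOWN = 1.02                  # assumed stretch factor at the low frequency


def worth_scaling_down(gap_s: float) -> bool:
    """Return True if running the gap at the low frequency saves energy
    after paying for two frequency switches (down and back up)."""
    energy_high = P_HIGH_W * gap_s
    energy_low = P_LOW_W * gap_s * SLOWDOWN + 2 * SWITCH_OVERHEAD_S * P_HIGH_W
    return energy_low < energy_high


def plan_phases(gap_lengths_s):
    """Label each inter-communication gap with the frequency to use."""
    return ["low" if worth_scaling_down(g) else "high" for g in gap_lengths_s]


if __name__ == "__main__":
    gaps = [10e-6, 200e-6, 5e-3]   # example gaps between MPI calls (seconds)
    print(plan_phases(gaps))       # -> ['high', 'high', 'low']
```

The design point the paper emphasizes is that this assessment happens at runtime: the gaps are measured from the application's own point-to-point communication pattern rather than fixed in advance.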

The tests employ NAS parallel benchmark problems and calculations performed by the quantum chemistry software package GAMESS. In the final analysis, the team achieved close to the maximum energy savings, with only a small performance loss of 2 percent.

Their work appears in the latest edition of the Journal of Parallel and Distributed Computing.

A Power Efficient General Purpose Supercomputer

The power wall is one of the biggest challenges facing the HPC community. While these mega-machines are essential to research and business, they are also big energy consumers. The issue, however, is getting a lot of attention, and optimizing performance-per-watt has become a key goal of the computing industry at all levels.

A team of UK researchers has written about the advances that will be needed over the coming years, observing that achieving a “pervasively energy-efficient” supercomputing architecture will require improvements across multiple fields. They believe that the LOEWE-CSC supercomputer at the University of Frankfurt, Germany, has already made significant headway toward these goals. That system, they write, “is setting new standards in environmental compatibility as well as energy and cooling efficiency for high-performance and general-purpose computing.”

The team notes that GPUs provide more compute performance per watt than standard processors, while “a balanced hardware configuration ensures that most of the compute power is available to the user when he employs optimized applications.” They add: “clever algorithms enable the user to fully exploit the computational potential and avoids to waste power when the processors idles, which is often a cause of inefficient programming.”

The LOEWE-CSC supercomputer achieved 740 MFlops-per-watt on a Linpack benchmark run, earning it an eighth-place finish on the November 2010 Green500 list. While that was a strong result for the time, the system has since been surpassed by more energy-efficient machines and has fallen to 109th position on the most recent Green500 list (November 2012).
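
For readers unfamiliar with the Green500 metric, performance-per-watt is simply sustained Linpack performance divided by average power draw. The numbers in the sketch below are made up for illustration and are not LOEWE-CSC's actual figures.

```python
# Back-of-the-envelope illustration of the Green500 metric. The inputs are
# hypothetical, not LOEWE-CSC's measured Linpack performance or power draw.

def mflops_per_watt(rmax_tflops: float, avg_power_kw: float) -> float:
    """Convert sustained TFlop/s and average kW into MFlops per watt."""
    return (rmax_tflops * 1e6) / (avg_power_kw * 1e3)


# Example with made-up numbers: a 300 TFlop/s run drawing 400 kW on average.
print(f"{mflops_per_watt(300.0, 400.0):.0f} MFlops/W")   # -> 750 MFlops/W
```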

The work appears in proceedings from the 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2013.

Evaluating Student Understanding of Core Concepts in Computer Architecture

Four researchers from across the country have written a paper that is sure to resonate with anyone who’s ever taken or taught a computer science course. In “Evaluating Student Understanding of Core Concepts in Computer Architecture,” the authors begin with the assertion: “Many studies have demonstrated that students tend to learn less than instructors expect in CS1.”

The researchers wondered whether these findings would hold true for subsequent, upper-level computer science courses, and set out to test their hypothesis.

Multiple computer architecture instructors developed basic concept questions for upper-division computer architecture courses. The questions were designed to test students’ minimum proficiency after completing the course, with the expectation that every student would be able to answer them. The tests were used to assess four separate computer architecture courses (taught by four different instructors) at two institutions: a large public university and a small liberal arts college.

The results in the authors’ words: “Our results show that students in these courses were indeed not learning as much as the instructors expected, performing poorly overall: the per-question average was only 56%, with many questions showing no statistically significant improvement from precourse to post-course. While these results follow the trend from CS1 courses, they are still somewhat surprising given that the courses studied were taught using research-based pedagogy that is known to be effective across the CS curriculum.”

The paper includes a discussion of the findings as well as recommendations for further study. While this may come as “bad news,” pinpointing the most difficult subject matter will help course instructors refine their lessons (see the Recommendations section for more on this topic).

There is no question these findings are significant; one wonders how surprising they will be to the HPC and broader computer science communities.

This paper opens the door for further discourse on this important subject.

Extreme Heterogeneity

The last decade has seen a continuing push toward heterogeneous architectures, but is there a more extreme form of heterogeneity still to come? There is, according to one group of computer scientists. The diverse research team, with affiliations that include Microsoft as well as US, Mexican, European and Asian universities, presented a paper on the subject at the International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN 2012) in San Marcos, Texas, December 13–15, 2012.

In “Introducing the Extreme Heterogeneous Architecture,” they write:

“The computer industry is moving towards two extremes: extremely high-performance high-throughput cloud computing, and low-power mobile computing. Cloud computing, while providing high performance, is very costly. Google and Microsoft Bing spend billions of dollars each year to maintain their server farms, mainly due to the high power bills. On the other hand, mobile computing is under a very tight energy budget, but yet the end users demand ever increasing performance on these devices.”

Conventional architectures have diverged to meet the needs of these different user groups. But wouldn't it be ideal if there were a way to deliver high performance and low power consumption at the same time? The authors set out to explore a novel architecture model that addresses both extremes, setting the stage for the Extremely Heterogeneous Architecture (EHA) project.

“EHA is a novel architecture that incorporates both general-purpose and specialized cores on the same chip,” the authors explain. “The general-purpose cores take care of generic control and computation. On the other hand, the specialized cores, including GPU, hard accelerators (ASIC accelerators), and soft accelerators (FPGAs), are designed for accelerating frequently used or heavy weight applications. When acceleration is not needed, the specialized cores are turned off to reduce power consumption. We demonstrate that EHA is able to improve performance through acceleration, and at the same time reduce power consumption.”
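
A toy sketch of that scheduling idea follows (it is not the authors' implementation): route each task to a matching specialized core when one exists, fall back to a general-purpose core otherwise, and gate the accelerators off once acceleration is no longer needed. The core types and workload mapping below are invented for illustration.

```python
# Toy sketch of the dispatch-and-power-gate idea described in the quote above
# (not the EHA authors' design). Core types, workloads, and the mapping between
# them are invented for illustration.

ACCELERATORS = {
    "gpu": "video transcode",     # hypothetical soft/hard accelerator mapping
    "fpga": "search indexing",
    "asic": "crypto",
}


class EHAChip:
    def __init__(self):
        self.powered_on = set()    # specialized cores currently powered up

    def dispatch(self, task: str) -> str:
        """Send a task to a matching specialized core, else a general-purpose core."""
        for core, workload in ACCELERATORS.items():
            if task == workload:
                self.powered_on.add(core)   # wake the matching accelerator
                return f"{task!r} -> {core} (accelerated)"
        return f"{task!r} -> general-purpose core"

    def idle(self):
        """When acceleration is no longer needed, gate the specialized cores off."""
        self.powered_on.clear()


chip = EHAChip()
for task in ["crypto", "spreadsheet", "video transcode"]:
    print(chip.dispatch(task))
chip.idle()   # specialized cores powered down to save energy
```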

As a heterogeneous architecture, EHA is capable of accelerating heterogeneous workloads on the same chip. This is useful because it is often the case that datacenters (either in-house or in “the cloud”) provide many services – media streaming, searching, indexing, scientific computations, and so on.

The EHA project has two main goals. The first is to design a chip that is suitable for many different cloud services, thereby greatly reducing both the recurring and non-recurring costs of datacenters and clouds. The second is to implement a lightweight EHA for mobile devices, with the aim of optimizing the user experience under tight power constraints.
