Déjà Vu All Over Again

By Nicole Hemsoth

November 16, 2009

Steve Wallach, a supercomputing legend and recipient of the 2008 IEEE Seymour Cray Award, has participated in all 22 supercomputing shows. He is known for his contributions to high performance computing through the design of innovative vector and parallel computing systems. He is co-founder and chief science officer for Convey Computer Corp., a new company with a hybrid-core computer that marries the low cost and simple programming model of a commodity system with the performance of customized hardware architecture.

Never short on opinions, especially when it comes to high performance computing, Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way forward.

HPCwire: There’s been a lot of talk about how recent architecture advancements will bring GPU computing into the mainstream for high performance computing with significant speedups and energy savings. You disagree. Why?

Steve Wallach: GPUs are an interesting technology and some applications will probably see significant speed-up, but I don’t see them in the mainstream. Here’s why: programmers will have to put in a lot of effort to get the speed-up. Real-world applications consist of millions of lines of code, and organizations have invested too much money in those programs. If you tell them they have to modify those programs to use your technology, you lose. And it’s not just the software that has to be changed; it is the entire programming eco-structure: debuggers, profilers, memory management, and so on. Anything that disturbs those underlying realities is destined to become a niche player. This is the biggest difference between an accelerator and a coprocessor. A coprocessor is an extension of the instruction set and is part of the same environment. GPUs are not. With a GPU you have two different programming environments, and you have to move the data back and forth between them to get the benefits. The host cannot see the memory of the GPU; there are two separate address spaces.
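
To make the point concrete, here is a minimal CUDA sketch of the two address spaces Wallach describes. It is illustrative only (the kernel and variable names are made up, not from the interview or from Convey): the host allocates its own buffer, the device has a separate one, and nothing moves between them unless the programmer copies it explicitly.

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Illustrative kernel: scale every element of a vector held in device memory. */
    __global__ void scale(float *x, float alpha, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= alpha;
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_x = (float *)malloc(bytes);       /* host address space */
        for (int i = 0; i < n; ++i) h_x[i] = 1.0f;

        float *d_x = NULL;                          /* separate device address space */
        cudaMalloc((void **)&d_x, bytes);

        /* The host cannot dereference d_x: data must be staged explicitly. */
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
        cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);

        printf("h_x[0] = %f\n", h_x[0]);
        cudaFree(d_x);
        free(h_x);
        return 0;
    }

Every one of those copies is code the application would not need on a conventional host, which is exactly the programmer-productivity and eco-structure cost at issue.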

It’s similar to what we saw with attached array processors in the 80s. What we saw back then was that you had to explicitly move and manage the data, which reduced programmer productivity, raised the actual cost of ownership, and ultimately reduced performance. As it was back then, the GPU programming model is different from that of its host.

GPUs initially did not have ECC-protected memory; now they do. That history, however, shows they were not designed with general-purpose computing requirements in mind. You have to work hard to make it work, and not every application is amenable. The memory structure of a GPU is optimized for sequential access, but many programs require non-unity stride, which reduces performance for those applications. Classical supercomputers from Cray, Convex, NEC, and Fujitsu had very high-bandwidth, highly interleaved main memory. A GPU is not going to be a general-purpose or widespread solution, for technical and software reasons. You can only execute the “hot spot” on the GPU, for example, and you still need a classical host like the x86. It is not an integrated system. And, as of now, GPUs do not support virtual memory.
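
The stride point can be illustrated with a hedged CUDA sketch (hypothetical kernel names, not code from the interview): compare a unity-stride copy with a strided one. On a memory system optimized for sequential access, the second kernel achieves only a fraction of the bandwidth of the first.

    /* Unity stride: adjacent threads read adjacent words, so each warp's
       loads coalesce into a few wide memory transactions. */
    __global__ void copy_unit_stride(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];
    }

    /* Non-unity stride: adjacent threads read words `stride` apart, so each
       warp touches many separate memory segments and effective bandwidth drops. */
    __global__ void copy_strided(const float *in, float *out, int n, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[((size_t)i * stride) % n];
    }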

The GPU is really just a contemporary version of an attached array processor. If you look at the last 30 years, the architectures that have succeeded in the long term have been the ones that are easiest to program and that fit into the current environment. New languages take time to be learned and adopted, and organizations can’t hire the right people to program the machines. Each new full-time-equivalent programmer who has to be hired can easily add $200,000 to $300,000 per year to the cost of the new system. This is not a new phenomenon; it has been true for the past few decades. The time to reconfigure is really expensive.

HPCwire: You’ve said that “software is the ‘Trojan Horse’ of high-performance computing.” What do you mean by that?

Wallach: As an organization, you accept the hardware — the horse — and then the next day the software warriors pour out and devour your IT department. As technology enthusiasts, we get excited by new technologies on the strength of peak performance and micro-architecture, and the software questions come later, along with questions like “how do I fit it into my environment?” and “will I be able to achieve this level of performance with my applications?”

This has been true for the last 30 years and will be true for the next 30. If you go back to the 80s, you had all kinds of interesting technologies like array processors and others, but the ones with the best software, such as Convex, Cray, and Alliant, succeeded. They succeeded because programmers could leverage the technology from their FORTRAN and C environments. Integrated solutions like these succeeded, while companies like CDC failed because their software was part of an anemic development environment. As another example from the past, the Japanese vendors (Fujitsu and NEC) had exceptional software environments.
Fast forward to today. It’s like déjà vu all over again. A lot of new technologies are evolving, but they are not dealing with the software environment. Previous FPGA vendors had this problem: they were not integrated with the host environment. Vector processors, such as ClearSpeed, have this problem, and it is true of all accelerators and GPUs.

The GPUs have some great technologies, for visualization for example, but they are not integrated. You have to learn how to program in new languages like CUDA, and there aren’t a lot of major applications written in CUDA. Programmers have to re-code or set up source-to-source translators that go from FORTRAN to CUDA. From a technical perspective, it is much more efficient to compile directly from FORTRAN to assembly code; source-to-source translators are NOT as efficient as compilation to assembly code.

HPCwire: You talk about Convey’s hybrid-core computer as being an application-specific, low-power node. What is the significance of this description to the market?

Wallach: In the past decade, every processor generation has added new, specific instructions to general-purpose computers to speed performance. For example, current x86 systems include instructions that enhance image processing, and new instructions have been developed to enhance vector processing. Since clock rates are basically flat, you will see the trend toward specific instructions built into the microprocessors increase. If one instruction can replace 10 instructions, you have reduced the power required for that application. Our view is that it is now time to step up and increase the functionality of this approach: we advocate having one instruction replace 100 instructions. Now you don’t have to rely on faster clocks to increase performance; you rely instead on data and control paths. This approach is extremely useful for Convey and allows us to significantly increase performance while reducing power requirements, footprint, and overall facility costs for a data center.

HPCwire: In order to be successful, do you think new computing paradigms need to leverage existing eco-structures like Linux and Windows?

Wallach: Absolutely. As I said before, new languages mean higher costs and lower productivity. In VC deals, whenever I hear that you have to program in a new language to make it work, I turn it down.

With new computing paradigms, you get several benefits when they leverage existing eco-structures like Linux and Windows. First off, they are more easily acceptable in the marketplace. If I’m the data center manager, I don’t have to hire anyone new or have training for a new eco-structure. No need to program in OCCAM, for example. I call programs that don’t take into consideration legacy systems and that are obscenely difficult to integrate, “pornographic” programs — you can’t always describe them exactly, but you know them when you see them. In 1984, I converted a FORTRAN program from CDC to ANSI FORTRAN to see what they were doing and it was awful. In the contemporary world, CUDA is the new pornographic programming language.

In addition, Windows and Linux allow for adoption of related technologies from other industries without changing the programming environment. Industry innovators such as the researchers at Lawrence Berkeley National Laboratory believe, for example, that future supercomputers will use the processors found in cell phones and other hand-held devices. Why? Because they use so little energy and have proven that they can handle sophisticated tasks (October 2009, IEEE Spectrum: “Low-Power Supercomputers”). It is easy for manufacturers to build chips designed for specific HPC applications, just as they build different chips for each smartphone brand. Chip manufacturers will also provide the software — compilers, debuggers, profiling tools, even complete Linux operating systems — tailored to each specific chip they sell, which will make the new systems easy to integrate into a current environment.

HPCwire: Last year in HPCwire you said the future of HPC involves improved software, in particular more widespread use of PGAS languages and optical interconnects. Is this still the case?

Wallach: Yes. I believe the need for optical interconnects increases as we build large systems. The efficiency of scaling in parallel processing has to do with bandwidth and latency. Optical interconnects are much more efficient in terms of speed and power as compared to copper. PGAS (partitioned global address space) languages allow programmers a global view of their dataset and are much more efficient. PGAS languages also make it much easier to program highly parallel systems — they are much better than MPI.

HPCwire: Speaking of software, where is Convey on its development of different software personalities?

Wallach: We are on track with our development of personalities. Convey’s personalities are application architectures and instruction sets that support a wide array of application-specific solutions. Rather than develop hundreds of unique applications, we are creating a manageable number of personalities that can be leveraged in hundreds of different ways. We’ve shipped a range of different personalities for different customers, and we’ve got several others in development.

In the end, we anticipate developing around a dozen different core personalities. This is consistent with what leading researchers have determined as well. For example, in the study published by the University of California at Berkeley, “The Landscape of Parallel Computing Research: A View from Berkeley,” researchers define what they call MOTIFs, or computer application structures, for HPC. They describe 13 such structures on the Y axis, with the X axis representing a particular application and how it uses each structure. Berkeley’s view is consistent with ours: there are approximately a dozen different personalities that cover the full spectrum of computing. In our development, we add a third element to the equation, the memory system, and see this as a three-dimensional grid. In this case, optimal performance requires one of three memory systems: a unity-stride system (sequential accesses to dense data); a highly interleaved system (non-sequential accesses to sparse data spread across multiple independently accessible memory banks); or a “smart” memory system (PIM, which performs specific operations within the memory system itself and is thread-based).
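
The difference between the first two memory personalities is easiest to see in two ordinary loops. This is a hypothetical C-style sketch, not Convey code: the first loop makes unity-stride accesses over dense data, while the second gathers sparse data through an index vector, which is where a highly interleaved memory system earns its keep.

    /* Dense data, unity stride: consecutive elements, ideal for a memory
       system optimized for sequential access. */
    void dense_scale(float *y, const float *a, float s, int n)
    {
        for (int i = 0; i < n; ++i)
            y[i] = s * a[i];
    }

    /* Sparse data, gather access: idx[] sends every load to an unpredictable
       location, so performance depends on many independently accessible banks. */
    void sparse_gather(float *y, const float *a, const int *idx, int n)
    {
        for (int i = 0; i < n; ++i)
            y[i] = a[idx[i]];
    }

A PIM-style “smart” memory goes one step further and performs such operations inside the memory system itself.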

We are on track to have personalities whose memory structures and instruction sets match these MOTIFs, which is where we believe computing is going. For the HC-1, we ultimately anticipate addressing all 13 MOTIFs, though some will use the same personality.

HPCwire: Convey has just started shipping production units. Can you tell us about the company’s early customers and how they’re using the HC-1?

Wallach: Early applications for the HC-1 follow the classic profile of HPC applications: signal-image processing, computer simulations, bioinformatics, and other applications we can’t discuss at this time. We have HC-1s going into the world’s leading research labs, all of which we will talk about during SC09 at our booth.

You can catch up with Steve Wallach during SC09, where he is participating in a talk on “HPC Architectures: Future Technologies and Systems” from 1:30-2:00 p.m. on Thursday (Rm. E143-144); or at Convey’s booth (#2589).
