Déjà Vu All Over Again

By Nicole Hemsoth

November 16, 2009

Steve Wallach, a supercomputing legend and recipient of the 2008 IEEE Seymour Cray Award, has participated in all 22 supercomputing shows. He is known for his contributions to high performance computing through the design of innovative vector and parallel computing systems. He is co-founder and chief science officer for Convey Computer Corp., a new company with a hybrid-core computer that marries the low cost and simple programming model of a commodity system with the performance of customized hardware architecture.

Never short on opinions, especially when it comes to high performance computing, Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way for the future.

HPCwire: There’s been a lot of talk about how recent architecture advancements will bring GPU computing into the mainstream for high performance computing with significant speedups and energy savings. You disagree. Why?

Steve Wallach: GPUs are an interesting technology and some applications will probably see significant speed-up, but I don’t see them in the mainstream. Here’s why: programmers will have to put in a lot of effort to get the speed-up. Real-world applications consist of millions of lines of code, and organizations have invested too much money in those programs. If you tell them they have to modify those programs to use your technology, you lose. And it’s not just the software that has to be changed; it is the entire programming eco-structure: debuggers, profilers, and the memory model. Anything that disturbs those underlying realities is destined to become a niche player. This is the biggest difference between an accelerator and a coprocessor. A coprocessor is an extension to the instruction set and is part of the same environment. A GPU is not. A GPU system consists of two different programming environments, and you have to move the data back and forth between them to get the benefits. The host cannot see the memory of the GPU; there are two separate address spaces.
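
The separate address spaces Wallach describes are easy to see in code. Below is a minimal sketch, not from the interview, using the CUDA runtime’s C API: the host pointer and the device pointer are never interchangeable, and every transfer is an explicit copy the programmer must manage. The function and array names are hypothetical.

```c
/* Illustrative sketch (not from the interview): the two address spaces
 * Wallach describes, shown with the CUDA runtime's C API. The host
 * pointer and the device pointer are never interchangeable; every
 * transfer is an explicit, programmer-managed copy. Names are
 * hypothetical. */
#include <stddef.h>
#include <cuda_runtime.h>

void offload_step(float *host_data, size_t n)
{
    float *dev_data;                  /* lives in the GPU's address space */
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&dev_data, bytes);
    cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice); /* copy in  */

    /* ... launch a kernel that operates on dev_data ... */

    cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost); /* copy out */
    cudaFree(dev_data);               /* the host never dereferences dev_data */
}
```
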

It’s similar to what we saw with attached array processors in the 80s. What we saw back then was that you had to explicitly move and manage the data, which reduced programmer productivity, raised the actual cost of ownership, and ultimately reduced performance. As it was back then, the GPU’s programming model is different from that of its host.

GPUs initially did not have ECC-protected memory; now they do, but the omission shows how far they started from general-purpose computing requirements. You have to work hard to make a GPU work, and not every application is amenable. The memory structure of a GPU is optimized for sequential access, but many programs require non-unity stride, which reduces performance for those applications. Classical supercomputers from Cray, Convex, NEC, and Fujitsu had very high-bandwidth, highly interleaved main memory. A GPU is not going to be a general-purpose or widespread solution, for both technical and software reasons. You can only execute the “hot spot” on the GPU, for example, and you still need a classical host like the x86. It is not an integrated system. And, as of now, GPUs do not support virtual memory.
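
To illustrate the stride point, here is a minimal sketch (not from the interview) of the same reduction written with unity stride and with stride N over a row-major matrix in C; the names and sizes are hypothetical. Cache-based and GPU memory systems favor the first loop, while the highly interleaved memories of the classical vector machines were built to tolerate the second.

```c
/* Illustrative sketch (not from the interview): unity-stride versus
 * non-unity-stride access over the same row-major matrix. The second
 * loop touches addresses N*sizeof(double) bytes apart and typically
 * runs far slower on cache- or GPU-style memory systems. */
#define N 4096

double sum_unit_stride(const double a[N][N])
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];          /* consecutive addresses: stride 1 */
    return s;
}

double sum_strided(const double a[N][N])
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];          /* addresses N doubles apart: stride N */
    return s;
}
```
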

The GPU is really just a contemporary version of the attached array processor. If you look at the last 30 years, the architectures that have succeeded in the long term have been the ones that are easiest to program and that fit into the current environment. New languages take time to be learned and adopted, and organizations can’t hire the right people to program the machines. Each new full-time-equivalent programmer who has to be hired can easily add $200,000 to $300,000 per year to the cost of the new system. This is not a new phenomenon; it has been true for decades. Retooling is really expensive.

HPCwire: You’ve said that “software is the ‘Trojan Horse’ of high-performance computing.” What do you mean by that?

Wallach: As an organization, you accept the hardware — the horse — and then the next day the software warriors pour out and devour your IT department. As technology enthusiasts, we get excited by new technologies and their peak-performance microarchitectures; the software questions come later, along with questions like “how do I fit it into my environment?” and “will I be able to achieve this level of performance with my applications?”

This has been true for the last 30 years and will be true for the next 30. If you go back to the 80s, you had all kinds of interesting technologies, like array processors, but the ones that had the best software, such as Convex, Cray, and Alliant, succeeded. They succeeded because programmers could leverage the technology from their FORTRAN and C environments. Integrated solutions like these won out, while companies like CDC failed because their software was part of an anemic development environment. As another example from that era, the Japanese vendors (Fujitsu and NEC) had exceptional software environments.
 
Fast forward to today, and it’s like déjà vu all over again. A lot of new technologies are evolving without dealing with the software environment. Earlier FPGA vendors had this problem: they were not integrated with the host environment. Vector processors such as ClearSpeed’s have the same problem, and it is true of all accelerators and GPUs.

The GPUs have some great technologies for visualization, for example, but they are not integrated. You have to learn to program in new languages like CUDA, and there aren’t many major applications written in CUDA. Programmers have to re-code or rely on source-to-source translators that turn FORTRAN into CUDA. A conventional compiler goes from FORTRAN straight to assembly code, with no such intermediate step, and from a technical perspective that is much more efficient. Source-to-source translators are simply not as efficient as direct compilation to assembly code.

HPCwire: You talk about Convey’s hybrid-core computer as being an application specific, low power node. What is the significance of this description to the market?

Wallach: Over the past decade, every generation of general-purpose processors has added new, application-specific instructions to speed performance. The current x86 instruction set, for example, includes extensions for image processing, and new instructions have been added to enhance vector processing. Since clock rates are now basically flat, the trend toward building application-specific instructions into microprocessors will only increase. If one instruction can replace 10 instructions, you have reduced the power consumed by that application. Our view is that it is now time to step up the functionality of this approach: we advocate having one instruction replace 100 instructions. Then you no longer rely on clock rate to increase performance; you rely instead on data and control paths. This approach is extremely useful for Convey and allows us to significantly increase performance while reducing power requirements, footprint, and overall facility costs for a data center.
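
As a small-scale illustration of the principle — this is a hypothetical C sketch using standard x86 intrinsics, not Convey’s instruction set — a single SSE instruction already replaces four scalar additions. Convey’s approach pushes the same idea much further, toward one instruction replacing on the order of 100.

```c
/* Illustrative sketch (not from the interview): one vector instruction
 * doing the work of several scalar ones. _mm_add_ps compiles to a
 * single ADDPS instruction that performs four float additions at once.
 * Function names are hypothetical; the intrinsics are standard SSE. */
#include <xmmintrin.h>

void add4_scalar(float *c, const float *a, const float *b)
{
    for (int i = 0; i < 4; i++)    /* four separate add instructions */
        c[i] = a[i] + b[i];
}

void add4_simd(float *c, const float *a, const float *b)
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_add_ps(va, vb));  /* one ADDPS instruction */
}
```
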

HPCwire: In order to be successful, do you think new computing paradigms need to leverage existing eco-structures like Linux and Windows?

Wallach: Absolutely. As I said before, new languages mean higher costs and lower productivity. In VC deals, whenever I hear that you have to program in a new language to make it work, I turn it down.

With new computing paradigms, you get several benefits when they leverage existing eco-structures like Linux and Windows. First off, they are more easily accepted in the marketplace. If I’m the data center manager, I don’t have to hire anyone new or run training for a new eco-structure. No need to program in OCCAM, for example. I call programs that don’t take legacy systems into consideration and that are obscenely difficult to integrate “pornographic” programs: you can’t always describe them exactly, but you know them when you see them. In 1984, I converted a FORTRAN program from CDC to ANSI FORTRAN to see what they were doing, and it was awful. In the contemporary world, CUDA is the new pornographic programming language.

In addition, Windows and Linux allow for the adoption of related technologies from other industries without changing the programming environment. Industry innovators such as the researchers at Lawrence Berkeley National Laboratory believe, for example, that future supercomputers will use the processors found in cell phones and other hand-held devices. Why? Because they use so little energy and have proven they can handle sophisticated tasks (IEEE Spectrum, “Low-Power Supercomputers,” October 2009). It is easy for manufacturers to build chips designed for specific HPC applications, just as they build different chips for each smartphone brand. Chip manufacturers will also provide the software — compilers, debuggers, profiling tools, even complete Linux operating systems — tailored to each specific chip they sell, which will make the new systems easy to integrate into a current environment.

HPCwire: Last year in HPCwire you said the future of HPC involves improved software, in particular more widespread use of PGAS languages and optical interconnects. Is this still the case?

Wallach: Yes. The need for optical interconnects increases as we build larger systems. The efficiency of scaling in parallel processing comes down to bandwidth and latency, and optical interconnects are much more efficient than copper in terms of both speed and power. PGAS (partitioned global address space) languages give programmers a global view of their data set and are much more efficient. They also make it much easier to program highly parallel systems; they are much better than MPI.
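
For a flavor of the PGAS model, here is a minimal sketch in UPC (Unified Parallel C), one PGAS dialect of C. It is illustrative, not from the interview, and the names are hypothetical: every thread addresses one global array directly, with no explicit message passing as in MPI.

```c
/* Illustrative sketch (not from the interview) in UPC, a PGAS dialect
 * of C. Every thread sees one globally addressable array; there is no
 * explicit messaging as in MPI. Requires a UPC compiler. */
#include <upc.h>
#define N 1024

shared double x[N];                /* one global array, distributed across threads */

void scale_shared(double a)
{
    /* upc_forall distributes iterations by affinity: each thread
       updates the elements it owns, using ordinary array syntax. */
    upc_forall (int i = 0; i < N; i++; &x[i])
        x[i] *= a;
}
```
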

HPCwire: Speaking of software, where is Convey on its development of different software personalities?

Wallach: We are on track with our development of personalities. Convey’s personalities are application architectures and instruction sets that support a wide array of application-specific solutions. Rather than develop hundreds of unique implementations, we are creating a manageable number of personalities that can be leveraged in hundreds of different ways. We’ve shipped a range of personalities to different customers, and we have several others in development.

In the end, we anticipate developing around a dozen core personalities. This is consistent with what leading researchers have determined as well. For example, in the study published by the University of California at Berkeley, “The Landscape of Parallel Computing Research: A View from Berkeley,” the researchers define what they call motifs: computational application structures for HPC. They describe 13 such structures on the Y axis, with the X axis representing a particular application and how it uses each structure. Berkeley’s view is consistent with ours: roughly a dozen personalities cover the full spectrum of computing. In our development, we add a third element to the equation, the memory system, and see this as a three-dimensional grid. Depending on the motif, optimal performance requires either a unity-stride memory system (accessing sequential elements: dense data), a highly interleaved one (accessing non-sequential elements across multiple independently accessible memory banks: sparse data), or a “smart” memory system (PIM, performing specific operations within the memory system: thread-based).

We are on track to have personalities, with the matching memory structures and instruction sets, for these motifs, which is where we believe computing is going. For the HC-1, we ultimately anticipate covering all 13 motifs, though some will share the same personality.

HPCwire: Convey has just started shipping production units. Can you tell us about the company’s early customers and how they’re using the HC-1?

Wallach: Early applications for the HC-1 follow the classic profile of HPC applications: signal and image processing, computer simulations, bioinformatics, and other applications we can’t discuss at this time. We have HC-1s going into some of the world’s leading research labs, all of which we will talk about at our booth during SC09.

You can catch up with Steve Wallach during SC09, where he is participating in a talk on “HPC Architectures: Future Technologies and Systems” from 1:30-2:00 p.m. on Thursday (Rm. E143-144); or at Convey’s booth (#2589).
