HPC Progress Starting from 10X

By Bill Sembrat

December 4, 2013

I was fortunate to have worked very closely with Seymour Cray for many years in many different roles and capacities. I started working with Seymour and Seymour’s machines at Control Data Corp. I was the Account Manager at Lawrence Livermore National Lab when Sid Fernbach was the “leader” there, and then the Account Manager at DOE. Although I took it for granted at the time, I was in “graduate school” between the world’s leading designer and the world’s leading user. I sometimes think back on that very special set of circumstances; I didn’t realize then how special it was, taking it all for granted and assuming that this was the way all high-tech companies and users worked. Before Seymour died I also worked with him at Cray Computer Corp and then at SRC Computers. I left SRC Computers after Seymour died.

Now, reflecting on the past and looking toward the future, let’s think about performance increases and how to get some significant ones. A reasonable starting goal is to look for a performance increase of 10X, with a pathway to at least another 10X.

Consider the problems and issues. It’s hard to add more racks, and because of power, heat, and other considerations we have to look at other areas. It would be nice if we could just go to the most fundamental part and get transistors to switch 10X faster. Faster electron transmission would be nice too. It would be easy if we could reach in and turn the dials up, ignoring, for the moment, heat dissipation and transmission delays. This is why tremendous effort is being spent on speeding up transistors. Well, let’s consider changing to gallium arsenide transistors. Seymour changed from silicon to gallium arsenide for speed and for some other characteristics, including reduced power requirements, but even with gallium arsenide we will still face limits. A lot of labs around the world are working on faster transistors, including silicon-germanium composites, so there is some hope for faster devices. As I understand it, these improvements are around 2X-4X or so, but even 2X would be great. Still, we are reaching limits (and can’t just get around some physical limitations), so this approach is not going to get us very far down the road, and herein lies the most significant issue.
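
To see why device gains alone fall short, here is a back-of-the-envelope sketch in Python (the 2X-4X figures are the rough estimates mentioned above, not measured values):

    # Back-of-the-envelope: how far do device-level speedups get us
    # toward a 10X goal, and toward the follow-on 10X (100X total)?
    device_gains = [2.0, 4.0]  # rough range of reported transistor improvements

    for gain in device_gains:
        print(f"device speedup {gain:.0f}X -> "
              f"need another {10.0 / gain:.1f}X from elsewhere for 10X, "
              f"{100.0 / gain:.1f}X for 100X")

Even the optimistic 4X device figure leaves a 2.5X gap to the first 10X and a 25X gap to the second; that remainder has to come from somewhere other than faster transistors.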

In search of speed in the late 1950s, Seymour changed from germanium transistors to the new “planar” silicon transistors from a new start-up company in California, Fairchild Semiconductor. Of course, that was before the label “Silicon Valley” existed. Seymour may have been the first, or at least one of the first, to use silicon transistors for HPC, also well before the label “HPC” existed. This was for the Control Data Corp 6600. The 6600 was a revolutionary machine that also greatly expanded the existing computer model by addressing real code and real workload issues. If you take a “big picture” look at the 6600, you see that it exploited the “RAM” model, foretelling the future. We have all been on a pathway set by a model that was already, somewhat, fully exploited with the 6600. The model hasn’t really changed, and all we have really been doing over the last several decades is riding the coattails of technology improvements, tweaking the model, improving it here and there, and adding parallelism. In search of faster serial speed for the Cray 3 and Cray 4, Seymour was again to change transistor technology, this time from silicon to gallium arsenide.

We cannot expect technology to get us large improvements; therefore, we have to address fundamental model changes in how we process codes and workloads. The guys in the farmhouse in Princeton thought they were in “fat city” when they came up with the idea of using CRTs as random access memory (RAM). As I remember, each CRT held 40 words of 40 bits. They had a few CRTs, which was, at the time, all of the random access memory that existed on the planet. They were losing bits until they discovered that sunlight coming in the windows was hitting the CRTs and dropping bits; they had to cover up all the windows. I would like to know what they were thinking about and considering as options besides RAM and CRTs, but did not use, and why. They were free thinkers, unencumbered by RAM or even by users at the time. Over the last 50 years many root model changes were considered, so it is important to understand the history, circumstances and compromises made at the time.

Even if you can come up with a plan to get to a 10X improvement (and let’s just assume that we can), you arrive at a lot of problems to be overcome. Here we run into the issue that always seems to come up with any speed improvement, memory, but we also come face-to-face with just moving data around and the limitations of even using wire. And one always needs to keep power and transmission issues in mind. We find several technology brick walls coming at us at one time. So this technology-driven path does not seem easy or quick, and may not be cost-effective. That said, we would still welcome (and use!) any technology improvements.
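
A roofline-style balance check makes the “moving stuff around” wall concrete: attainable performance is capped by either peak compute or memory bandwidth times arithmetic intensity. The numbers below are purely illustrative, not any particular machine:

    # Roofline-style bound: attainable FLOP/s is limited by either
    # peak compute or by memory bandwidth x arithmetic intensity.
    def attainable_gflops(peak_gflops, bw_gbytes_per_s, flops_per_byte):
        return min(peak_gflops, bw_gbytes_per_s * flops_per_byte)

    # A 10X compute boost is wasted if bandwidth stays flat:
    print(attainable_gflops(1000.0, 100.0, 0.25))   # 25.0 -- bandwidth-bound
    print(attainable_gflops(10000.0, 100.0, 0.25))  # still 25.0

Unless memory and the wires that feed the logic improve along with the logic itself, a faster processor just waits faster.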

Since improvements in technology will only get us so far, I am suggesting, just as Seymour was driven to do, that we look at root-level model changes; this may be the only way to see large improvements of 10X, with a path to another 10X or more. Seymour was always pushing speed, and many may be surprised to know that the follow-on machines to the Cray 4 were quite different, though also technology driven as usual. They reflected further attempts to include additional parallelism in an electrical structure without abandoning the serial structure of computer programs, while adding features that advances in technology had made possible. Seymour was a “free thinker,” always considering and thinking about the root model changes that would become necessary. Root-level model changes are more easily considered and understood if you consider both the details and the big picture, coupled with broad-based historical knowledge.
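
That tension between added parallelism and the serial structure of programs is the territory Amdahl’s law formalizes: the serial fraction of a code caps overall speedup no matter how much parallel hardware you add. A minimal sketch with illustrative numbers:

    # Amdahl's law: overall speedup when a fraction p of the work
    # can be parallelized across n units and the rest stays serial.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelized, unlimited hardware
    # can never beat 1 / 0.05 = 20X.
    for n in (10, 100, 1000):
        print(n, round(amdahl_speedup(0.95, n), 2))

This is one way to read the argument for root-level model changes: tweaking the existing model only attacks n, while a different model could attack the serial term itself.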

If we can start with a blank sheet, it is always good to keep in mind that there is a great need to reduce power, and the easy way is to just make everything simpler and to eliminate or reduce parts. It’s also time to go back to the very source and reconsider just how users are using machines and what they are trying to accomplish. In other words, look not only at how real codes load the machine, but at what users are trying to accomplish and how. Then we need to address different, new, and faster models. I really don’t think we in the computer business have been good vendors to our users. We have been forcing users to become computer experts just to use our machines. Users are just using a “tool” to get their work done and really don’t care about all this “technology” that we force them to understand in order to use computers. And the complexities are only increasing, with various types of parallelism, cache levels, threads, threadblocks, etc. Seymour always looked at applying his “gift” to give other people a better, faster, simpler, easier-to-use “tool” to better understand the world around us.
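
As a deliberately simplified illustration of that burden, compare what a user means with the index bookkeeping a thread/threadblock model asks of them. The block indexing below mimics the GPU style in plain Python; it is an illustrative sketch, not any vendor’s API:

    xs = [1.0, 2.0, 3.0]
    ys = [4.0, 5.0, 6.0]

    # What the user means:
    total = sum(x * y for x, y in zip(xs, ys))

    # What a thread/threadblock model asks them to think about:
    # partition the work into blocks of threads and index by hand.
    BLOCK = 256
    partial = [0.0] * ((len(xs) + BLOCK - 1) // BLOCK)
    for block_id in range(len(partial)):
        for thread_id in range(BLOCK):
            i = block_id * BLOCK + thread_id  # global index, GPU-style
            if i < len(xs):
                partial[block_id] += xs[i] * ys[i]
    total = sum(partial)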

Seymour would sometimes get tired of my continued questions about what else he was thinking about and why he didn’t use it or go in a different direction. Given the right circumstances, Seymour was disarmingly straightforward. At the right time Seymour even welcomed a discussion because, I think, it gave him a way to talk about what he was thinking; it was part of his discovery process when he came to difficult questions or a roadblock. I found I learned much more from what was thrown out and from the process of getting to the answer, especially when most answers seemed quite simple; it’s the “Why didn’t I think of that?” moment. The real question becomes not the answer but rather, if it’s so simple, why didn’t I think of that? You may quickly find that it was really not that simple, or that you were not asking the right question. Understanding the answer, going to the root, and also understanding the history is always much better and far richer if you are able to understand all the “whys.” Sometimes we are all too ready to just look for the “answer.”
