Back to the Future

By Tom Gibbs, Contributing Author

April 2, 2007

You may not need to fire up Dr. Emmett Brown’s “flux capacitor” and hop a ride with Marty McFly to go “Back to the Future,” because the computing portion of Grid 2.0 is going through a metamorphosis that is just beginning to take shape, and it’s starting to look very familiar. A great philosopher once said, “Those who cannot remember the past are condemned to repeat it.” We may not exactly be doomed to a stroll down the “green mile” in this case, but it does look like we’ll be reliving some trends from computing’s storied past.

Over the past year, I’ve been focused on the communications and consumer usage models that are emerging with the current phase of the approach to delivering information technology commonly called “grid.” My observation about a year ago tied an overall shift in the way consumers and businesses were acquiring and using software to the approach for developing and deploying IT infrastructure being worked out by the grid community. The new wave of software-as-a-service that is born and delivered over the Internet was, and is, being called out in headlines as “Web 2.0,” after a term coined by Tim O’Reilly a few years ago. I termed the application of a grid-based infrastructure to the emerging Web 2.0 application development and delivery model “Grid 2.0.”

Web 2.0 and the underlying Grid 2.0 infrastructure have been taking off faster than the user community of the virtual world “Second Life.” While this has been happening, I’ve started to observe that the traditional computing and scientific usage models that are the cornerstone of Grid 1.0 may be going through a quieter but perhaps just as fundamental transformation. The current changes bear a striking resemblance to events that unfolded 40 years ago in the initial stages of the computer industry. I’m beginning to conclude that we may be coming full circle with respect to meeting the needs of the technical computing user group. While not quite as sanguine as the late, great philosopher Santayana, the former great catcher and manager for the New York Yankees may have summed this up best when he observed, “It’s like déjà vu all over again.”

If we did decide to join Marty McFly — jumped into the DeLorean, popped the clutch, kicked Dr. Emmett Brown’s flux capacitor into gear and zoomed back to the early 1990s — we could choose to land in Illinois and meet up with Doctors Ian Foster, Charlie Catlett and Carl Kesselman to get a look at the beginning of the concept they would name “grid.” We’d meet some true visionaries who were developing an approach to architectural virtualization on a grand scale, and we’d see a scientific computing industry in a state of massive transition. The computer systems used by nearly 100 percent of the community doing large-scale scientific research and engineering over the previous decade were based on custom processors, or used attached custom processors, to perform the numerical calculations. The shift under way was from custom processors to commercial off-the-shelf processors. The executives in the custom processor industry would come to refer to this epoch as “the attack of the killer micros,” a pejorative reference to the title of a cult classic B-movie with a famously absurd plot line.

In this case, however, the plot line of the trend wasn’t absurd at all. It was based on the empirical observation made by Dr. Gordon Moore of Intel in the late 1960s (known as Moore’s Law) that, roughly every 18 months, a combination of economics and science would allow the number of transistors that could be cost-effectively manufactured on a single die of silicon to double. When applied to general-purpose microprocessors, this doubling in feature density resulted in a doubling in raw performance, because the closer you could pack the transistors together, the faster you could run the processor — with the speed of light being the governing factor. Whether you apply this rate of improvement to your retirement portfolio or to processor performance — or anything else for that matter — the growth is exponential. Hindsight is 20/20: looking back now, it’s clear that the custom processor designers couldn’t keep up.
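
To make that compounding concrete, here is a minimal sketch in Python of what an 18-month doubling period implies over time. The starting count and the years shown are placeholders for illustration only, not figures from the article.

```python
# Minimal sketch of the compounding described above: transistor counts that
# double every 18 months. Starting count and time span are illustrative placeholders.

def transistors_after(start_count: float, years: float,
                      doubling_period_years: float = 1.5) -> float:
    """Transistor count after `years` of steady doubling every 1.5 years."""
    return start_count * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    start = 2_300  # on the order of the earliest microprocessors, used only as a round number
    for elapsed in (0, 3, 6, 9, 12, 15):
        print(f"year {elapsed:2d}: ~{transistors_after(start, elapsed):,.0f} transistors")
```

Over 15 years the count grows by a factor of roughly a thousand, which is the exponential curve the custom designers were racing against.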

There was another aspect of the continual improvement in general-purpose microprocessor performance that might have been more important than bragging rights in raw performance: Software developers could simply ride the exponential performance curve. Why tweak code to gain performance advantages when you could just wait a year and the processor vendor would give it to you for free? Developers had to do some level of parallel processing to take full advantage of the “killer micros,” but they were already doing that anyway to get more performance out of the custom processors. The end of the story was certain; it was only a matter of time.

A proxy for this transition is the Top500 list of computer systems, which went from being dominated by custom processor-based systems in the early ’90s to being filled almost entirely with systems based on general-purpose microprocessors in the last few years. Game over! But before we declare clear and present victory, let’s take a peek at how the game began, and we may see that it would be more appropriate to declare “Inning over!”

We could choose to rescue McFly from the killer micros, hop back in the DeLorean and go back to the early 1970s, not long after Dr. Moore derived his eponymous law. Once the smoke cleared, we could expect to hear the Reverend Al Green belt out his No. 1 hit “Let’s Stay Together” on the radio. We’d find that the leading computer designers of the time — Gene Amdahl and Seymour Cray — had decided they didn’t want to live the lyrics: they were leaving their respective and respected employers — IBM and Control Data Corporation — and setting out to build their own unique systems.

Cray, in particular, was frustrated with the lack of innovation in floating point performance and within a few years would bring the Cray 1 to market. It should be noted that this breakthrough design came to market with no compiler or operating system, at about the same time that Ted Hoff at Intel developed the first microprocessor for a calculator company. Neither of them probably thought about it then, but a race was on.

Cray and the other custom designers toiled long and hard to develop a solution to a basic problem of the time: Technical computing users needed far more computing power than general-purpose computers, let alone microprocessors, could deliver. The gap in performance, and the related demand for additional computing power, was large enough to support multiple vendors and approaches, with single-system prices in the range of $5 million to $10 million, which, adjusted for inflation, translates to roughly $50 million to $100 million in current dollars.

The late ’70s saw a wide variety of approaches to the basic performance problem. Some designers, like Cray, developed fully integrated systems. IBM developed a separate “vector” unit that could be plugged into one of its general-purpose mainframes, and Seymour Cray’s former employer finally bootstrapped itself out of financial trouble and brought its Cyber series of products to market. Other companies, like Floating Point Systems, developed array processors that plugged into an I/O slot on a general-purpose system such as an IBM mainframe or a Digital Equipment Corporation minicomputer.

If we kicked the DeLorean into gear and hit the brakes in the early 1980s, with longitude and latitude in upstate New York, we might be listening to Joe Cocker’s No. 1 hit “Up Where We Belong,” which starts with the lyrics “Who knows what tomorrow brings?” If we trucked on over to Cornell University, we could meet with Dr. Ken Wilson, who had at least part of the answer. Wilson would win the Nobel Prize in Physics in 1982, the first time the prize was awarded for theory supported entirely by computer simulation.

He would then become the head of the Cornell Theory Center, a facility built around a parallel configuration that combined IBM mainframes equipped with vector facilities and array processors from Floating Point Systems, some of which carried additional custom accelerator cards to speed up specific functions like matrix algebra. That stacking of accelerators on top of accelerators was an indication of the innovation required to achieve speed. And you thought the flux capacitor under the hood of the DeLorean was a wild design worthy of Rube Goldberg. In the mid-’80s, these were the lengths to which one went in the name of computer-assisted science.

Oh, it was a heady time for computer hardware architects — which resulted in a giant headache for software developers. Each of the fully integrated systems, like the Cray and Control Data Cyber machines, had its own operating system, compilers and libraries, which were always afterthoughts of the original design teams. The array processors were notoriously tricky to program, with unique library functions and thorny overhead issues as you moved data on and off the I/O bus and dealt with different data formats. Because there was no common architecture for tools developers to build to, the state of the industry for software was clumsy at best. Yuck. Let’s get outta here!

If we hopped back into the DeLorean and got off in mid-2001, we’d find most of the focus in the computer industry was on deflated stock valuations and excess inventories. It would be easy to relate to Usher’s No. 1 hit “U Got It Bad,” where the lyrics “Everything that used to matter don’t matter no more” would ring true across the entire computing industry. In the midst of this economic fog there was a quiet but growing concern among the dedicated scientists developing future microprocessor products. The oft-feared end to Moore’s Law wasn’t the issue; the scientific brain trust had concluded we could drive higher transistor densities for at least another 10 to 15 years. Unfortunately, there was a more immediate issue, one summed up by the theories of Dr. James Clerk Maxwell.

Unlike Dr. Emmett Brown, Dr. Maxwell was a real scientist, and unlike Moore’s Law, Maxwell’s equations and relations are theory grounded in mathematics, immutable by time. At some point, Moore’s Law will cease to be a law, while Moore and others in the field of semiconductor design are confronted with the reality that the laws of electromagnetism and thermodynamics won’t come to an end anytime soon. The processor designers were forecasting that they could keep cramming more transistors closer together per Moore’s Law and run them at higher frequencies, just as they had been doing for the previous 25 years, and the processors would continue to go faster, but they’d also be about as hot as a star. And not a big star like Britney Spears, but a big star like the Sun.
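
A rough sense of why the frequency race ran into heat can be sketched with the standard dynamic-power relation for CMOS: power grows with capacitance, the square of voltage, and frequency, summed over the switching transistors. The constants in the sketch below are placeholders chosen only to show the trend; they are not real process parameters and do not come from the article.

```python
# Rough sketch of the power wall: dynamic power ~ activity * C * V^2 * f per
# switching transistor. All constants are illustrative placeholders, not real
# process data.

def dynamic_power_watts(num_transistors: float, cap_per_transistor_f: float,
                        voltage_v: float, freq_hz: float, activity: float = 0.1) -> float:
    """Approximate dynamic power for a chip with the given transistor count."""
    return activity * num_transistors * cap_per_transistor_f * voltage_v ** 2 * freq_hz

if __name__ == "__main__":
    # Doubling the transistor count while also doubling frequency compounds the heat problem.
    for n, f in [(50e6, 1e9), (100e6, 2e9), (200e6, 4e9)]:
        watts = dynamic_power_watts(n, 1e-15, 1.2, f)
        print(f"{n:.0e} transistors @ {f/1e9:.0f} GHz -> ~{watts:.0f} W")
```

Even with these toy numbers, the power draw climbs by more than an order of magnitude across three generations, which is the trajectory the designers were not willing to follow.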

The early warning signals came from power users on Wall Street, who were seeing their utility bills go through the roof and were starting to hit the megawatt limits in their respective computer rooms. Then the large Web-based service providers like Google started to put their servers in every nook and cranny with a power outlet. We’re talking about a literal garage shop! More recently, Google, Microsoft and Yahoo started buying plots of land on the Columbia River to take advantage of the power and potential cooling capability.

There was one obvious solution: Use the improved feature size to put multiple processor cores on a single die, cores that would not run at such high frequencies but would deliver increased performance by running in parallel. The concept was given the name multi-core, and the result would be as fantastic as light beer: it would taste about the same and have fewer calories. However, it didn’t take a connoisseur to uncover the nasty aftertaste, and the free ride for software was about to end.

Now, this didn’t mean that software developers would start checking in and out of rehab and shaving their heads, but at a minimum they’d have to recompile their applications. This was sometimes required with generational changes in microprocessors, so it seemed fine at first blush. Unfortunately, recompiling alone would deliver a performance improvement for only a relatively small set of applications. To benefit more broadly, applications would need to be modified with threading or some other form of parallel control so they could run across the multiple cores simultaneously. In some cases, the applications didn’t have the right structure to benefit from parallel execution at all, and the net result was that, even with changes to their code bases, applications were not going to see exponential improvements in performance from Moore’s Law in the future. The industry was about to collectively sing “Oops, I Did It Again,” as it ran fast and furious into the brick wall of performance described by Amdahl’s Law.
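
As a hedged illustration of the kind of restructuring involved, the sketch below splits an embarrassingly parallel loop across several cores using Python’s standard library. The workload (summing squares) is a stand-in, not any particular application discussed here.

```python
# Splitting an embarrassingly parallel loop across cores instead of waiting
# for a faster clock. The workload is a toy stand-in.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds: tuple[int, int]) -> int:
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    # Carve the index range into one chunk per worker; the last chunk absorbs the remainder.
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

The catch the article describes is exactly what this hides: only code that can be carved into independent chunks like this gets the benefit of the extra cores.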

Like Maxwell’s relations, Amdahl’s Law is rooted in mathematics, and it won’t expire anytime soon. In simple terms, it says that if you try to speed up an application with special processing, the overall speedup is limited by the fraction of time the application spends executing in the special processor. For example, say you were playing an Internet game where half the time (50 milliseconds out of every 100) the application was accessing the Internet, and the other half it was executing on a special gaming processor that was 10 times faster than the general-purpose processor. The response time you’d see would be 55 milliseconds. Even if the special processor were infinitely fast, the speedup would be, at best, 2x.
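
The gaming example can be checked with a few lines of Python. The helper below is a generic sketch of Amdahl’s Law; the 100-millisecond baseline is implied by the article’s 50/50 split and is used here only to reproduce the numbers.

```python
# Amdahl's Law: overall speedup when only a fraction of the work is accelerated.

def amdahl_speedup(accelerated_fraction: float, accel_factor: float) -> float:
    """Overall speedup when `accelerated_fraction` of the time is sped up by `accel_factor`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / accel_factor)

if __name__ == "__main__":
    baseline_ms = 100.0                                   # 50 ms network + 50 ms compute
    print(baseline_ms / amdahl_speedup(0.5, 10))          # ~55 ms with a 10x gaming processor
    print(amdahl_speedup(0.5, 1e12))                      # approaches the 2x ceiling
```

The second print shows the ceiling directly: with half the time stuck on the network, no accelerator, however fast, can do better than 2x.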

Well, it’s time to get out of the hunk of stainless steel that defined the look of the maligned DeLorean — whose parent company had its start in the same year the Cray 1 shipped its first system — and, with our feet firmly planted in the present, observe that the more things change, the more they stay the same. The leading purveyors of processors for PCs and servers have committed to multi-core processors. Custom processors in multiple forms, from IBM’s Cell to graphics processing units to application-specific integrated circuits and field-programmable gate arrays, are all being announced and improved at a feverish clip. AMD just announced Torrenza, which will allow custom processors to be plugged directly into the HyperTransport interconnect. We can expect Intel to make a similar move soon to allow direct connection of custom CPUs in its platforms.

All of these innovations are workarounds to deliver continued improvements in computing power. They will each benefit from Moore’s Law for the next few years and see improvements in raw speed. They, and the user community, will also be subject to Amdahl’s Law, which will drive software developers to try to achieve a reasonable fraction of the potential raw performance. All of this effort will be an attempt to keep performance improving year over year in the face of Maxwell’s relations.

What does all this mean for the grid community? I think the focus of provisioning computing assets for a computing grid will need to be extended from the aggregation and utilization of multiple general-purpose processors to finding the computing facility on the network with the processing appropriate for a given workload. Workload mapping is already included in the provisioning model, and some users, such as Steve Yatko from Credit Suisse First Boston, have been very vocal about the need to pursue this aspect of grid for some time. With workload mapping, grid-provisioning software determines what kind of computing is required by the application, based on data types and other context provided by the developer and/or analyzed automatically by the provisioning software, and then maps it to the right hardware on the network.
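
As a toy sketch of the idea (not any vendor’s provisioning software), the Python below routes a job to a resource class based on a couple of declared traits. The job attributes, pool names and matching rules are all hypothetical; real grid schedulers use far richer workload descriptions and policies.

```python
# Toy workload-mapping sketch: inspect a job's declared characteristics and
# pick a (hypothetical) resource class on the grid. Names and rules are illustrative.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    data_parallel: bool   # e.g., dense linear algebra or signal processing
    bit_level_ops: bool   # e.g., pattern matching well suited to FPGAs

def choose_resource(job: Job) -> str:
    """Map a job to a hypothetical resource pool based on its declared traits."""
    if job.bit_level_ops:
        return "fpga-pool"
    if job.data_parallel:
        return "accelerator-pool"      # GPUs, Cell-style or vector hardware
    return "general-purpose-cluster"   # commodity multi-core servers

if __name__ == "__main__":
    for job in (Job("risk-simulation", True, False), Job("packet-scan", False, True)):
        print(job.name, "->", choose_resource(job))
```

The point of the sketch is the division of labor: the developer (or an analyzer) declares what the workload looks like, and the provisioning layer, not the programmer, decides where on the network it should run.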

At the most basic level, the need to write applications in a modular way (service-oriented architecture) that allows each module to share multiple computing assets and run on the most efficient computing, network and storage devices (service-oriented infrastructure) will shift from nice-to-have to mandatory. The grid will become more important than ever as the free lunch of serial performance gains gradually slows and eventually comes to an end.

About Tom Gibbs

Tom Gibbs is managing partner at Vx Ventures, a global consulting and investment partnership that focuses on the application of new IT architectures, such as grid computing, service-oriented architecture, RFID and sensor networks, to help communities and companies accelerate economic growth and improve the social well-being of their employees and citizens. Prior to Vx Ventures, Tom was the director of worldwide strategy and planning in the solutions market development group at the Intel Corporation, where he was responsible for developing global industry marketing strategies and building cooperative market development and marketing campaigns with Intel’s partners worldwide. He is a graduate in electrical engineering from California Polytechnic University in San Luis Obispo and was a member of the graduate fellowship program at Hughes Aircraft Company, where his areas of study included non-linear control systems, artificial intelligence and stochastic processes. He also previously served on the President’s Information Technology Advisory Committee for open source computing.
