Discussing the Many-Core Future

By Paul Tulloch

March 23, 2007

With the onset of multi-core processor technology, it has been widely suggested that we are at a turning point in the history and future of computation. Quite simply, we can no longer squeeze more computational capacity out of single-core technology now that increasing the speed of individual cores has reached an impasse. We have hit the wall, as the saying goes.

With the objective of continuing the quest for a Moore's-law-style computational speed-up, the hardware industry has introduced multi-core technology. This will, as recently demonstrated by Intel with its future-oriented 80-core chip, lead from a multi-core to a many-core future. Further down the road, assuming a continued trajectory, it could lead to a massive-core future in which a single chip contains thousands of processing cores.

With this sea change in hardware architecture, we are witnessing the software community wrestle with a massive shift from serial-based thinking to parallelism. From a cursory reading of recent articles on the topic, the prevailing view is a quite blunt and negative reaction to the grandiose trajectory that the hardware community has set in motion.

Some within the software community have gone so far as to suggest that, with the recent advances in hardware, they have been thrown the “Hail Mary of all passes” in the quest to meet economically institutionalized expectations for computational speed-up. Some have suggested that the challenges of parallelism bestowed on the software industry will have programmers looking into the abyss, and that ultimately the whole model, which has served as the far-reaching engine of technological innovation, could come to a crashing halt.

With this type of feedback coming from the software community, could it be that we are witnessing the end of an era? Could it be that, ten years from now, when I fire up my word processor, what I currently have in front of me will be as good as it gets? In many ways, the software community is presenting the case for such an argument.

The stakes are quite high and economically far-reaching. The steady speed-up and the promise of ever more and faster technology, combined with large portions of institutionalized marketing, propelled a once-small industry dedicated to the scientific community to heights that brought about astonishing change, so much so that it is typically compared to the industrial revolution. It has been a prime mover and leveler of almost everything in existence; that which it did not destroy and rebuild anew, it fundamentally altered.

Existing single-core technology and the innovation of faster and better were the shoulders that much of this stood upon. If the software challenges of parallelism prove to be the insurmountable obstacles that some within the software community have alarmingly claimed, then we may need a wholesale rethinking of the innovation process. Marketing may keep the process going for a while yet, but it can only take it so far before the emperor's clothes are finally revealed for what they are not.

I will suggest that it is precisely this mindset that has been prevalent within the culture of technology decision-making, and that it is quite contrary to how we should perceive this move from a serial present to a parallel future. In Schumpeterian, creative-destruction fashion, I would suggest that instead of bemoaning the death of single-core technology and the limits of serial computational speed-up, we should be celebrating it. We should instead realize that, with the change to many-core technology, we will finally be able to put low-cost, high-performance computing on the desktop. It is a time to realize that we have finally established a beachhead on the shores of real computational power. Finally, the visions of many long-held dreams of what the computer could be are within reach. Finally, an end to minimalist thinking and to a hardware architecture that has outlived its usefulness.

So how could such a viewpoint hold, and is it rationally based, given its contrast with the grim mindset beginning to beset a software industry faced with such a paradigm shift? Given the opportunity, I will attempt to paint in broad brush strokes the logic of such thinking. First, one has to firmly establish what is coming down the pipe from the hardware industry and the architectural changes we will most likely see in the near future. Like anything else within the realm of innovation and predictions of the future, it is subject to much speculation and shrouded in secrecy. However, it is becoming somewhat clearer of late.

Intel recently displayed its currently-in-the-lab, future-oriented prototype 80-core chip with a stated computational capability of one teraflops, revealing that not only is the current multi-core architecture not a passing fad, but legitimate plans are in place for the development of a many-core future. The new architecture will not just be an amazing array of many-core processing nodes but will come with a refreshing and imperative twist.

In a not-to-be-outdone response to Intel's look into the future, AMD recently unveiled its latest R600 GPU technology. The demonstration system, running two R600 GPUs in CrossFire mode and labeled a “teraflop in a box,” provides some soon-to-be-released computational speed-up, clocking in at the stated threshold of one teraflops. If we look into AMD's prospectus, with its stated intention of combining the functionality of the CPU and the GPU on a single chip, we see a future that provides for a heterogeneous many-core architecture.

It has been quite clearly demonstrated through general-purpose computation on GPUs by various groups, such as the GPGPU.org community, that the architecture within the GPU excels at many computationally intensive applications. Such work has shown speedups of anywhere from 5 times to, in some documented cases, 70 times that of a CPU, measured in flops.

The GPU has struggled since its inception to be taken seriously as anything but an add-on, accelerator-type extension to the CPU. However, in its “grand finale” as a discrete entity, it has, through innovation driven by the gaming and graphics community's needs and rather large wallets, eclipsed its master in the ability to deliver computational speed. Admittedly, the architecture was originally designed with a quite specific and limited goal, but it has evolved, and a more generalized computational ability based upon data parallelism is now being earmarked for it. The joining at the hip of these two architectures will open up a whole new realm of possibilities.

In rather general terms, based upon these two simple but quite revealing demonstrations, the future landscape of hardware architecture will be a blend of specialized CPU- and GPU-type cores. Programmers will have handfuls and handfuls of threads, dangled before them by an array of specialized cores, upon which to deliver their craft. It will offer up latent computational speed many times over what is available under the current single-core design. But getting back to our debate: how will software developers make use of this multitude of threads supplied by specialized cores in a manner that keeps the quest for faster and better circulating? What will the future of the software industry look like? More importantly, will the cycle of continual and somewhat staged releases of technology keep the consumer wanting to buy new and better chips, and new and better software? Will the pump remain primed or will the well run dry?

The dynamic, as rightly noted by the software industry, is no longer centered on how fast a single core can run a serial process, but on how efficiently a program can make use of the available cores and threads. However, upon examining the software community's negative response, I would suggest it could be deemed a bit of a knee-jerk reaction, as it unjustly focuses criticism on the most difficult of the future challenges that parallelism presents while seemingly discounting a whole new array of possibilities. That is, they have confined their focus to how efficiently these cores can be programmed from a traditional task-parallelism standpoint. I say unjustly, because it is precisely the asymmetrical distribution of how these cores and threads will be unleashed and applied to a whole new set of programming challenges that is the key to realizing and envisioning this new dynamism and its potential.

We will see a multitude of programming strategies and methodologies develop within the cultural space of software design. The distribution of outputs and solutions will run the gamut from hard-core task parallelism, to data parallelism, to hybrid or light parallelism, and right back to the traditional serial. (Recall that not every algorithm can be parallelized.) Some focus on the difficulties and complexities of task parallelism is warranted and will be important for advancing computational speed-up, but it is a limited perspective. The problems with hard-core task parallelism are well documented. However, given the leverage that a mass market offers, the traditional nuances of these programmatic challenges will be subjected to new market forces away from the confines of academia and highly specialized scientific realms, potentially opening up the field to the creative solutions that traditionally only a mass market can deliver. (The sketch below contrasts task and data parallelism to make the distinction concrete.)
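To make that spectrum concrete, here is a minimal, hypothetical C++ sketch, not taken from the article, assuming a C++11-capable compiler and a threading flag such as -pthread. It shows a data-parallel phase, in which every worker applies the same operation to its own slice of one large array, followed by a task-parallel phase, in which two unrelated jobs run concurrently on separate threads.

```cpp
// Hypothetical sketch: data parallelism vs. task parallelism on a
// multi-core CPU. Build example: g++ -std=c++11 -pthread sketch.cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Data parallelism: every worker runs the same operation on its own
// slice of one shared array.
void scale_slice(std::vector<double>& v, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i) v[i] *= 2.0;
}

int main() {
    std::vector<double> v(1000000, 1.0);

    // Data-parallel phase: split the array across however many cores the
    // hardware reports (the "handfuls of threads" the article anticipates).
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    std::size_t chunk = v.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = (t + 1 == n) ? v.size() : lo + chunk;
        workers.emplace_back(scale_slice, std::ref(v), lo, hi);
    }
    for (auto& w : workers) w.join();

    // Task parallelism: distinct, unrelated pieces of work run
    // concurrently, each on its own thread.
    auto total = std::async(std::launch::async,
                            [&v] { return std::accumulate(v.begin(), v.end(), 0.0); });
    auto peak  = std::async(std::launch::async,
                            [&v] { return *std::max_element(v.begin(), v.end()); });

    std::cout << "sum = " << total.get() << ", max = " << peak.get() << "\n";
    return 0;
}
```

The data-parallel half scales naturally as core counts grow, since the same slice-per-worker split works whether there are two cores or eighty; the task-parallel half is limited by however many genuinely independent tasks the application can expose.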

With respect to other forms of parallelizing, data-parallel initiatives would seem to offer the most immediate speed-up in terms of computationally intensive parallel innovations. As indicated, the general-purpose GPU community has made some quite impressive inroads into this new territory; stream computing on a GPU has matured to the point that commercial applications of the technology are becoming commonplace. Lastly, new forms of lightly parallelized applications are most likely the initiatives that will be introduced to the marketplace on a wide scale. These applications will consist of helper or add-on enhancements to many mainstream applications such as word processors, databases, spreadsheets and an assortment of others. They will come in the form of voice recognition, search facilities, translation services, handwriting recognition, and a slew of others. I label them lightly parallel because they will be application-independent but will have some limited memory-sharing and concurrency dimensions, a pattern sketched below.
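As a rough illustration of that “lightly parallel” pattern, the hypothetical C++ sketch below (again not from the article, and assuming a C++11 compiler with -pthread) runs a background helper, imagined here as a search indexer, on its own thread. The only shared memory between it and the main application is a single mutex-protected work queue, in the spirit of the limited memory sharing described above.

```cpp
// Hypothetical "lightly parallel" helper: a background indexer runs on
// its own core while the main application stays responsive.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// The only shared state: a small work queue guarded by one mutex.
std::queue<std::string> pending;
std::mutex              mtx;
std::condition_variable ready;
bool                    done = false;

void indexer() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        ready.wait(lock, [] { return !pending.empty() || done; });
        if (pending.empty() && done) return;
        std::string text = pending.front();
        pending.pop();
        lock.unlock();  // do the heavy work without holding the lock
        std::cout << "indexed " << text.size() << " characters\n";
    }
}

int main() {
    std::thread helper(indexer);

    // The "main application" keeps working and occasionally hands the
    // helper more text to process.
    for (int i = 1; i <= 3; ++i) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            pending.push(std::string(1000 * i, 'x'));
        }
        ready.notify_one();
    }

    {   // signal shutdown once there is no more work
        std::lock_guard<std::mutex> lock(mtx);
        done = true;
    }
    ready.notify_one();
    helper.join();
    return 0;
}
```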

On the mass-market front, without even considering the possibilities of parallelism, we will see serially-based multitasking reach new heights that springboard the slow advance of the desktop computer to the center of the household, as users are able to concurrently load up their computers with a myriad of multimedia, entertainment, communication and home-monitoring software. On the work front, we shall see information workers, who are currently drowning in a torrent of information and data, finally equipped with the appropriate tools to establish the regime of control and analysis required to informate and extract the knowledge that has been elusive since the early stages of the developing knowledge economy. Extreme multitasking will also open up the space for virtualized workstations, i.e., assigning cores from a central server to each workstation, allowing for significant cost savings from both a hardware and a systems-maintenance perspective.

As indicated above, the many-core future has much to offer. It will also open up and bring to the mass market some key software fields that, due to their computational requirements, have had limited exposure. For example, the potential of AI, machine learning, data mining and statistical decision-making has only been touched upon under the guise of single-core technology. The fields of opportunity are abundant and ripe. The shift to parallelism will allow a harvest that, when we finally get to the other side, we will realize was our destiny.

It will take some fundamental change in perspective to realize this potential. Perhaps what is at the heart of the software community's concerns is a much-needed rethinking of the relationship between the hardware and software industries. The fates of the two are no longer independent; they can no longer operate in their relatively separate states. This is ultimately the reason we should be celebrating the death of single-core technology: the fates of both industries are now tied together, more than ever, in a bond that will be the key to their mutual growth and success.

The days when the hardware industry simply designed and produced with the goal of faster and better are over. The equation for successful innovation in this new parallel future has been upped in complexity. With this, I would conclude that rather than focusing on the short-sighted preoccupation with hard-core task parallelism that seems to have arisen recently, a rethinking of the very nature of the relationship between the hardware and software industries is required. One may immediately conjure up visions of a “Wintel”-type argument in the making. However, this rethinking targets something far deeper than a vendor relationship: the aim is to establish a more organic relationship between these two institutions. It may seem a quite ethereal objective, but given the dimensions of the responses to parallelism so far, it leaves one wondering how these two trajectories could have advanced this far without more of a constructive and shared vision.

—–

About the Author

For over 14 years, Paul Tulloch served as a senior economist and data analyst at Canada's national statistical agency, Statistics Canada, where he built a number of survey and administrative data-processing systems. He has also worked from time to time as an independent consultant specializing in statistical analysis, data mining and machine learning. For several years he has focused, through both his work and formal study at the graduate level, on studying, researching and quantifying innovation, work and the emerging knowledge-based economy.
