Performance, Power and the Platform

By Nicole Hemsoth

February 17, 2006

Intel Fellow and director of microprocessor research Shekhar Borkar recently sat down to discuss how Intel researchers plan to achieve new levels of microprocessor performance and power efficiency. In this article, Borkar explains how Intel researchers expect to achieve a 10x improvement in MIPS (millions of instructions per second) per watt, and why Intel's platform approach is vital to that effort.

What is Intel's direction in microprocessor architecture over the next five to ten years?

We are continuing toward the vision of multi-core processors and platforms that we articulated a few years ago — what we called the “right-hand turn.” We started with multithreading that utilizes today's single-core hardware more efficiently. Now we are moving on to architectures where we use multiple processor cores on the die to provide higher and higher performance, as opposed to past trends of increasingly larger monolithic cores.

What makes Intel's platform approach different from other multiprocessor architectures, in terms of both processors and broader system-level technologies?

In our multi-core platforms, we have integrated a whole set of technologies to give users the full experience they require. Some of these are what we call the "*Ts": a set of new technologies aimed at providing greater value and functionality in the platform. Our vision is that every technology in the platform should work in harmony to deliver the final experience: a platform that is powerful, versatile, secure and reliable, affordable to own, and simple to manage.

Now a performance question: at the Fall '05 Intel Developer Forum (IDF), Paul Otellini mentioned a 10x reduction in power and a 10x improvement in performance. Is this vision achievable, and if so, how do you intend to accomplish it?

It is absolutely achievable. In fact, Paul is articulating goals we identified five years ago, and we have already shown through our research how we will attain these goals. Now Paul is challenging us to make it a reality.

Here are the ingredients we need to get there. The first is to move away from using frequency alone to deliver performance, as we are already doing with Intel Centrino mobile technology. With Centrino, we don't talk frequency; we talk performance. We deliver performance through platform integration, without increasing power.

The second ingredient is multiplicity. Today we have multithreading and chip-level multiprocessing. Future processors will have multiple cores on the die. With the multi-core approach, performance and power both scale roughly linearly with the number of cores. A large monolithic processor, by contrast, pays a quadratic power cost for each increment of performance, so multiple small cores have the potential to deliver near-linear performance gains at only a linear cost in power.
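
A back-of-the-envelope way to see the trade-off (my sketch, not from the interview, assuming as stated above that a single core's power grows roughly as the square of its performance):

    % Single large core: doubling performance roughly quadruples power.
    \[ P_{\text{single}} \propto \textit{perf}^{\,2} \;\Rightarrow\; 2\times \textit{perf} \;\Rightarrow\; \approx 4\times P \]
    % N small cores, each of performance p and power w: both totals linear in N.
    \[ \textit{perf}_{\text{total}} \leq N\,p, \qquad P_{\text{total}} = N\,w \]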

But does this mean that power will increase with multiple cores? No. When we apply multiple processors to a problem, we can use that quadratic relationship between performance and power to our advantage. For example, a 15 percent drop in per-core performance can give us a 50 percent decrease in power usage. So in the future, we can double the number of processor cores on a die using processors that each have 15 percent lower performance than a larger monolithic processor, but we still greatly increase the overall processor performance, and we have cut power usage by 50 percent. So yes, I can get more performance while reducing power.
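
As a rough sketch of where numbers in this range come from (my illustration; the interview does not spell out the model), take the common assumption that dynamic power scales as voltage squared times frequency, with frequency roughly proportional to voltage:

    \[ P \propto V^{2} f, \quad f \propto V \;\Rightarrow\; P \propto V^{3} \]
    \[ \text{scale } V \text{ (and performance) to } 0.85: \quad P' \approx 0.85^{3}\,P \approx 0.61\,P \]

That is roughly a 40 percent cut in dynamic power per core for a 15 percent drop in performance, approaching the quoted 50 percent once reduced leakage is counted. Two such cores then deliver up to 2 × 0.85 = 1.7 times the original performance, while the pair's total power stays close to that of the original single core.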

The third ingredient of the power/performance solution is what we call fixed function hardware. Using specialized hardware for specialized functions, you can get a lot more performance for the power. Let me give you an example. If you want to run some of the repetitive or enabling tasks like video processing, speech recognition, or network processing, what do we do today? We use general-purpose processors. These general-purpose processors are flexible. They can do anything, but at the expense of power. But in the future, because of Moore's Law, we can get a lot more transistors to design with. Why don't we take a budget of, say, 50 billion transistors and dedicate a few billion here and there? I can use those for fixed-function hardware. So there will be a block that does network processing here for you, here's a block that does DVD for you, and here's a block that does speech recognition. These are all fixed-function processors. All they do is the task assigned to them. They are not flexible, but they have the potential to give you very high performance at low power. And that is where active research is focused today.
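
As a minimal sketch of that dispatch idea (entirely hypothetical names, not an Intel API): send a task to a dedicated fixed-function block when one exists, and fall back to the flexible but more power-hungry general-purpose path otherwise.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical sketch, not an Intel API: send a task to a dedicated
     * fixed-function block when one exists; otherwise fall back to the
     * flexible, but more power-hungry, general-purpose core. */

    enum task { TASK_VIDEO, TASK_SPEECH, TASK_NETWORK, TASK_OTHER };

    /* Stub: pretend the die has video and network blocks. */
    static bool accel_present(enum task t)
    {
        return t == TASK_VIDEO || t == TASK_NETWORK;
    }

    static void run_on_accel(enum task t) { printf("task %d -> fixed-function block\n", (int)t); }
    static void run_on_cpu(enum task t)   { printf("task %d -> general-purpose core\n", (int)t); }

    static void dispatch(enum task t)
    {
        if (accel_present(t))
            run_on_accel(t);   /* high performance per watt, but inflexible */
        else
            run_on_cpu(t);     /* flexible, at the expense of power */
    }

    int main(void)
    {
        dispatch(TASK_VIDEO);  /* offloaded */
        dispatch(TASK_OTHER);  /* software fallback */
        return 0;
    }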

So in the future, by integrating these three ingredients, a 10x increase in MIPS/watt is definitely achievable.

Do you anticipate any technology roadblocks in achieving this vision of improving performance/watt by 10x?

There are always challenges, what you call roadblocks. If there aren't any roadblocks, we don't have a job. So we have to do research to either circumvent them or move them. One roadblock that we saw 10 years ago was power, and you can see what we have done to solve the issues of power. In the future, one major challenge that we see is a reliability issue related to the circuitry.

Now by reliability, I don't mean the kind of reliability that you think about in the everyday world. Think about a transistor. When you build a transistor today, it functions for the normal lifetime of a transistor, which is about 7 to 10 years. In the future, as transistors become smaller and smaller, some percentage of individual transistors will stop performing to specification, or will not stay within spec for that normal lifetime. A certain number will become what we call "aging" transistors. This is a hot research topic for us and for other researchers: how to design circuit architectures that circumvent the aging of the transistor.

Another example is what is called soft errors. Cosmic-ray neutrons from outer space striking circuitry do not create any permanent damage, but they can corrupt data; hence they are called soft errors, as opposed to hard errors. As transistors become smaller and transistor density increases, circuitry will be more prone to soft errors. We see these errors even today, but their magnitude could be much worse in the future.
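
The interview stops at the problem, but one classic circuit-level defense against this kind of data corruption is redundancy such as parity or ECC. A minimal parity sketch (my illustration, in C):

    #include <stdint.h>
    #include <stdio.h>

    /* My illustration, not from the article: a stored parity bit lets
     * hardware notice a single bit flipped by a neutron strike. */

    static uint32_t parity32(uint32_t w)   /* 1 if an odd number of bits are set */
    {
        w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
        return w & 1u;
    }

    int main(void)
    {
        uint32_t data   = 0x00C0FFEEu;
        uint32_t stored = parity32(data);  /* parity kept alongside the data */

        data ^= 1u << 7;                   /* simulate a soft error: one bit flips */

        if (parity32(data) != stored)
            puts("single-bit soft error detected");
        return 0;
    }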

These are the sorts of challenges for which we are forging ahead with full confidence and are finding solutions through research.

Intel is moving toward multi-core architectures with two, four, and eight-plus cores. How will these new multi-core architectures help manage power, and at the same time drastically improve performance?

In fact, multi-core architecture helps tremendously in managing power. If we look at processors today, they are active only for short periods of time. When you press a key, the processor becomes active, consumes 4-6 watts of power for just a few milliseconds while you are pressing the keys, and then goes to sleep. Today, we call this fine-grain power management. In the future, with multi-core, we can do even finer levels of power management. Given something like eight cores, if you need only one core to run your application, you activate only that one core, the right core. As a result, we save considerable power and reduce costs.
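
A conceptual sketch of what "activate only the right core" could look like (hypothetical interface, my illustration; real power gating is done by hardware and firmware):

    #include <stdint.h>
    #include <stdio.h>

    /* Conceptual sketch with a hypothetical interface: wake only the
     * core a task needs, and gate it back off when the task is done. */

    #define NUM_CORES 8
    static uint8_t active_mask;   /* bit i set => core i is powered */

    static void wake_core(int i)  { active_mask |=  (uint8_t)(1u << i); }
    static void sleep_core(int i) { active_mask &= (uint8_t)~(1u << i); }

    static void run_task_on(int core)
    {
        if (core < 0 || core >= NUM_CORES)
            return;
        wake_core(core);          /* power up just this core */
        printf("task on core %d, mask=0x%02x\n", core, active_mask);
        sleep_core(core);         /* gate it off again when the task is done */
    }

    int main(void)
    {
        run_task_on(2);           /* one light task needs only one core */
        printf("idle, mask=0x%02x\n", active_mask);
        return 0;
    }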

How will the next generation of multi-core help ensure power efficiency and improve Intel microarchitecture capabilities?

There are a few things it can do. One is more fine-grained power management, as I mentioned earlier. The second is a performance boost: whenever an application needs maximum performance, for a very short time you have the full performance of all the cores. It's just like the turbo boost in your car. You don't run the turbo all the time when you are cruising on the highway; you use it only when you want to pass another car. It's the same with computing. If your application needs a turbo boost, all the cores awaken for a millisecond, give you the performance, and then shut off, leaving only one core active. You get the best of both worlds: high performance and low power.
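
Continuing the same hypothetical sketch, a burst would wake every core briefly and then gate all but one back off:

    #include <stdint.h>
    #include <stdio.h>

    /* Same hypothetical sketch, extended: wake every core for a brief
     * burst of peak demand, then gate all but one back off. */

    static uint8_t active_mask;   /* bit i set => core i is powered */

    int main(void)
    {
        active_mask = 0xFF;       /* burst: all eight cores awake ... */
        printf("turbo:  mask=0x%02x\n", active_mask);

        /* ... peak work happens here for about a millisecond ... */

        active_mask = 0x01;       /* cruise: only core 0 stays on */
        printf("cruise: mask=0x%02x\n", active_mask);
        return 0;
    }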

With multi-cores, when you talk about fine-grain control, that assumes those cores are smaller and consume less power than today's monolithic cores?

Not necessarily. That is an active research topic today, because small is relative. How small is small? Is it Centrino small? Is it Pentium small? Is it Pentium II small, or is it Pentium 4 small? That is the open question: what core size do I need to get the highest performance and the lowest power within the power envelope?
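
One rule of thumb often used to frame exactly this question (my addition; it is not named in the interview) is Pollack's rule, which says a single core's performance grows only as the square root of the resources spent on it:

    \[ \textit{perf} \propto \sqrt{\textit{area}} \;\Rightarrow\; \text{two half-size cores: } 2 \cdot \tfrac{1}{\sqrt{2}} \approx 1.4\times \text{ one full-size core} \]

Under that rule, many smaller cores win on throughput per watt, but only if the workload has enough parallelism to keep them busy.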

How does Intel's “platformization” approach figure into the power/performance equation?

All the questions you've asked show that devising a solution just for a microarchitecture, a circuit, or software in isolation is not enough. What users need is a platform. To create a platform, you start with the implementation of the microarchitecture in circuits. Those circuit designs become hardware products through manufacturing and process technology. Then this hardware becomes usable through software technology.

We can't design hardware or software in a vacuum, as we did in the past. No more "I can make a really fast, super C compiler that will blow the socks off anyone." No, no, no. What I need to do now is build our C compiler with a specific platform in mind. I'm going to write my kernels today to support my hardware in 2010. Then we get a usable platform that has fine-grain power management designed in: microarchitecture that supports it, circuits that implement it, and software that utilizes it.

---

Shekhar Y. Borkar is an Intel Fellow in the Corporate Technology Group and director of microprocessor research. He is responsible for directing research in low-power circuits and high-speed signaling for Intel's future microprocessors. Borkar joined Intel in 1981 and worked on the design of the 8051 family of microcontrollers, the iWarp multicomputer, and high-speed signaling technology for Intel supercomputers. He is an adjunct faculty member of the Oregon Graduate Institute, has published 10 articles, and holds numerous patents. Borkar was born in Mumbai, India. He received a master's degree in electrical engineering from the University of Notre Dame in 1981, and master's and bachelor's degrees in physics from the University of Bombay in 1979. To see a list of his patents and publications, visit the Intel Web site.

Copyright (c) Intel Corporation 2006. All rights reserved. Reproduced by HPCwire with permission. This article was originally published in Intel's Technology@Intel Magazine.
