Intel Parts the Curtains on Xeon Phi… A Little Bit

By Michael Feldman

August 28, 2012

As Intel’s Xeon Phi processor family gets ready to debut later this year, the chipmaker continues to reveal details of its first manycore offering. Although the company isn’t yet ready to talk speeds and feeds, this week it divulged some of the design decisions that it believes will make the Xeon Phi coprocessor shine as an HPC accelerator. The new revelations were presented on Tuesday at the IEEE-sponsored Hot Chips conference in Cupertino, California.

The Hot Chips presentation was manned by George Chrysos, chief architect of “Knights Corner,” the code name for Intel’s first Xeon Phi product. This fall, the new chip is scheduled to debut in supercomputers, most notably the 10-petaflop “Stampede” system at the Texas Advanced Computing Center. Although the Knights Corner chips will act as coprocessors to the Xeon CPUs in that system, they will represent 80 percent of the total flops.

HPCwire talked with Chrysos ahead of Hot Chips to get a preview of what he’d be covering. In essence, Intel is divulging some of the architectural details of the core and interconnect design, but is not releasing core counts, processor frequency, or memory bandwidth. That information will be forthcoming at the official product launch, which is more than likely to occur during the Supercomputing Conference (SC12) in November.

The major design goal of the Knights Corner microarchitecture was to pack a lot of number-crunching capability into a very power-efficient package. They did this by gluing a big vector processor onto a bare-bones x86 core.  In fact, according to Chrysos, only two percent of the Knights Corner die is dedicated to decoding x86 instructions.  The majority of the silicon real estate is devoted to the L1 and L2 caches, the memory I/O, and of course, the vector unit.

With regard to the latter, the 512-bit vector unit is the largest ever developed by Intel. Each one can dispatch 8 double precision or 16 single precision SIMD operations (integer or floating point) per clock cycle.  That’s twice as many as can be delivered by the latest x86 CPUs — the Intel Xeon Sandy Bridge and AMD Bulldozer processors.  And since there will be 50-plus cores on Knights Corner, we’re talking over 400 double precision flops per cycle. Even on a 2 GHz processor, that works out to 800 gigaflops. But since Intel is using its latest 22nm technology process, you know they’re going to be much more aggressive than that.
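For those keeping score at home, that arithmetic is simple enough to sketch in a few lines of C. Note that the core count and clock frequency below are illustrative assumptions drawn from the figures above, not confirmed Intel specs.

```c
#include <stdio.h>

/* Back-of-the-envelope peak flops for Knights Corner, using the
   article's figures. Core count and clock are assumptions, not
   Intel-confirmed numbers. */
int main(void)
{
    int    cores        = 50;   /* "50-plus" cores (assumed floor)     */
    int    dp_per_clock = 8;    /* 512-bit vectors: 8 DP ops per cycle */
    double clock_ghz    = 2.0;  /* hypothetical frequency              */

    printf("Peak: %.0f DP gigaflops\n", cores * dp_per_clock * clock_ghz);
    return 0;
}
```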

It’s more than just extra-wide vectors, though. Chrysos says the design also incorporates other features optimized for HPC-type workloads. In particular, they added a special math accelerator called the Extended Math Unit (EMU), which does polynomial approximations of transcendental functions like reciprocals, square roots, and exponentials. The idea is to speed up execution of these functions in hardware. According to Chrysos, it’s the first EMU for an x86-based processor.
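To give a flavor of what such a unit does, here is a minimal C sketch of the polynomial-plus-refinement approach: a cheap degree-one estimate of the reciprocal, improved by a few Newton steps. The coefficients and iteration count are our own choices for illustration, not Intel’s.

```c
#include <stdio.h>

/* Illustrative sketch of what a hardware math unit like the EMU does:
   start from a cheap polynomial estimate of a function (here 1/a),
   then refine it. Coefficients and iteration count are arbitrary
   choices for this example. */
static double approx_recip(double a)
{
    /* Assume the input was range-reduced to [1, 2), as hardware does
       by stripping the floating-point exponent. */
    double r = 1.5 - 0.5 * a;      /* degree-1 polynomial estimate */
    for (int i = 0; i < 3; i++)
        r = r * (2.0 - a * r);     /* Newton step: error squares   */
    return r;
}

int main(void)
{
    printf("approx 1/1.7 = %.9f (exact %.9f)\n",
           approx_recip(1.7), 1.0 / 1.7);
    return 0;
}
```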

The Knights Corner vector unit also includes a scatter-gather capability, another first for the x86 line. Scatter-gather, sometimes referred to as vector addressing or vector I/O, is a way to optimize storing and fetching of data at non-contiguous memory addresses. It’s especially useful for processing sparse matrices, which are fundamental to many HPC applications.
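The canonical example is sparse matrix-vector multiply in compressed sparse row (CSR) form, sketched below in plain C. The indexed load x[col[j]] is exactly the pattern a hardware gather accelerates, fetching a full SIMD-width of scattered elements at once instead of one scalar load apiece.

```c
#include <stddef.h>

/* Sparse matrix-vector multiply (y = A*x) with A in CSR form: a
   minimal sketch of why gather support matters. The load x[col[j]]
   touches non-contiguous addresses. */
void spmv_csr(size_t nrows, const size_t *rowptr, const size_t *col,
              const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (size_t j = rowptr[i]; j < rowptr[i + 1]; j++)
            sum += val[j] * x[col[j]];   /* the gather pattern */
        y[i] = sum;
    }
}
```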

As for memory bandwidth, Chrysos didn’t volunteer much, other than to say the memory subsystem on Knights Corner will be “very competitive.” Multiple memory controllers will be sprinkled among the cores in such a way as to optimize speed and latency.

Which brings us to the Knights Corner cache setup. Like traditional CPUs, the new chip will incorporate cache coherency in hardware, but in this case extended to handle a manycore environment. On Knights Corner, the L2 cache is 512 KB per core, twice the size of those on the Sandy Bridge Xeons. On top of that, they’ve added a translation lookaside buffer (TLB) to speed address translation, tag directories (TDs) to snoop across all of the cores’ L2 caches, and a data cache (Dcache) capability to simultaneously load and store 512 bits per clock cycle. Finally, Intel included a prefetch capability for the L2 to boost performance when data is streaming from memory.
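A hardware prefetcher works automatically, but its effect can be mimicked in software, which makes it easy to illustrate. The sketch below uses the GCC/Clang __builtin_prefetch intrinsic; the 64-element lookahead distance is an arbitrary choice for the example, whereas real hardware tunes this on the fly.

```c
#include <stddef.h>

/* Software analogue of an L2 hardware prefetcher for streaming data:
   request cache lines well ahead of use so later loads hit in cache.
   The lookahead distance is an arbitrary illustrative choice. */
double sum_stream(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64], 0, 3);  /* read, keep cached */
        sum += a[i];
    }
    return sum;
}
```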

All of these capabilities are designed to keep the cores well fed with data and, as much as possible, avoid the much larger amount of time and energy required to access main memory off the chip. According to Chrysos, based on the SPECfp 2006 benchmarks, these cache features in aggregate have increased per-core performance by an average of 80 percent.

For CPU-type architectures, cache coherency is pretty much business as usual. This is quite different from the GPU, which relies less on caches and more on maximizing bandwidth for memory streaming, using lots of cores to hide latency. Although general-purpose GPUs, especially the latest from NVIDIA, have cache hierarchies, they are not globally coherent.

NVIDIA, though, does have a more flexible approach. For example, the on-chip memory on Fermi, and now Kepler, is user configurable and can be split between L1 cache and scratchpad memory. The L2 cache is shared across all the streaming multiprocessors, which are roughly analogous to CPU cores. If coherency is to be maintained on the GPU, it must be done in software.
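That split is exposed through the CUDA runtime. As a rough illustration, the C call below uses the device-wide variant of the cache preference setting to favor L1 over scratchpad memory (a per-kernel variant also exists).

```c
#include <cuda_runtime.h>

/* Ask the GPU to carve its on-chip memory in favor of L1 cache
   rather than scratchpad (shared) memory for subsequent kernels. */
void prefer_l1_cache(void)
{
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
}
```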

Intel believes it has a “fundamental advantage” in its hardware-based cache coherency, since not only does it minimize the more expensive memory I/O, but it is also easier to program. Along those same lines, Intel will continue to promote x86 programmability as a big advantage of Knights Corner compared to the more specialized CUDA- or OpenCL-based approaches of GPUs. All of this is about to be tested on the HPC battlefield later this year. Stay tuned.