Intel Touts Manycore Coprocessor at Supercomputing Conference

By Michael Feldman

June 20, 2011

Today at the International Supercomputing Conference (ISC) in Hamburg, Germany, Intel outlined the progress it has made over the last year toward bringing its Many Integrated Core (MIC) coprocessor platform to market. MIC is Intel’s answer to general-purpose GPU computing, and like the GPU makers, Intel believes it can parlay its manycore design into future exascale systems.

Recycling the design from the aborted Larrabee graphics processor effort, Intel recast MIC as a high-performance coprocessor for HPC. This product redirection was unveiled in May 2010 during last year’s ISC event. Since then, Intel has been passing out MIC software development platforms (SDPs) to selected users in the HPC community.

An SDP is basically a Knights Ferry coprocessor card (the MIC prototype) with up to two GB of GDDR5 memory. The card is hooked up, via PCIe, to a host system with one or more Xeon CPUs.

In a press briefing on June 14, Anthony Neal-Graves, Intel VP and General Manager of Workstations and MIC Computing, reported that at this time last year, they had 10 users running code on the Knights Ferry platform. By the end of this month, they’ll have about 50 such users, with the goal of hitting 100 by the end of 2011. According to Neal-Graves, everything was on track for the launch of the first commercial MIC product, known as Knights Corner.

Knights Corner, he said, would arrive in 2012 using Intel’s newly hatched tri-gate 22nm process node. With perhaps an indirect reference to NVIDIA’s and AMD’s GPU computing prowess, Neal-Graves noted that they’ll be able to use their 22nm technology to deliver cheaper, faster and more power-efficient silicon than their competition, adding, “That’s really going to bring the performance to the table that we really need for these types of solutions.”

Performance-wise, MIC has to be able to hit a rather fast-moving target thanks to NVIDIA and AMD upping the FLOPS count for GPUs over the last few years. There are not a lot of performance metrics available for the Knights Ferry prototype, but Intel does claim a one teraflop value for the SGEMM benchmark (measuring a simple single precision matrix multiply). An equivalent value for the latest NVIDIA Tesla part, the M2090, would probably be in the neighborhood of 800 to 900 gigaflops, and perhaps twice that for the FireStream 9370.

Since Knights Ferry is a 32-core processor (on 45nm technology), the 50-plus-core Knights Corner commercial product coming out next year should easily double the performance numbers of the prototype. But 2012 will also see the introduction of NVIDIA’s “Kepler” GPU, an architecture that aims to triple the performance of the current generation Fermi parts. Also, since Intel has not released any performance numbers for double precision floating point code, it remains to be seen how MIC will perform in this important realm.

Regardless of how the FLOPS shake out, Intel is claiming their biggest advantage will be on the software side, since they are promising MIC support under the chipmaker’s existing x86 developer toolset. Specifically, the company is inserting MIC support in their C and Fortran compilers, debuggers, libraries, and even their more exotic offerings, like Cilk Plus and Threading Building Blocks. And since MIC is fundamentally an x86 manycore processor (with 512-bit wide vector units), even the low-level code structures are similar. The idea is to provide a common programming environment for the x86 developer, or as Neal-Graves put it: “If you can program a Xeon, you can program a MIC processor.”

For simple pieces of code, like the aforementioned SGEMM function, the 18 lines of code that performed the matrix math were identical for the Xeon and Knights Ferry versions. In this case, the Intel compiler and Math Kernel Library (MKL) performed the heavy lifting to execute the Xeon- or MIC-specific code as appropriate.
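
To make that concrete, the snippet below is a minimal sketch (not the 18-line benchmark code itself) of handing a single precision matrix multiply to MKL’s CBLAS interface; the matrix size and fill values are illustrative. Under Intel’s model, the same source would be compiled for the Xeon host or for MIC, with the compiler and library selecting the appropriate code path.

    /* Minimal SGEMM sketch using MKL's CBLAS interface.
     * Build against MKL (e.g., with the Intel compiler and -mkl). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mkl_cblas.h>

    int main(void)
    {
        const int n = 1024;                       /* square matrices, arbitrary size */
        float *A = malloc(sizeof(float) * n * n);
        float *B = malloc(sizeof(float) * n * n);
        float *C = malloc(sizeof(float) * n * n);

        for (int i = 0; i < n * n; i++) {
            A[i] = 1.0f;
            B[i] = 2.0f;
            C[i] = 0.0f;
        }

        /* C = 1.0 * A * B + 0.0 * C, single precision */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);

        printf("C[0] = %f\n", C[0]);              /* expect 2.0 * n = 2048 */

        free(A); free(B); free(C);
        return 0;
    }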

That shouldn’t lull developers into thinking they can recompile an entire application for MIC. In most cases, they are going to have to modify the source to parallelize their code for the coprocessor. If the existing code is already instrumented with OpenMP directives, developers should have a leg up. Intel has implemented OpenMP support for MIC, along with some directive extensions to deal with the coprocessor setup. In general though, the developer can apply the same OpenMP task parallelization model they used for Xeon to MIC.
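
As a rough illustration of that model, the sketch below applies a standard OpenMP parallel loop to a simple SAXPY-style kernel. The "#pragma offload target(mic)" line stands in for the coprocessor-setup extensions Intel described; the exact directive syntax was still in alpha at the time, so treat it as illustrative rather than definitive. The OpenMP portion is the same code one would write for a Xeon.

    #include <stdio.h>

    #define N 1000000

    static float x[N], y[N];

    int main(void)
    {
        const float a = 2.5f;

        for (int i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 1.0f;
        }

        /* Illustrative offload directive: run the annotated loop on the MIC
           card. A compiler without MIC support ignores the unknown pragma,
           and the loop simply runs (in parallel) on the host. */
        #pragma offload target(mic)
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];            /* SAXPY-style kernel */

        printf("y[10] = %f\n", y[10]);         /* expect 2.5 * 10 + 1 = 26 */
        return 0;
    }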

In fact, the Innovative Systems Lab (ISL) at the National Center for Supercomputing Applications (NCSA) has ported a couple of science codes to Knights Ferry — one a benchmark code, the other a full astronomy application. The benchmark code was used to get familiar with the software development process, while the astronomy code served as a proof-of-concept for a full application port.

According to Mike Showerman, the Technical Program Manager at ISL, the application code was already written with accelerators in mind, so the initial port was relatively straightforward. Much of the effort (which is still ongoing) involves tuning the code to optimize MIC vectorization. The current Intel compiler performs some auto-vectorization for MIC, but support for the coprocessor is not fully baked yet. In fact, most of the components of the MIC software stack are in the “alpha” stage at this point.
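
For a flavor of what that tuning targets, the sketch below shows the kind of loop an auto-vectorizer handles well: unit-stride access, restrict-qualified pointers so the compiler can rule out aliasing, and buffers aligned to the 64-byte width of MIC’s 512-bit vector registers. The function name and the posix_memalign allocation are illustrative choices, not taken from the NCSA codes.

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    /* Unit-stride, alias-free loop: a good candidate for auto-vectorization. */
    void scale_add(float *restrict out, const float *restrict in,
                   float scale, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = scale * in[i] + out[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *in = NULL, *out = NULL;

        /* 64-byte alignment matches the width of a 512-bit vector register. */
        if (posix_memalign((void **)&in,  64, sizeof(float) * n) ||
            posix_memalign((void **)&out, 64, sizeof(float) * n))
            return 1;

        for (int i = 0; i < n; i++) { in[i] = (float)i; out[i] = 1.0f; }

        scale_add(out, in, 3.0f, n);
        printf("out[2] = %f\n", out[2]);       /* expect 3*2 + 1 = 7 */

        free(in);
        free(out);
        return 0;
    }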

Other demonstrations of MIC-ported applications, which will be on display at ISC this week, include an SMMP protein folding application from Forschungszentrum Juelich; a molecular dynamics code from KISTI (Korea Institute of Science & Technology Information); a TifaMMy matrix multiplication code from LRZ (Leibniz Supercomputing Center); and a core scaling benchmark from CERN.

Besides priming the pump for future MIC customers, Intel is also lining up system vendors. At ISC, Knights Ferry systems will be showcased by SGI, IBM, HP, Dell, Colfax, and Supermicro. That’s quite a bit of vendor enthusiasm, considering this is just prototype hardware, and it reflects Intel’s pull in the industry.

Besides confirming that the first MIC product would indeed be on 22nm technology, the press briefing last week gave no new details on Knights Corner. But it’s reasonable to speculate that the 2012 product will support PCIe 3.0, since the new PCI interface should be shipping with most new servers by next year (not to mention that the Sandy Bridge Xeons are rumored to incorporate that technology on-chip). Also, no mention was made of ECC memory support, but given that ECC is a requirement for serious HPC, and that NVIDIA’s Fermi Tesla GPUs already support it, it’s almost inconceivable that MIC would be launched without it.

As far as when the actual product would be released in 2012, that was left open. However, since Intel has made its two major MIC announcements at ISC, it wouldn’t be surprising if the company used next year’s conference to launch Knights Corner.

Beyond that first product, Intel has provided no roadmap. A logical next step would be an integrated Xeon-MIC processor, a la AMD’s Fusion APUs and NVIDIA’s ‘Project Denver’ chips, but Intel has been tight-lipped about any such architecture, at least publicly. But given the performance and software friendliness of a unified-memory, heterogeneous processor, Intel has got to be thinking about it.

An integrated Xeon-MIC chip could certainly provide a viable platform for exascale supercomputers, and there is no doubt that Intel wants to be a player in this space. During the press briefing, Neal-Graves repeatedly talked about MIC and exascale in the same breath. The chipmaker’s interest in exascale computing is nothing new, but linking it to a particular architecture certainly is.

“We will be investing in the technology and software capabilities to really bring exascale to reality,” said Neal-Graves. “We’re extremely committed to that and we’re going to make that happen.”
