Intel Touts Manycore Coprocessor at Supercomputing Conference

By Michael Feldman

June 20, 2011

Today at the International Supercomputing Conference (ISC) in Hamburg, Germany, Intel outlined the progress it has made over the last year toward bringing its Many Integrated Core (MIC) coprocessor platform to market. MIC is Intel’s answer to general-purpose GPU computing, and like the latter technology, Intel believes it can parlay its manycore design into future exascale systems.

Recycling the design from the aborted Larrabee graphics processor effort, Intel recast MIC as a high performance coprocessor for HPC. This product redirection was unveiled in May 2010 during last year’s ISC event. Since then Intel has been passing out MIC software development platforms (SDPs) to selected users in the HPC community.

An SDP is basically a Knights Ferry coprocessor card (the MIC prototype) with up to two GB of GDDR5 memory. The card is hooked up, via PCIe, to a host system with one or more Xeon CPUs.

In a press briefing on June 14, Anthony Neal-Graves, Intel VP and General Manager of Workstations and MIC Computing, reported that at this time last year, they had 10 users running code on the Knights Ferry platform. By the end of this month, they’ll have about 50 such users, with the goal of hitting 100 by the end of 2011. According to Neal-Graves, everything was on track for the launch of the first commercial MIC product, known as Knights Corner.

Knights Corner, he said, would arrive in 2012 using Intel’s newly hatched tri-gate 22nm process node. In what was perhaps an indirect reference to NVIDIA’s and AMD’s GPU computing prowess, Neal-Graves noted that they’ll be able to use their 22nm technology to deliver cheaper, faster and more power-efficient silicon than their competition, adding, “That’s really going to bring the performance to the table that we really need for these types of solutions.”

Performance-wise, MIC has to be able to hit a rather fast-moving target thanks to NVIDIA and AMD upping the FLOPS count for GPUs over the last few years. There are not a lot of performance metrics available for the Knights Ferry prototype, but Intel does claim a one teraflop value for the SGEMM benchmark (measuring a simple single precision matrix multiply). An equivalent value for the latest NVIDIA Tesla part, the M2090, would probably be in the neighborhood of 800 to 900 gigaflops, and perhaps twice that for the FireStream 9370.
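For context, SGEMM throughput is conventionally figured as roughly 2n³ floating point operations for an n-by-n matrix multiply, divided by elapsed wall time. The sketch below shows that back-of-the-envelope arithmetic; the matrix dimension and timing in it are illustrative values, not figures published by Intel or its competitors.

```c
#include <stdio.h>

/* Back-of-the-envelope SGEMM rate: an n x n single precision matrix
   multiply performs roughly 2*n^3 floating point operations (one
   multiply and one add per inner-loop step). The dimension and wall
   time below are illustrative, not measured values. */
int main(void)
{
    double n = 8192.0;       /* hypothetical matrix dimension   */
    double seconds = 1.1;    /* hypothetical measured wall time */
    double flops = 2.0 * n * n * n;

    printf("%.0f gigaflops\n", flops / seconds / 1e9);
    return 0;
}
```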

Since Knights Ferry is a 32-core processor (on 45nm technology), the 50-plus-core Knights Corner commercial product coming out next year should easily double the performance numbers of the prototype. But 2012 will also see the introduction of NVIDIA’s “Kepler” GPU, an architecture that aims to triple the performance of the current generation Fermi parts. Also, since Intel has not released any performance numbers for double precision floating point code, it remains to be seen how MIC will perform in this important realm.

Regardless of how the FLOPS shake out, Intel is claiming their biggest advantage will be on the software side, since they are promising MIC support under the chipmaker’s existing x86 developer toolset. Specifically, the company is inserting MIC support into their C and Fortran compilers, debuggers, libraries, and even their more exotic offerings, like Cilk Plus and Threading Building Blocks. And since MIC is fundamentally an x86 manycore processor (with 512-bit wide vector units), even the low-level code structures are similar. The idea is to provide a common programming environment for the x86 developer, or as Neal-Graves put it: “If you can program a Xeon, you can program a MIC processor.”

For simple pieces of code, like the aforementioned SGEMM function, the 18 lines of code that performed the matrix math were identical for the Xeon and Knights Ferry versions. In this case, the Intel compiler and Math Kernel Library (MKL) performed the heavy lifting to execute the Xeon- or MIC-specific code as appropriate.
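Intel hasn’t published that demo source, but a minimal MKL-based SGEMM call, the sort of code that could in principle be compiled unchanged for either the Xeon host or the coprocessor, looks roughly like the sketch below. The matrix size is arbitrary and the MIC-specific build options are omitted.

```c
#include <stdlib.h>
#include <mkl.h>   /* Intel Math Kernel Library: provides cblas_sgemm */

/* Minimal single precision matrix multiply, C = A * B, through MKL.
   The point of Intel's demo is that source like this needs no changes
   between the Xeon and MIC builds; the coprocessor-specific compiler
   options are not shown here. */
int main(void)
{
    int n = 1024;                                  /* arbitrary size */
    float *A = malloc(sizeof(float) * n * n);
    float *B = malloc(sizeof(float) * n * n);
    float *C = malloc(sizeof(float) * n * n);

    for (int i = 0; i < n * n; i++) {
        A[i] = 1.0f;
        B[i] = 2.0f;
        C[i] = 0.0f;
    }

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);

    free(A); free(B); free(C);
    return 0;
}
```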

That shouldn’t lull developers into thinking they can recompile an entire application for MIC. In most cases, they are going to have to modify the source to parallelize their code for the coprocessor. If the existing code is already instrumented with OpenMP directives, developers should have a leg up. Intel has implemented OpenMP support for MIC, along with some directive extensions to deal with the coprocessor setup. In general though, the developer can apply the same OpenMP task parallelization model they used for Xeon to MIC.
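To illustrate, a standard OpenMP worksharing loop like the one below is the kind of code that carries over; the extra coprocessor-setup directives Intel mentioned are still alpha-stage and their syntax may change, so only stock OpenMP is shown here.

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

/* A plain OpenMP worksharing loop (SAXPY-style). The same parallel
   model used on a Xeon host is meant to apply on MIC; the directive
   extensions Intel is adding for coprocessor setup and data transfer
   are omitted here because they were still in flux at the time. */
int main(void)
{
    static float x[N], y[N];
    float a = 2.5f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 0.0f; }

    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("ran with up to %d threads\n", omp_get_max_threads());
    return 0;
}
```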

In fact, the Innovative Systems Lab (ISL) at the National Center for Supercomputing Applications (NCSA) has ported a couple of science codes to Knights Ferry — one a benchmark code, the other a full astronomy application. The benchmark code was used to get familiar with the software development process, while the astronomy code was a proof-of-concept test for a full application port.

According to Mike Showerman, the Technical Program Manager at ISL, the application code was already written with accelerators in mind, so the initial port was relatively straightforward. Much of the effort (which is still ongoing) involves tuning the code to optimize MIC vectorization. The current Intel compiler performs some auto-vectorization for MIC, but support for the coprocessor is not fully baked yet. In fact, most of the components of the MIC software stack are in the “alpha” stage at this point.
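In practice, that kind of tuning usually means writing inner loops the auto-vectorizer can handle: unit-stride memory access and no pointer aliasing, as in the sketch below. The function and its names are illustrative only, not code from the NCSA port.

```c
/* Vectorization-friendly inner loop: contiguous, unit-stride access and
   restrict-qualified pointers give an auto-vectorizer (including the
   still-alpha MIC back end described above) its best shot at mapping
   the loop onto the coprocessor's 512-bit vector units. */
void scale_add(float *restrict out, const float *restrict in,
               float alpha, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = alpha * in[i] + out[i];
}
```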

Other demonstrations of MIC-ported applications that will be on display at ISC this week include an SMMP protein folding application by Forschungszentrum Juelich; a molecular dynamics code at KISTI (Korea Institute of Science & Technology Information); a TifaMMy matrix multiplication code at LRZ (Leibniz Supercomputing Center); and a core scaling benchmark from CERN.

Besides priming the pump for future MIC customers, Intel is also lining up system vendors. At ISC, Knights Ferry systems will be showcased by SGI, IBM, HP, Dell, Colfax, and Supermicro. That’s quite a bit of vendor enthusiasm, considering this is just prototype hardware, and it reflects Intel’s pull in the industry.

Besides confirming that the first MIC product would indeed be on 22nm technology, the press briefing last week gave no new details on Knights Corner. But it’s reasonable to speculate that the 2012 product will support PCIe 3.0, since the new PCI interface should be shipping with most new servers by next year (not to mention that the Sandy Bridge Xeons are rumored to incorporate that technology on-chip). Also, no mention was made of ECC memory support, but given that ECC is a requirement for serious HPC, and the NVIDIA Fermi Tesla GPUs already support it, it’s almost inconceivable that MIC would be launched without it.

As far as when the actual product would be released in 2012, that was left open. However, since Intel has made its two major MIC announcements at ISC, it wouldn’t be surprising if the company used next year’s conference to launch Knights Corner.

Beyond that first product, Intel has provided no roadmap. A logical next step would be an integrated Xeon-MIC processor, a la AMD’s Fusion APU and NVIDIA’s ‘Project Denver’ chips, but Intel has been tight-lipped about any such architecture, at least publicly. Still, given the performance and software friendliness of a heterogeneous processor with a unified memory space, Intel has got to be thinking about it.

An integrated Xeon-MIC chip could certainly provide a viable platform for exascale supercomputers, and there is no doubt that Intel wants to be a player in this space. During the press briefing, Neal-Graves repeatedly talked about MIC and exascale in the same breath. The chipmaker’s interest in exascale computing is nothing new, but linking it to a particular architecture certainly is.

“We will be investing in the technology and software capabilities to really bring exascale to reality,” said Neal-Graves. “We’re extremely committed to that and we’re going to make that happen.”
