Intel Gets Ready to Push Ct Out of the Lab

By Michael Feldman

April 7, 2009

With the advent of general-purpose GPUs, the Cell BE processor, and the upcoming Larrabee chip from Intel, data parallel computing has become the hot new supermodel in HPC. And even though NVIDIA took the lead in this area with its CUDA language environment, Intel has been busy working on Ct, its own data parallel computing environment for manycore computing. On Wednesday at the Intel Developer Forum in Beijing, Senior Vice President Pat Gelsinger announced that the company’s Ct research project is on its way to becoming a product, with a beta release scheduled for late this year.

Ct (C/C++ for throughput computing) is a high-level software environment that supports data parallelism on current multicore and future manycore architectures. According to James Reinders, whom I spoke with prior to Gelsinger’s announcement, Ct allows scientists and mathematicians to construct algorithms in familiar-looking algebraic notation. Best of all, the programmer does not need to worry about mapping data structures and operations onto cores or vector units; Ct’s high level of abstraction performs those mappings transparently. The technology also provides deterministic execution, which avoids data races and deadlocks.
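
To make the “familiar-looking algebraic notation” claim concrete, here is a minimal sketch in plain C++. Ct’s API was not public at the time of the announcement, so the TVEC type and the addReduce reduction below are illustrative stand-ins, not the shipping interface; the stand-in is backed by sequential loops, whereas Ct’s runtime would map the same expressions onto cores and vector units behind the scenes.

    // A plain-C++ mock-up of the programming style described for Ct.
    // TVEC and addReduce are illustrative stand-ins, not the real Ct
    // API; here they run sequentially, whereas Ct's runtime would map
    // the same expressions onto cores and vector units transparently.
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    template <typename T>
    class TVEC {
    public:
        explicit TVEC(std::vector<T> data) : data_(std::move(data)) {}

        // Element-wise multiply, so user code reads algebraically.
        TVEC operator*(const TVEC& other) const {
            std::vector<T> out(data_.size());
            for (std::size_t i = 0; i < data_.size(); ++i)
                out[i] = data_[i] * other.data_[i];
            return TVEC(std::move(out));
        }

        // Reduction: sum of all elements.
        T addReduce() const {
            T sum = T();
            for (std::size_t i = 0; i < data_.size(); ++i)
                sum += data_[i];
            return sum;
        }

    private:
        std::vector<T> data_;
    };

    int main() {
        TVEC<double> a(std::vector<double>{1.0, 2.0, 3.0});
        TVEC<double> b(std::vector<double>{4.0, 5.0, 6.0});
        // Dot product as one algebraic expression; no explicit
        // threading, core mapping, or vector intrinsics in user code.
        std::cout << (a * b).addReduce() << "\n";  // prints 32
    }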

“The two big challenges in parallel computing are getting it correct and getting it to scale, and Ct directly takes aim at both,” said Reinders.

Unlike CUDA, Brook+, or OpenCL, Ct provides a higher-level approach to data parallel processing, in which vectors may be represented as regular or irregular data collections. This enables the programmer to define sparse matrices, trees, graphs, or sets of key-value associations, as well as the more typical dense matrices. The language is implemented as an extension to C++ using standard template facilities, so legacy code can be expanded to include data parallelism simply by adopting the new Ct data types and operators.
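
The irregular-collection claim is easiest to appreciate against the code it would replace. The following is ordinary C++ with no Ct in it: a sparse matrix-vector product over a compressed sparse row (CSR) layout. The point of the paragraph above is that Ct’s irregular collections would let a programmer state this as a short algebraic expression over a nested vector instead of this explicit index bookkeeping.

    // Plain C++ CSR sparse matrix-vector product: the explicit index
    // bookkeeping that Ct's irregular (nested) collections were meant
    // to absorb into a short algebraic expression.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Computes y = A * x, with A stored in CSR form.
    std::vector<double> spmv(const std::vector<std::size_t>& row_ptr,
                             const std::vector<std::size_t>& col_idx,
                             const std::vector<double>& values,
                             const std::vector<double>& x) {
        std::vector<double> y(row_ptr.size() - 1, 0.0);
        for (std::size_t row = 0; row + 1 < row_ptr.size(); ++row)
            for (std::size_t k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
                y[row] += values[k] * x[col_idx[k]];
        return y;
    }

    int main() {
        // The 2x2 matrix [[10, 0], [0, 20]] in CSR form.
        std::vector<std::size_t> row_ptr = {0, 1, 2};
        std::vector<std::size_t> col_idx = {0, 1};
        std::vector<double> values = {10.0, 20.0};
        std::vector<double> x = {1.0, 2.0};
        std::vector<double> y = spmv(row_ptr, col_idx, values, x);
        std::cout << y[0] << " " << y[1] << "\n";  // prints 10 40
    }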

Intel will be adding Ct to its growing portfolio of parallel development tools, including the upcoming Parallel Studio suite, the company’s C/C++ and Fortran compilers, Math Kernel Library, debugging and analysis tools, and the Intel Cluster Toolkit. Ct will also be interoperable with Threading Building Blocks (TBB) and Intel’s OpenMP implementation so that task-level parallelism can be layered on top of Ct’s data parallelism. “Our vision is that you could have TBB coordinating multiple tasks and those tasks could be coded using Ct,” explained Reinders.
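
That layering can be sketched with real TBB calls. In the snippet below, tbb::parallel_invoke is a genuine TBB primitive; the two kernel bodies are plain C++ stand-ins for what, in Reinders’s vision, would be Ct data-parallel expressions, and their names and the two-task split are assumptions made purely for illustration.

    // Task-level parallelism (real TBB) coordinating two independent
    // kernels that, in the envisioned stack, would be written with
    // Ct's data-parallel types. The kernel bodies are plain C++
    // stand-ins, not Ct code.
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <vector>
    #include <tbb/parallel_invoke.h>

    // Hypothetical Ct-backed kernels; each body would be a Ct
    // algebraic expression mapped onto vector units.
    double sum_kernel(const std::vector<double>& v) {
        return std::accumulate(v.begin(), v.end(), 0.0);
    }
    double scaled_sum_kernel(const std::vector<double>& v, double s) {
        double total = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i) total += s * v[i];
        return total;
    }

    int main() {
        std::vector<double> a(1000, 1.0), b(1000, 2.0);
        double r1 = 0.0, r2 = 0.0;
        // TBB schedules the two tasks across cores; vectorizing each
        // task's interior would be Ct's job in the proposed layering.
        tbb::parallel_invoke(
            [&] { r1 = sum_kernel(a); },
            [&] { r2 = scaled_sum_kernel(b, 0.5); });
        std::cout << r1 << " " << r2 << "\n";  // prints 1000 1000
    }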

Although Ct is intrinsically target-agnostic, it does assume a general-purpose CPU-style architecture with enough vector hardware to make data parallel computing worthwhile. It will not, however, support strictly SIMD architectures such as NVIDIA and AMD GPUs. In practice, this means the first Ct implementation will target x86 multicore chips with Streaming SIMD Extensions (SSE) capability. Conveniently, that includes AMD x86 silicon as well. All of Intel’s current compilers and libraries support AMD processors, and Ct will be no different. Unlike the hardware side of the business, Intel’s software customers expect x86 compatibility across company lines.
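
For a sense of the layer Ct hides, here is what hand-vectorized code looks like with real SSE intrinsics. Under Ct, this mapping of data onto 128-bit registers, four single-precision floats per operation, would be performed transparently by the runtime rather than written by the programmer.

    // Hand-written SSE vector addition using real x86 intrinsics:
    // the per-register mapping Ct promised to perform transparently.
    #include <xmmintrin.h>  // SSE: 128-bit registers, 4 floats each
    #include <cstdio>

    void add_sse(const float* a, const float* b, float* out, int n) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);  // load 4 floats
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  // 4 adds at once
        }
        for (; i < n; ++i)  // scalar tail when n is not a multiple of 4
            out[i] = a[i] + b[i];
    }

    int main() {
        float a[6] = {1, 2, 3, 4, 5, 6};
        float b[6] = {10, 20, 30, 40, 50, 60};
        float out[6];
        add_sse(a, b, out, 6);
        for (int i = 0; i < 6; ++i) std::printf("%g ", out[i]);
        std::printf("\n");  // prints 11 22 33 44 55 66
    }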

The broader plan for Ct is to provide a platform that allows developers to seamlessly move their software from today’s multicore chips to future manycore processors. So an application written for a quad-core Nehalem processor with SSE4 will transparently scale to an eight-core Sandy Bridge chip with Advanced Vector Extensions (AVX), and eventually to a Larrabee processor with its own native vector instruction set.

Beyond x86 support, the long-range vision for Ct is to apply the technology across a range of architectures. Again, Intel the chipmaker is less interested in this than Intel the software maker, whose customers are focused more on industry standards than on pledging allegiance to specific silicon.

Reinders is not quite sure how multi-architecture support will play out. Placing Ct into the open source realm, providing APIs into the code, and initiating direct engagements with interested parties are three possibilities. Alternatively, Ct could be engineered to sit on top of a low-level interface such as DirectX or OpenCL, which would provide its own avenue to target independence.

Underlying all this is customer demand for a parallel programming environment with enough staying power to bridge the multicore-to-manycore transition. There is a plethora of parallel programming products out there today (CUDA, RapidMind, Cilk++, UPC, and so on), but customers want assurance that their software won't have to be continually re-coded for new environments. People are just starting to deploy parallel applications on multicore architectures and are already worried that their current software model won't survive the trip to manycore.

But even the Ct story gets a little murky when you start talking about manycore. Larrabee, Intel’s first x86 manycore architecture, which coincidentally provides a lot of data parallel capability, is not the principal target of Ct, at least not yet. As we reported last year, the first implementation of Larrabee will be targeted at graphics and visual computing applications, not the more general-purpose technical computing applications (seismic analysis, financial analytics, scientific research, high-end imaging, etc.) that Ct is aimed at.

The contradiction here is that Larrabee has demonstrated (at least in simulated tests by Intel) almost perfect scaling across a range of Ct-enabled data parallel apps. No doubt this is due to the architecture’s strength in vector processing: each core includes a 512-bit vector processing unit that can operate on 16 single-precision floating-point values at a time. But since the first Larrabee products will have the same limitations for general-purpose computing as a traditional GPU, the initial offerings are not slated for HPC duty.

On the other hand, Reinders certainly expects HPC enthusiasts will want to experiment with Larrabee and will be interested in using Ct as the software platform for such work. At this point though, Intel hasn’t decided how much Larrabee support will end up in the initial version of Ct. “I think you can expect to see an answer to that by the end of the year, as Larrabee is coming available,” said Reinders.
