April 07, 2009
With the advent of general-purpose GPUs, the Cell BE processor, and the upcoming Larrabee chip from Intel, data parallel computing has become the hot new supermodel in HPC. And even though NVIDIA took the lead in this area with its CUDA language environment, Intel has been busy working on Ct, its own data parallel computing environment for manycore computing. On Wednesday at the Intel Developer Forum in Beijing, Senior Vice President Pat Gelsinger announced that the company's Ct research project is on its way to becoming a product, with a beta release scheduled for late this year.
Ct (C/C++ for throughput computing) is a high-level software environment that supports data parallelism in current multicore and future manycore architectures. According to James Reinders, whom I spoke with prior to Gelsinger's announcement, Ct allows scientists and mathematicians to construct algorithms in familiar-looking algebraic notation. Best of all, the programmer does not need to be concerned with mapping data structures and operations onto cores or vector units; Ct's high level of abstraction performs the mappings transparently. The technology also provides determinism so as to avoid races and deadlocks.
"The two big challenges in parallel computing are getting it correct and getting it to scale, and Ct directly takes aim at both," said Reinders.
Unlike CUDA, Brook+, or OpenCL, Ct takes a higher-level approach to data parallel processing, in which vectors may be represented as regular or irregular data collections. This enables the programmer to define sparse matrices, trees, graphs, or sets of value-key associations, as well as the more typical dense matrices. The language is implemented as an extension to C++ using the standard template facility, so legacy code can be expanded to include data parallelism by using new Ct data types and operators.
Intel will be adding Ct to its growing portfolio of parallel development tools, including the upcoming Parallel Studio suite, the company's C/C++ and Fortran compilers, Math Kernel Library, debugging and analysis tools, and the Intel Cluster Toolkit. Ct will also be interoperable with Threading Building Blocks (TBB) and Intel's OpenMP implementation so that task-level parallelism can be layered on top of Ct's data parallelism. "Our vision is that you could have TBB coordinating multiple tasks and those tasks could be coded using Ct," explained Reinders.
Although Ct is intrinsically target-agnostic, it does assume a general-purpose CPU-ish architecture with enough vector hardware to make data parallel computing worthwhile. Ct will not, however, support strictly SIMD architectures such as NVIDIA and AMD GPUs. Initially this means the first Ct implementation will target x86 multicore chips with Streaming SIMD Extensions (SSE) capability. Conveniently, this includes support for AMD x86 silicon too. All of Intel's current compilers and libraries support AMD processors, and Ct will be no different. Unlike the hardware side of the business, Intel's software customers expect x86 compatibility across company lines.
The broader plan for Ct is to provide a platform that allows developers to seamlessly move their software from today's multicore chips to future manycore processors. So an application written for a quad-core Nehalem processor with SSE4 will transparently scale to an eight-core Sandy Bridge chip with Advanced Vector Extensions (AVX), and eventually to a Larrabee processor with its own native vector instruction set.
Beyond x86 support, the long-range vision for Ct is to be able to apply the technology across a range of architectures. Again, Intel the chipmaker is not interested in this as much as Intel the software maker, whose customers are more focused on industry standards rather than pledging allegiance to specific silicon.
Reinders is not quite sure how multi-architecture support will play out. Placing Ct into the open source realm, providing APIs into the code, and initiating direct engagements with interested parties are three possibilities. Alternatively, Ct could be engineered to sit on top of a low-level interface to DirectX or OpenCL, which would provide its own avenue to target independence.
Underlying all this is the customer demand for a parallel programming environment with enough staying power to bridge the multicore-to-manycore transition. A plethora of parallel programming products exists today -- CUDA, RapidMind, Cilk++, UPC, and so on -- but customers want to make sure their software doesn't have to be continually re-coded for new environments. People are just starting to deploy parallel applications on multicore architectures today and are already worried that their current software model isn't going to survive the trip to manycore.
But even the Ct story gets a little murky when you start talking about manycore. Larrabee, Intel's first x86 manycore architecture, which coincidentally provides a lot of data parallel capability, is not the principal target of Ct -- at least not yet. As we reported last year, the first implementation of Larrabee will be targeted to graphics and visual computing applications, not the more general-purpose technical computing applications (seismic analysis, financial analytics, scientific research, high-end imaging, etc.) that Ct is aimed at.
The contradiction here is that Larrabee has demonstrated (at least in simulated tests by Intel) almost perfect scaling across a range of Ct-enabled data parallel apps. No doubt this is due to the architecture's strength in vector processing, where each core includes a 512-bit vector processing unit that can process 16 single-precision floating point numbers at a time. But since the first Larrabee products will have the same limitations for general-purpose computing as a traditional GPU, the initial offerings are not slated for HPC duty.
On the other hand, Reinders certainly expects HPC enthusiasts will want to experiment with Larrabee and will be interested in using Ct as the software platform for such work. At this point though, Intel hasn't decided how much Larrabee support will end up in the initial version of Ct. "I think you can expect to see an answer to that by the end of the year, as Larrabee is coming available," said Reinders.