December 12, 2012
Want to be the first one on your block to have a program running on Intel's new Xeon Phi coprocessor? There's a good in-depth article on how to go about it over at Dr. Dobb's Journal. Author Rob Farber walks through the different programming models available to would-be Phi developers and how to squeeze out maximum performance.
Farber points out that the Phi is essentially an x86 manycore SMP processor and supports the standard parallel programming models -- OpenMP and MPI, in particular. That means most applications can get up and running with a simple recompilation, using Intel's own developer toolset.
But according to a previous analysis by Farber, the limited memory capacity on the device will constrain performance for typical OpenMP and MPI applications. To get real performance out of the hardware, he says, you need to make sure you are taking advantage of the coprocessor's many cores and its muscular vector unit. "Massive vector parallelism is the path to realize that high performance," writes Farber.
While there are 60 cores on the Phi hardware Dr. Dobb's obtained (a pre-production part, apparently), four-way hyperthreading allows for up to 240 threads per chip. Testing suggested that an application should keep at least half of the available threads busy. It is tempting to think that non-vector codes could also benefit from the Xeon Phi on thread parallelism alone, but Farber thinks such applications will not be performance standouts on this platform.
Since the Phi is a PCIe device with just a few gigabytes of memory, it's also important to minimize data transfer between the CPU's main memory and the local store on the coprocessor card. That means doing as little data shuffling as possible and making sure the coprocessor has enough contiguous work to do using local memory. In fact, Farber maintains that much of the design effort to boost performance on the Phi will revolve around minimizing data transfers.
The article goes through an example of an OpenMP-based matrix code using the various programming models -- native (entire app runs on the Xeon Phi), offload (the host CPU runs the app; the compute intensive parts are offloaded), and host (the CPU does it all) -- and provides the performance results in each case.
In this case, the native model delivered the best performance, though not all that much better than the offload model. The host model was significantly slower -- on the order of 50 percent. Real applications, though, with more complex data transfer requirements, are apt to behave differently.
In any case, if you aspire to be a Phi developer, the whole article is worth a read.
Full story at Dr. Dobb's Journal