March 05, 2013
The Intel Xeon Phi has drawn comparisons to its accelerator-class brethren from NVIDIA (Kepler) and AMD (FirePro), but how does the Phi coprocessor measure up to its Xeon "Sandy Bridge" brand-mate? That is the topic of a recent blog from Xcelerit Senior Solution Architect Paul Sutton. The Phi coprocessor is tested against a pair of "Sandy Bridge" E5-2670 server processors, using the Monte-Carlo LIBOR Swaption Portfolio Pricing application as the benchmark.
Sutton starts with a rundown of the pertinent Xeon Phi 5110P stats. This x86-architecture manycore processor has 60 cores, each with four-way hyperthreading, for a total of 240 logical cores. The chip boasts a peak performance of one teraflop (double-precision).
The benchmark algorithm comes from the world of quantitative finance. It's a Monte-Carlo simulation that is used to price a portfolio of LIBOR swaptions (options on interest-rate swaps). Sutton explains that "thousands of possible future development paths for the LIBOR interest rate are simulated using normally-distributed random numbers." Each development path represents one Monte-Carlo path.
The test is performed on an HP ProLiant SL250 server configured with two Intel Xeon E5-2670 processors (eight cores each, hyperthreading disabled) and the Intel Xeon Phi 5110P coprocessor. The server has 64GB of RAM and runs Red Hat Enterprise Linux 6.2 (64-bit) with Intel Composer XE 2013.
The benchmark compares the performance of two Xeon E5-2670 processors to a single Xeon Phi. The application is run once on the two Sandy Bridge host CPUs (multi-threaded) and then again on the Xeon Phi coprocessor in offload mode, where the main executable runs on the host CPU and the Monte-Carlo computation is handled by the Phi chip.
Execution times are measured on each target configuration, and the results are recorded. A chart depicts the Phi-to-Sandy-Bridge speedup for both single- and double-precision runs.
At 100k paths, the Intel Xeon Phi begins to surpass the performance of the two Sandy Bridge CPUs. At one million paths, the Phi is three times faster than the pair of E5s. Sutton observes that the weaker Phi performance at lower path counts can be explained by "the added data transfers and the comparably low level of parallelism for a low number of paths (considering both vectorization and multi-threading)."
Interestingly, the speedup is more pronounced in double precision. At 128K paths, for example, the Phi is 1.05x faster in single precision but 1.24x faster in double precision.