Despite the shortened week here in the U.S. with the President’s Day holiday, there was plenty of news fed by the wellspring bubbling around the new Intel Xeon E7 v2 processors. This generation is split into segments optimized for everything from high performance computing (full details here) to large-scale analytics, with a keen focus on memory, core count and efficiency options (and prices that fluctuate accordingly).
The robust memory and 15-core configurations are being touted as ideal for demanding enterprise “big data” environments, while the 8-socket, 10-core HPC variant packs a price-performance punch, coming in at $3,838. As Intel’s Joe Curley told us in the wake of the announcement, “You end up with 120 cores in a node, 12 terabytes of memory maximum, you can fit a lot of density and compute there and there are HPC and many other workloads that this will be ideal for.”
As expected, a number of vendors jumped out front to announce their support for the new processors, including Silicon Mechanics, Supermicro, HP (with a new variant of the ProLiant series) as well as others who will formally release their new E7-equipped servers in the coming weeks.
Here on the HPCwire front, we’ve had something of an exascale-themed week on the podcast. We spoke at length with Dr. Jack Dongarra on Wednesday about the software and application challenges ahead for exascale-class computing. We also checked in on Thursday with Dr. Bill Tang from Princeton University about the G8-funded International Exascale Program. Both are certainly worth a listen, as several of their themes intersect.
For those interested in forward-looking HPC technologies, stay tuned as tomorrow (Friday) we’ll feature an extended podcast with Dr. Larry Smarr…touching on everything from brain-inspired and quantum computing to software and application construction for new computing models. This comes in the wake of his most recent award, the Golden Goose…interesting, indeed.
GPUs and the Xeon Phi coprocessor were at the heart of two other conversations we had during this week’s daily Soundbite episodes. We followed up on a popular article from last week about the use of GPU computing to tackle massive human settlement mapping and also talked with two scientists working on acceleration/coprocessing approaches to solving tough physics problems at CERN. More details about their considerations of GPUs and Xeon Phi coprocessors can be found in this more comprehensive overview of their challenges.
More Top News from the Week…
The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) has just completed its fiscal year 2013 investment in supercomputing capability supporting the DoD science, engineering, test and acquisition communities.
The total acquisition is valued at $50 million, covering multiple supercomputing systems and hardware, as well as software maintenance services. At nearly three petaFLOPS of computing capability, the acquisition constitutes a more than 50 percent increase in the DoD HPCMP’s current peak computing capability.
ScaleMP and Boston Limited announced a partnership this week. The Boston xScaler-vSMP products deliver up to 8.5 TB of RAM – which the companies say is the largest memory footprint for an AMD Opteron-based computing solution.
The xScaler-vSMP starts with 2 terabytes (TB) of memory coupled with AMD Opteron Model 6386 SE CPUs at an affordable price, starting at $60,000, with memory expandable in additional 2 TB increments. The companies noted this week that the xScaler-vSMP is ideal for high performance computing environments with high-speed memory requirements, especially those in the life sciences and in-memory analytics camps.
And For High Frequency Trading Gurus – Lucera Financial Infrastructures announced the availability of its high-performance infrastructure to power electronic trading by financial institutions and high-frequency trading (HFT) firms. The Lucera platform enables customers to accelerate time-to-market, establish high-speed connections to marketplaces and exchanges, and reduce operational and regulatory risk, all while eliminating the capital expense of building and operating a real-time network of customer and exchange connectivity.
“High frequency trading firms, exchanges, banks and hedge funds can reap huge savings and achieve significant improvements in performance by operating their infrastructure on Lucera,” said Jacob Loveless, CEO of Lucera. “Our customers are able to reach new levels of performance by freeing themselves from the time and capital constraints of internal legacy infrastructure. By using Lucera, clients will be able to connect to more customers and liquidity destinations with much better performance at significantly lower cost.”
For those interested, there’s an excellent, detailed writeup on the full scope of Lucera’s HFT effort over at EnterpriseTech.
On the Horizon
Just a few items worth mentioning for those who make the event circuit.
BigSystem 2014 Issues Call for Papers
ScilabTEC 2014 Registration Now Open
Oh, And Hey…
For those of you who are in HPC outside of the academic and research spectrum, there are a few things we’d like to ask you. Keep in mind that personal and company information is withheld for your security and privacy (as well as your organization’s), but we are curious to see where the emerging technologies we write about so often land on your R&D and practical to-do lists.
If you could, please take a gander at our 2014 Risks and Rewards survey, particularly if you and your organization are at the bleeding edge of change. From GPU and coprocessor use and beyond, your responses will help us deliver a free, open document that lets you see where your peers are placing their bets for 2014 and beyond.
Here’s a link to the survey… http://survey.gabrielconsultinggroup.com/limesurvey/index.php?sid=82547&lang=en
Thanks for considering—and for your time on this. We can’t wait to share what we’ve learned…but we need your input.