November 16, 2009
PORTLAND, Ore., Nov. 16 -- SC09 -- The Numerical Algorithms Group (NAG) announces new HPC performance milestones, including up to four times better performance from multicore optimization of materials science and quantum Monte Carlo applications, and runtime reductions of up to 25 percent from I/O tuning of an ocean modeling application. These are the early results of NAG's distributed Computational Science and Engineering (dCSE) support program for HECToR (the UK's national supercomputing facility), which now comprises over 30 dedicated application optimization projects complementing the traditional HPC user support provided by NAG.
In the first project to complete, CASTEP, a key materials science code used by academic researchers and industry, was enhanced with band-parallelism, allowing the code to scale to more than 1,000 cores. The speed of CASTEP on a fixed number of cores was also improved by up to four times over the original, representing a potential saving of around $3M in computing resources over the remainder of the HECToR service. The CASTEP project demonstrated the collaborative nature of the dCSE program, with the University of York undertaking the core development (eight person-months) in conjunction with NAG HPC staff and the Science and Technology Facilities Council.
In another project, the ocean modeling application NEMO (Nucleus for European Modelling of the Ocean) underwent optimization, including I/O techniques and variable-resolution approaches, to run 25 percent faster on relevant use cases. This represents a $600,000 saving in computing resources for that project, with potentially multi-million-dollar savings across all NEMO users. The six person-month project was performed by a collaboration of the National Oceanography Centre and the University of Edinburgh working with NAG HPC staff.
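The kind of I/O tuning mentioned above often comes down to issuing fewer, larger writes rather than many small ones -- a pattern that matters greatly on shared parallel file systems. The following minimal Python sketch (hypothetical function names; a generic illustration, not NEMO's actual I/O layer) shows the idea:

```python
import struct

def write_per_record(path, records):
    """One OS-level write per record -- slow on a parallel file system."""
    with open(path, "wb", buffering=0) as f:
        for r in records:
            f.write(struct.pack("d", r))

def write_batched(path, records, batch=4096):
    """Pack whole batches of records and write them as large chunks."""
    with open(path, "wb", buffering=0) as f:
        for i in range(0, len(records), batch):
            chunk = records[i:i + batch]
            f.write(struct.pack(f"{len(chunk)}d", *chunk))
```

Both functions produce byte-identical files; the batched version simply issues far fewer system calls, which is the kind of change that can cut I/O time substantially at scale.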
A third project optimized the quantum Monte Carlo code CASINO for better performance on multicore nodes by introducing shared-memory techniques and hierarchical parallelism. This resulted in performance gains of up to 4x on quad-core nodes, plus further gains from I/O optimizations for simulations using more than 10,000 cores. Following NAG's work, the scientists were able to run on 40,000 cores of the petaflops-class Jaguar supercomputer at Oak Ridge National Laboratory. This 12 person-month dCSE project was undertaken by NAG HPC staff working with users at University College London, and is estimated to have saved the researchers around $1M in computing resources on HECToR.
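The shared-memory idea can be illustrated in miniature: rather than every process on a node holding its own copy of a large read-only table, the processes attach to a single copy in shared memory. A toy Python sketch of the principle (Unix-only, using an anonymous shared mapping and fork; an illustration of the general technique, not CASINO's actual Fortran/MPI implementation):

```python
import mmap
import os
import struct

def shared_sum(values):
    """Sum a table of doubles in a child process that shares, rather
    than copies, the parent's data (Unix-only toy example)."""
    n = len(values)
    # One anonymous shared mapping: 8 bytes for a result slot,
    # then the table itself. The forked child sees the same pages.
    buf = mmap.mmap(-1, 8 + n * 8)
    buf[8:] = struct.pack(f"{n}d", *values)
    pid = os.fork()
    if pid == 0:
        # Child: read the table in place (no second copy is made)
        # and write the result into the shared slot.
        total = sum(memoryview(buf)[8:].cast("d"))
        buf[:8] = struct.pack("d", total)
        os._exit(0)
    os.waitpid(pid, 0)
    (total,) = struct.unpack("d", buf[:8])
    buf.close()
    return total
```

On a quad-core node with a multi-gigabyte wavefunction table, holding one shared copy instead of four replicas is what frees the memory headroom and bandwidth that produce the kind of speedups reported above.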
"These three examples of HPC software projects show the real performance advantages -- and cost savings -- to researchers from enhancing applications to run optimally on the latest HPC machines," said Andrew Jones, vice president of HPC consulting at NAG. "Investment in application performance and algorithms appropriate to the computer architecture has now become critical for efficient use of HPC resources and users' time."
NAG's dCSE projects enhance the performance of dozens of community HPC applications used on HECToR and all performance improvements are fed back into the community so that non-HECToR users can benefit too. Further examples can be found on the NAG Web site (http://www.nag.com/Market/casestudies.asp) or the HECToR Web site (http://www.hector.ac.uk/cse/).
NAG provides worldwide HPC consulting and services for academic, government and commercial organizations (http://www.nag.com/hpc). Inquiries can be directed to NAG's HPC Consulting team through http://www.nag.com/contact_us.asp. HECToR is a Research Councils UK High End Computing Service.
The Numerical Algorithms Group (NAG) is a not-for-profit numerical software development organization, headquartered in Oxford, England, and founded nearly four decades ago, that collaborates with world-leading researchers and practitioners in academia and industry. With offices in Manchester, Chicago, Tokyo and Taipei, and a worldwide distributor network, NAG provides high-quality computational software and high performance computing services to tens of thousands of users, from Global 500 companies, leading universities, the world's top supercomputing centers, numerous independent software vendors and many others.