December 10, 2010
Dec. 10 -- The EUREKA ITEA 2 software Cluster ParMA project has developed advanced technologies to exploit multicore architectures in semiconductor chips and so deliver substantial performance improvements for high-performance computing (HPC). ParMA technology has opened new possibilities in modelling and simulation and enabled the development of innovative compute-intensive applications that accelerate research in many domains. It offers substantial improvements in applications such as virtual prototyping, reducing costs and accelerating the design of new products. The results are already being exploited in the Bullx HPC platform, one of the world's best supercomputers; the UNITE tool package, which includes a full set of interoperable tools for advanced debugging and performance analysis; and RECOM simulation software for automatic 3D combustion optimisation.
Efficient computational power is a key differentiator for both research and industry, and it is instrumental in modelling, simulation and engineering design. Until relatively recently, processor power could be increased simply by boosting the clock frequency. However, ever smaller device sizes combined with ever greater processing needs meant that physical constraints such as heat dissipation, power consumption and current leakage demanded an alternative approach. Manufacturers tackled this by putting several processors working in parallel onto the same die -- the basic silicon chip -- in what is known as a multicore architecture.
As a result, software developers have been forced to parallelise their code: a sequential program uses only one core, and it now runs more slowly because clock frequencies have been reduced. Moreover, simply parallelising the code is not enough; the load on each core must be balanced, and the program must be made scalable so that it automatically adapts to the number of cores available. Such parallel programming is key to taking full advantage of multicore architectures.
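To make the point concrete, here is a minimal sketch in C using OpenMP (illustrative only, not code from the ParMA project): the runtime splits the loop iterations across however many cores are present, and a dynamic schedule rebalances the load when iterations take uneven amounts of time.

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];

    /* Initialise the input data sequentially. */
    for (int i = 0; i < N; i++)
        b[i] = (double)i;

    /* The parallel-for construct distributes iterations across all
     * available cores; schedule(dynamic) hands out chunks of 1024
     * iterations on demand, balancing the load between cores. */
    #pragma omp parallel for schedule(dynamic, 1024)
    for (int i = 0; i < N; i++)
        a[i] = b[i] * b[i];

    printf("ran on up to %d threads\n", omp_get_max_threads());
    return 0;
}
```

Compiled with `gcc -fopenmp`, the same binary typically starts one thread per available core, so it scales automatically as the hardware grows -- exactly the adaptability described above.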
Combining complementary interests
The EUREKA project arose out of presentations on complementary targets in HPC by the major French computer manufacturer Bull and the High Performance Computing Centre Stuttgart (HLRS) at the University of Stuttgart in Germany during the ITEA project outline days in spring 2006. The two organisations decided on a common project involving further partners from France, Germany, the United Kingdom and Spain.
A major problem was that existing parallel programming methods and tools could not cope with a high number of tasks or threads. The available techniques were diverse, could not easily be combined, covered only the main parallel programming models and ran on a limited number of platforms. Moreover, HPC application developers had little experience of parallelisation in terms of how to restructure code and organise data, and embedded-software developers knew very little about multicore architectures.
"The role of each partner was crystal clear from the beginning," explains Jean-Marc Morel of project leader Bull -- an ITEA founding company. "Getting or maintaining advanced technology in this domain is key for these actors and crucial to Europe as well for improving its competitiveness and independence. Indeed, a comprehensive, innovative, integrated and validated set of programming methods and tools to harness multicore architecture is critical for European research as well as European industry -- helping computing-intensive application developers to provide advanced modelling and simulation capabilities."
Improved tools and performance
Key results included the development of mature debugging and performance-analysis tools and their integration into a single, freely available package. ParMA also dramatically improved the performance of more than 12 industrial HPC applications, and the project resulted in superior HPC platforms.
Fast exploitation of results
A major outcome has been the rapid exploitation of results, with the impact on the partners' businesses already visible. Chief among these is customer satisfaction for the simulation-software vendors. RECOM, for instance, has signed an important contract because the optimisation achieved with ParMA enables it to use a genetic algorithm for automatic 3D combustion optimisation in a plant, a search involving several billion possible combinations of parameters. A typical result has been reduced fuel consumption at one plant, saving some €125,000 a year and cutting annual CO2 emissions by 16,000 tonnes.
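RECOM's optimiser itself is proprietary, but the underlying technique can be sketched. A genetic algorithm scores a population of candidate parameter settings, then recombines and mutates the fittest candidates to form the next generation; because each candidate is scored independently, the expensive evaluations parallelise naturally across cores. In the minimal C sketch below, the `evaluate` function is a hypothetical stand-in for a full 3D combustion simulation, and the population and parameter counts are illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>

#define POP   64    /* candidates per generation                 */
#define GENES 16    /* tunable plant parameters per candidate    */
#define GENS  100   /* generations to evolve                     */

/* Hypothetical fitness function: in RECOM's setting this would be
 * a complete 3D combustion simulation run. */
static double evaluate(const double *genes)
{
    double s = 0.0;
    for (int i = 0; i < GENES; i++)
        s -= (genes[i] - 0.5) * (genes[i] - 0.5); /* optimum at 0.5 */
    return s;
}

int main(void)
{
    double pop[POP][GENES], fit[POP];
    srand(42);

    /* Random initial population. */
    for (int p = 0; p < POP; p++)
        for (int g = 0; g < GENES; g++)
            pop[p][g] = (double)rand() / RAND_MAX;

    for (int gen = 0; gen < GENS; gen++) {
        /* Fitness evaluations are independent, so each could run on
         * its own core (e.g. with an OpenMP parallel for). */
        for (int p = 0; p < POP; p++)
            fit[p] = evaluate(pop[p]);

        /* Tournament selection, uniform crossover, 1% mutation. */
        double next[POP][GENES];
        for (int p = 0; p < POP; p++) {
            int a = rand() % POP, b = rand() % POP;
            int ma = fit[a] > fit[b] ? a : b;
            a = rand() % POP; b = rand() % POP;
            int mb = fit[a] > fit[b] ? a : b;
            for (int g = 0; g < GENES; g++) {
                next[p][g] = (rand() & 1) ? pop[ma][g] : pop[mb][g];
                if (rand() % 100 == 0)
                    next[p][g] = (double)rand() / RAND_MAX;
            }
        }
        for (int p = 0; p < POP; p++)
            for (int g = 0; g < GENES; g++)
                pop[p][g] = next[p][g];
    }

    /* Report the best candidate in the final population. */
    double best = evaluate(pop[0]);
    for (int p = 1; p < POP; p++) {
        double f = evaluate(pop[p]);
        if (f > best)
            best = f;
    }
    printf("best fitness after %d generations: %f\n", GENS, best);
    return 0;
}
```

In production, each call to the fitness function is a full simulation run, which is why harnessing many cores -- the capability ParMA delivers -- turns a search over billions of parameter combinations from intractable into practical.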
Other simulation-software vendor partners -- such as GNS for metal forming and crashworthiness and MAGMA for casting-process simulation -- have also been able to offer their customers better performance, more refined simulations, more accurate models and greater automation. And the improved competitiveness made possible by the Bullx HPC platform has enabled Bull to increase revenue substantially.