December 10, 2010
Dec. 10 -- The EUREKA ITEA 2 software Cluster ParMA project has developed advanced technologies to exploit multicore architectures in semiconductor chips and so deliver substantial performance improvements for high-performance computing (HPC). ParMA technology has opened up new possibilities in modelling and simulation and enabled the development of innovative compute-intensive applications to accelerate research in many domains. It offers substantial improvements in applications such as virtual prototyping, reducing costs and accelerating the design of new products. The results are already being exploited in applications such as: the Bullx HPC platform, one of the world's best supercomputers; the UNITE development tool package, which includes a full set of interoperable tools for advanced debugging and analysis; and RECOM simulation software for automatic 3D combustion optimisation.
Efficient computational power is a key differentiator for both research and industry. It is instrumental in modelling, simulation and engineering design. Until relatively recently, it was possible to increase the power of processors simply by boosting the clock frequency. However, ever smaller device sizes combined with ever greater processing needs have meant that physical constraints such as heat dissipation, power consumption and current leakage demanded an alternative approach. Manufacturers tackled this by putting several processors working in parallel onto the same die -- the basic silicon chip -- and developed what is known as multicore architecture.
As a result, software developers have been forced to parallelise their code; otherwise only one core is used to run a sequential program, and it executes more slowly because clock frequencies have been reduced. Moreover, simply parallelising the code is not enough: it is necessary to balance the load on each core, and to make the program scalable so that it automatically adapts to the number of cores available. Such parallel programming is key to taking full advantage of multicore architectures.
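ParMA itself concentrated on methods and tools rather than any single programming model, but a minimal OpenMP sketch in C (an illustration of the idea, not project code) shows how a loop can be spread across whatever cores are available, with dynamic scheduling rebalancing the load when iterations take uneven time:

```c
/* Minimal OpenMP sketch: the loop is split across however many cores the
 * runtime finds, and dynamic scheduling rebalances the load when
 * iterations have uneven cost. Illustrative only. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double data[N];
    double sum = 0.0;

    /* The runtime picks the thread count from the cores available (or the
     * OMP_NUM_THREADS environment variable), so the same binary adapts from
     * a dual-core laptop to a many-core HPC node without recompilation. */
    #pragma omp parallel for schedule(dynamic, 1024) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        data[i] = (double)i * 0.5;    /* stand-in for real per-element work */
        sum += data[i];
    }

    printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}
```

The schedule(dynamic) clause is what provides the load balancing: idle threads grab the next chunk of iterations as they finish, rather than being assigned a fixed share in advance.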
Combining complementary interests
The EUREKA project arose out of presentations on complementary targets in HPC by major French computer manufacturer Bull and the High Performance Computing Centre Stuttgart (HLRS) at the University of Stuttgart in Germany during the ITEA project outline days in spring 2006. The two organisations decided on a common project involving other partners from France, Germany, the United Kingdom and Spain.
A major problem was that existing parallel programming methods and tools were not able to cope with a high number of tasks or threads. The tools available were diverse, could not easily be combined, and supported only the main parallel programming techniques on a limited number of platforms. Moreover, HPC application developers had little experience of parallelisation in terms of how to restructure code and organise data. And embedded software developers knew very little about multicore architectures.
"The role of each partner was crystal clear from the beginning," explains Jean-Marc Morel of project leader Bull -- an ITEA founding company. "Getting or maintaining advanced technology in this domain is key for these actors and crucial to Europe as well for improving its competitiveness and independence. Indeed, a comprehensive, innovative, integrated and validated set of programming methods and tools to harness multicore architecture is critical for European research as well as European industry -- helping computing-intensive application developers to provide advanced modelling and simulation capabilities."
Improved tools and performance
Key results included the development of mature debugging and performance analysis tools and their integration into a single, freely available package. ParMA also dramatically improved the performance of more than 12 industrial HPC applications. And the project resulted in superior HPC platforms.
Other benefits included:
Fast exploitation of results
A major outcome has been the fast exploitation of results, with the impact on the partners' business already visible. The main benefit is customer satisfaction for the simulation software vendors. RECOM, for instance, has signed an important contract because the optimisation achieved with ParMA allows it to use a genetic algorithm for automatic 3D combustion optimisation in a plant, a search involving several billion possible combinations of parameters. A typical result has been reduced fuel consumption at one plant, saving some €125,000 a year and cutting annual CO2 emissions by 16,000 tonnes.
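As a purely illustrative, hypothetical sketch (not RECOM's actual software), a genetic algorithm tackles such a vast parameter space by evolving a small population of candidate operating settings rather than enumerating billions of combinations; in practice the expensive step is the combustion simulation behind each fitness evaluation, which is where parallel HPC resources come in:

```c
/* Hypothetical genetic-algorithm sketch over plant operating parameters.
 * The evaluate() function stands in for a full 3D combustion simulation. */
#include <stdio.h>
#include <stdlib.h>

#define POP    32     /* candidate parameter sets kept per generation */
#define PARAMS  8     /* e.g. burner settings, air/fuel ratios, ...   */
#define GENS  100

typedef struct { double p[PARAMS]; double fitness; } Candidate;

/* Placeholder objective: in reality each evaluation would run a costly
 * simulation and return, say, fuel use or emissions. */
static double evaluate(const Candidate *c)
{
    double cost = 0.0;
    for (int i = 0; i < PARAMS; i++)
        cost += (c->p[i] - 0.3) * (c->p[i] - 0.3);
    return -cost;                        /* higher fitness is better */
}

static double rnd(void) { return (double)rand() / RAND_MAX; }

/* Simple selection: order the population so the fittest come first. */
static void sort_by_fitness(Candidate *pop)
{
    for (int i = 0; i < POP; i++)
        for (int j = i + 1; j < POP; j++)
            if (pop[j].fitness > pop[i].fitness) {
                Candidate t = pop[i]; pop[i] = pop[j]; pop[j] = t;
            }
}

int main(void)
{
    Candidate pop[POP];

    for (int i = 0; i < POP; i++) {           /* random initial population */
        for (int j = 0; j < PARAMS; j++) pop[i].p[j] = rnd();
        pop[i].fitness = evaluate(&pop[i]);
    }
    sort_by_fitness(pop);

    for (int g = 0; g < GENS; g++) {
        /* Replace the weaker half with children of the fitter half. */
        for (int i = POP / 2; i < POP; i++) {
            const Candidate *a = &pop[rand() % (POP / 2)];
            const Candidate *b = &pop[rand() % (POP / 2)];
            for (int j = 0; j < PARAMS; j++) {
                pop[i].p[j] = rnd() < 0.5 ? a->p[j] : b->p[j]; /* crossover */
                if (rnd() < 0.05) pop[i].p[j] = rnd();         /* mutation  */
            }
            pop[i].fitness = evaluate(&pop[i]);
        }
        sort_by_fitness(pop);
    }

    printf("best fitness after %d generations: %f\n", GENS, pop[0].fitness);
    return 0;
}
```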
Other simulation software vendors in the project -- such as GNS for metal forming and crashworthiness and MAGMA for casting process simulation -- have also been able to provide their customers with better performance, more refined simulations, more accurate models and greater automation. And the improved competitiveness possible with the Bullx HPC platform has enabled Bull to increase revenue substantially.