May 21, 2008
Research project aims to make parallel programming easier for the masses by developing integrated Transactional Memory systems for multi-core computers
BARCELONA, Spain, May 21 -- A huge challenge facing the computing community today is how to make multi-core programming easier. With this goal in mind, the VELOX project, titled "An Integrated Approach to Transactional Memory on Multi-Core Computers" and funded with €4 million by the European Commission at the beginning of this year, launched its activities with the objective of delivering seamless transactional memory (TM) systems that integrate well at all levels of the system stack.
Coordinated by the Barcelona Supercomputing Center, the VELOX consortium gathers nine partners, including top research and system integration organizations such as the University of Neuchâtel, the Technische Universität Dresden, the École Polytechnique Fédérale de Lausanne, Tel Aviv University and Chalmers University of Technology, as well as leading integrators from the IT industry such as AMD, Red Hat and VirtualLogix SAS. This three-year project aims to produce research results that will enable Europe to take a leading position in key areas of the TM domain.
The adoption of multi-core chips as the architecture-of-choice for mainstream computing will undoubtedly bring about profound changes in the way software is developed. In this brave new era, programs will need to be rewritten to run in parallel on computers with multiple processing cores. One of the fundamental issues in developing parallel programs is coordinating orderly access to shared data. Relying on traditional techniques such as fine-grained locking for this coordination is viewed by most experts as a dead end, since locking is too complicated for the average programmer.
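To make the locking problem concrete, here is a minimal sketch in Haskell using a hypothetical bank-transfer example; it is not drawn from the VELOX project materials, and the transferLocked function and one-lock-per-account scheme are purely illustrative. The comments point out the lock-ordering discipline the programmer must get right to avoid deadlock.

```haskell
-- A sketch of the lock-based approach: a toy bank-account example
-- (not from the VELOX project) with one MVar lock per account.
import Control.Concurrent.MVar

type LockedAccount = MVar Int

-- Correct only if every thread acquires the two locks in the same global
-- order; two concurrent, opposite-direction transfers that lock the
-- accounts in different orders can deadlock.
transferLocked :: LockedAccount -> LockedAccount -> Int -> IO ()
transferLocked from to amount = do
  fromBal <- takeMVar from          -- lock the source account
  toBal   <- takeMVar to            -- lock the destination account
  putMVar from (fromBal - amount)   -- update and release the source
  putMVar to   (toBal + amount)     -- update and release the destination

main :: IO ()
main = do
  a <- newMVar 100
  b <- newMVar 0
  transferLocked a b 30
  readMVar a >>= print              -- 70
  readMVar b >>= print              -- 30
```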
The TM programming paradigm is a strong contender to become the approach of choice for replacing those coordination techniques and implementing atomic operations in concurrent programming. Combining sequences of concurrent operations into atomic transactions promises a great reduction in the complexity of both programming and verification, by making parts of the code appear to be sequential without the need to program fine-grained locks. Transactions remove the programming burden of figuring out the interactions among concurrent operations that happen to conflict when accessing the same locations in memory.
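For contrast, the same hypothetical transfer can be written as a single memory transaction. The sketch below uses GHC Haskell's existing Control.Concurrent.STM library as a stand-in for a TM system, since the VELOX stack itself is not described at the code level in this announcement.

```haskell
-- A sketch of the same transfer as a memory transaction, using GHC's
-- existing Control.Concurrent.STM library rather than the VELOX stack.
import Control.Concurrent.STM

type Account = TVar Int

-- The whole transfer is one atomic transaction: other threads never
-- observe the intermediate state, and no lock ordering is needed.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  fromBal <- readTVar from
  toBal   <- readTVar to
  writeTVar from (fromBal - amount)
  writeTVar to   (toBal + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)      -- conflicting transactions are detected
                                    -- and re-executed by the runtime
  readTVarIO a >>= print            -- 70
  readTVarIO b >>= print            -- 30
```

The programmer only declares which operations must appear atomic; detecting and resolving conflicting accesses to the same memory locations is left to the TM runtime, which is exactly the burden the paragraph above describes.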
"Thanks to the complementary skills of its partners, it will pave the way for key European researchers to make significant contributions to the ongoing revolution to make parallel programming easier for the masses," says Osman Unsal, leader of the VELOX project. Mateo Valero, director of BSC, stressed that "the VELOX project is crucial to enable the supercomputing applications of today to run on the laptops of the near future."
To make TM an effective tool, TM systems will need the right hardware and software support to provide scalability not only in terms of the number of cores, but also in terms of code size and complexity. The objective of the VELOX project is to understand how to provide such support by developing an integrated TM stack. Such a TM stack would span the system from the underlying hardware to high-end applications and would consist of the following components: CPU, operating system, runtime, libraries, compilers, programming languages and application environments. The team includes internationally recognized TM experts for each of those components. These fully integrated TM systems will not only improve the understanding of TM designs but will also greatly help the adoption of the TM paradigm by the European software industry, making it a tool-of-choice for concurrent programming on multi-core platforms.
Source: Barcelona Supercomputing Center