December 17, 2008
Dec. 17 -- Grid computing technology has long been the darling of cash-strapped academics in desperate need of raw processing power. Now a groundbreaking European research effort has created an industrial-strength platform already appearing in commercial applications.
The SIMDAT project has created a portfolio of tools and services that can finally bring the power of grid computing to industrial applications. Grids capture all the resources of connected computers, from storage to computation.
But until now, grids mostly languished in research labs, where they were used to provide massive processing power or to enable large-scale database management. SIMDAT developed essential business functions for grids, such as industrial-strength service-level agreements, management and security.
It will mean the advent of virtual organisations, a long-unfulfilled promise of information technology. Grids for business have huge applications in product development, both for data crunching and collaboration, and this was the focus of SIMDAT's work in the automotive, pharmaceutical, aerospace and weather sectors.
But that is just the beginning, and the ground broken by SIMDAT will prove a fertile field for grid technology over the next decade. The project's tools and solutions are relevant to other commercial areas, and SIMDAT partners are already looking at the potential of adapting their work to new industrial sectors, such as shipping and media production.
The commercialisation efforts are already well underway and began months before SIMDAT completed the EU-funded part of its work. Elements of SIMDAT's wide-ranging research are already appearing in commercial applications.
Take data compression, for example, one small aspect of SIMDAT's vast research and development programme. SIMDAT made three improvements that make large data transfers -- typical in grid applications -- more efficient.
First, it boosted basic compression by a factor of 10, a huge achievement in itself. Second, it developed meta-models. By looking at a series of related datasets, computer scientists found it was possible to 'summarise' their results in a meta-model that still described the whole series accurately, so data could be exchanged as a compact meta-model without losing fidelity.
The third improvement makes it possible to run complex queries against those summaries (such as why did the behaviour change, or what caused a fault?). By combining these achievements, SIMDAT developed state-of-the-art data compression for industrial grid deployments.
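The article does not describe SIMDAT's compression algorithms in detail, but the meta-model idea can be illustrated with a small sketch. The example below assumes a plain principal-component decomposition (via NumPy's SVD) as a stand-in for whatever model SIMDAT actually used: it summarises a family of related simulation results as a few shared modes plus a handful of coefficients per run, reconstructs them approximately, and runs one simple query on the summary. All names, sizes and numbers are invented for illustration.

```python
# Hypothetical sketch: summarising related simulation results with a
# low-rank "meta-model". PCA/SVD stands in for SIMDAT's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is one simulation run (e.g. one design variant)
# flattened into a vector of results; related runs are highly correlated.
base = rng.normal(size=5000)
runs = np.stack([base + 0.05 * rng.normal(size=5000) for _ in range(40)])

# Build the meta-model: a mean field plus a few shared principal modes.
mean = runs.mean(axis=0)
centred = runs - mean
u, s, vt = np.linalg.svd(centred, full_matrices=False)
k = 5                       # keep 5 modes instead of 40 full result vectors
modes = vt[:k]              # shared basis (the "meta-model")
coeffs = centred @ modes.T  # per-run coefficients, only k numbers per run

# Approximate reconstruction of every run from the compact representation.
approx = mean + coeffs @ modes
rel_err = np.linalg.norm(runs - approx) / np.linalg.norm(runs)
ratio = runs.size / (mean.size + modes.size + coeffs.size)
print(f"relative error ~{rel_err:.3f}, size reduction ~{ratio:.1f}x")

# A simple "query on the summary": which run deviates most from the common
# behaviour along the dominant mode (a stand-in for "why did it change?").
outlier = int(np.argmax(np.abs(coeffs[:, 0])))
print(f"run {outlier} differs most along the first mode")
```

The structural point is that related runs share most of their information, so only a common model and small per-run deviations need to be stored or transferred.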
"Data compression technology we developed is now used by most of the automotive companies in Germany and is going to be used by 30 percent of the automobile companies worldwide -- so it is already a mature product. And meta-modelling has become a standard technology inside BAE Systems for numerical optimisation," explains Clemens-August Thole, Fraunhofer SCAI, SIMDAT project coordinator.
Weather without borders
One of SIMDAT's most advanced commercialisation initiatives is VGISC (pronounced Vegis), the Virtual Global Information System Centre. "It is now deployed at 11 met centres worldwide and it is a prototype for a standard to be proposed by the World Meteorological Organisation (WMO)," Thole states.
Weather does not recognise frontiers, and while national organisations can easily access weather data within their territory, analysing border regions is a lot more difficult.
Currently, meteorologists and climate researchers must use different tools for data from different national weather centres. VGISC overcomes that problem by leaving all the management, conversion and delivery of data to the SIMDAT portfolio. The SIMDAT solution also provides analysis tools.
SIMDAT is partnered with weather centres in the UK, Germany and France, but VGISC implements part of the WMO Information System (WIS). "SIMDAT project is the first and only prototype for a WIS implementation. Ultimately, all the met centres worldwide would adapt this software," explains Thole. "That's the plan."
Scientists will be able to access data from anywhere in the world through their web browser. This will be a huge achievement, involving petabytes of information in one of the most complex scientific fields, spanning observations, simulations, analysis and prediction.
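To make the federated idea concrete, the sketch below shows one hypothetical way a single query could fan out across several national catalogues and return a merged result set. The centre names, record fields and URIs are invented for illustration; the real VGISC and WIS interfaces are not described in this article.

```python
# Hypothetical sketch of a federated catalogue search, in the spirit of
# VGISC: one query reaches several national centres, and the user sees a
# single merged result set. All names, records and URIs are invented.
from dataclasses import dataclass

@dataclass
class Record:
    centre: str      # which national centre holds the data
    parameter: str   # e.g. "temperature", "pressure"
    region: str      # coarse region tag
    uri: str         # where the actual dataset could be fetched

# Stand-ins for the catalogues exposed by each participating centre.
CATALOGUES = {
    "UK": [Record("UK", "temperature", "north-sea", "grid://uk/t2m/2008")],
    "DE": [Record("DE", "temperature", "north-sea", "grid://de/t2m/2008"),
           Record("DE", "pressure", "alps", "grid://de/mslp/2008")],
    "FR": [Record("FR", "temperature", "alps", "grid://fr/t2m/2008")],
}

def federated_search(parameter: str, region: str) -> list[Record]:
    """Query every centre's catalogue and merge the hits into one list."""
    hits = []
    for catalogue in CATALOGUES.values():
        hits += [r for r in catalogue
                 if r.parameter == parameter and r.region == region]
    return hits

# A border region is covered by more than one national centre, but the
# user gets one result set regardless of where the data actually lives.
for rec in federated_search("temperature", "north-sea"):
    print(rec.centre, rec.uri)
```

The design point is that data management, conversion and delivery stay with the individual centres, while the federation layer presents them as a single searchable resource.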
Setting the standards
SIMDAT is not only a commercial success; it is also important in the world of standards. The project has worked with the Web Services Resource Framework (WSRF), the Open Grid Forum and the World Wide Web Consortium (W3C), and has been active in global information systems through its work with the WMO.
SIMDAT is a vast project. "You have some results already available as... commercial products (more are to come within the next two years), and then there are also some basic research results, which are more ideas, shown in some prototypes. [These] might turn into commercial solutions, but then again might not," Thole notes.
The upshot, though, is that SIMDAT has already brought commercial solutions to industry, and helped to set the standards for the technology. The project's impact will be felt for a long time.
The SIMDAT project received funding from the ICT strand of the Sixth Framework Programme for research.
This is the third and last part of a three-part series on SIMDAT.
Source: ICT Results -- http://cordis.europa.eu/ictresults