June 18, 2008
Supercomputers and Grids – The Future European Ecosystem
June 18 -- The European Grid Initiative Design Study (EGI_DS) welcomes the collaboration of the major European supercomputer projects, such as the Partnership for Advanced Computing in Europe (PRACE) and the Distributed European Infrastructure for Supercomputing Applications (DEISA), to contribute to the construction of the future sustainable grid infrastructure in Europe.
EGI_DS Project Director Dieter Kranzlmüller states: “Science and research today require that the best computing tools and services are available at the right point in time. Supercomputers and grids are the scientists’ tools, which drive collaborative computational science to new frontiers. When shaping the future European ecosystem, we need to ensure that users are able to choose the most appropriate tools for their particular needs.”
This view is supported by Kimmo Koski, Managing Director of the Finnish IT Center for Science, CSC, and a member of the PRACE Management Board. “Building a European HPC ecosystem requires active collaboration amongst all stakeholders. Since users need various scales of computing resources -- from high-end supercomputers to clusters -- interoperability not only between the European petascale computing initiative PRACE and EGI, but also with major European projects such as DEISA and EGEE, is a definite requirement for building sustainable and scalable services. PRACE has a target to collaborate with key European projects, such as EGI, in building a competitive structure for advancing European computational science,” Koski explains.
The European Grid Initiative Design Study (EGI_DS) project represents an effort to establish a sustainable grid infrastructure in Europe. The preparatory work is carried out by the EGI Design Study, which is developing a model for the interaction between the new EGI Organization, the National Grid Initiatives (NGIs), and other potential stakeholders. The EGI Organization is expected to evolve over time to take on board new technologies and changing user needs. EGI should become one of the driving forces of tomorrow’s European research and technology, enabling science to remain at the cutting edge and industry to stay competitive, while ensuring sustainable service provisioning to users.
The achievements of the EGI_DS and the objectives for the EGI are being presented at ISC’08. EGI_DS Project Director Dieter Kranzlmüller will give a joint presentation with DEISA dissemination advisor Wolfgang Gentzsch asking: “Supercomputers or Grids: That is the Question.” The presentation will be held on Friday, June 20 from 11:00 a.m. to 12:45 p.m. and deals with issues related to the construction of a sustainable grid infrastructure in Europe. Additionally, members of the EGI_DS project team will be present throughout the exhibition at an information stand (C48) for discussions and material distribution.
The EGI Design Study (EGI_DS) project was launched in September 2007 with the support of the European Commission’s 7th Framework Programme. The project will continue until the end of November 2009. According to current plans, the EGI organization will begin operations in early 2010. The EGI_DS has nine principal partners (CESNET, CERN, CNRS, CSC, DFN, GRNET, GUP, INFN, and STFC) and is already endorsed by 38 National Grid Initiatives.
Source: European Grid Initiative
In quieter times, sounding the bell for funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill a sense of urgency...
In a recent solicitation, the NSF laid out needs for furthering its scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego building is today. That could be made possible by recent advances with Raspberry Pi computers.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined those latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts, and now the modeling of popping bubbles.