May 28, 2009
May 28 -- On Tuesday, May 26, Research Center Jülich reached a significant milestone for German and European supercomputing with the inauguration of three new supercomputers: JUGENE, the Blue Gene-based machine upgraded to petaflop performance; the supercomputer JUROPA; and the fusion machine HPC-FF. The symbolic start of all three computers was triggered by the German Federal Minister for Education and Research, Prof. Dr. Annette Schavan, the Prime Minister of North Rhine-Westphalia, Dr. Jürgen Rüttgers, and Prof. Dr. Achim Bachem, chairman of the board of directors at Research Center Jülich, in the presence of high-ranking international guests from academia, industry and politics.
The second new supercomputer, JUROPA (which stands for Juelich Research on Petaflop Architectures), will be used Europe-wide by more than 200 research groups, in particular for data-intensive applications. JUROPA is based on a cluster configuration of Sun Blade servers, Intel Nehalem processors, and the ParaStation cluster operating software from ParTec Cluster Competence Center GmbH, Munich.
The system was jointly developed by experts at the Jülich Supercomputing Centre and implemented with the partner companies Bull, Sun, Intel, Mellanox and ParTec. It consists of 2,208 compute nodes with a total computing power of 207 teraflop/s and was funded by the Helmholtz Association.
Fusion Computer HPC-FF
The concept for the third supercomputer, HPC-FF (High Performance Computing for Fusion), was drawn up by the team headed by Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, and was optimized and implemented together with the partner companies Bull, SUN, Intel, Mellanox and ParTec. "HPC-FF is closely coupled to the JUROPA system, so if required, fusion researchers can access computing power totaling 300 teraflop/s," says Lippert.
The new best-of-breed cluster system, one of Europe's most powerful, will support advanced research in many areas such as health, information, environment, and energy. It will consist of 1,080 compute nodes, each equipped with two quad-core Nehalem EP processors from Intel, for a total of 8,640 cores clocked at 2.93 GHz; each node can access 24 gigabytes of main memory. Its total computing power of 101 teraflop/s currently corresponds to 30th place on the list of the world's fastest supercomputers. The combined cluster will achieve 300 teraflop/s of computing power, and its rating will appear in the Top500 list to be published soon at ISC'09 in Hamburg; a rank among the top 10 supercomputers in the world can be expected.
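As a rough sanity check, the quoted 101 teraflop/s matches the theoretical peak obtained from core count, clock rate, and floating-point operations per cycle. A minimal sketch, assuming four double-precision flops per cycle per Nehalem core (an architectural assumption, not a figure from the article):

```python
# Theoretical peak of HPC-FF: nodes x sockets x cores x clock x flops/cycle
nodes = 1080
sockets_per_node = 2          # two Nehalem EP packages per node
cores_per_socket = 4          # quad-core
clock_hz = 2.93e9
flops_per_cycle = 4           # assumed: SSE add + multiply, 2-wide, per cycle

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(cores, round(peak_tflops, 1))   # 8640 cores, ~101.3 teraflop/s
```

The result agrees with both the 8,640-core count and the 101 teraflop/s peak quoted above.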
InfiniBand ConnectX QDR from the Israeli company Mellanox is used as the node interconnect. The administrative infrastructure is based on NovaScale R422-E2 servers from the French supercomputer manufacturer Bull, which also supplies the compute hardware, and on a Sun ZFS/Lustre file system. The cluster operating system ParaStation V5 is supplied by the Munich software company ParTec. "ParTec's ParaStation V5 cluster operating system, combined with quad-data-rate (40 Gb/s) InfiniBand-based high-performance systems, delivers an integrated, easy-to-use and reliable compute cluster environment," says Hugo Falter, COO of ParTec GmbH. "This cluster will provide the foundation for the next generation of cluster computers to the worldwide community of users and scientists."
HPC-FF is being funded by the European Commission (EURATOM), the member institutes of EFDA and Forschungszentrum Jülich. The complete system has 3288 compute nodes, 79 TB main memory, 26304 cores, and 308 teraflops peak performance.
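The quoted totals for the complete system are simply the sums over the JUROPA and HPC-FF partitions. A minimal sketch verifying the arithmetic, using the per-partition figures given above:

```python
# Combined JUROPA + HPC-FF system totals as quoted in the article
juropa_nodes, hpcff_nodes = 2208, 1080
cores_per_node = 8                      # two quad-core Nehalem EP sockets
juropa_tflops, hpcff_tflops = 207, 101  # peak performance per partition

total_nodes = juropa_nodes + hpcff_nodes      # 3288
total_cores = total_nodes * cores_per_node    # 26304
total_tflops = juropa_tflops + hpcff_tflops   # 308
print(total_nodes, total_cores, total_tflops)
```

All three totals (3,288 nodes, 26,304 cores, 308 teraflop/s peak) reproduce the figures in the paragraph above.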
ParaStation and GridMonitor -- innovative software solutions not only for petaflop computers
The cluster operating and management software ParaStation and the GridMonitor tool play a significant role as a prerequisite for productive high-performance clusters.
ParTec's many years of experience in developing innovative software solutions for cluster computing provide the know-how needed to enable "high-productivity cluster computing." The driving force behind our development of operating and management software for high-performance solutions is the motivation to relieve customers of the many time-consuming daily problems of cluster administration and, through our service and support, to make their clusters as productive as possible.
"Science and industry increasingly rely and profit from simulations on computers of the highest performance class," explained Prof. Thomas Lippert, director of the Jülich Supercomputing Centre."
"Our partnership with Jülich, Bull, Mellanox, SUN and Intel, marks a significant step in the development of commodity supercomputer systems," says Hugo Falter, COO of ParTec GmbH "We expect this alliance to deliver key components for general-purpose petascale cluster systems in Europe."
ParTec Cluster Competence Center GmbH specializes in the development of comprehensive cluster software and the support of productive supercomputers. ParaStation, a cluster software stack developed in-house that creates a reliable, stable and highly efficient parallel environment for Linux clusters, is one of the world's leading operating and management platforms. ParTec provides vendor-independent consultancy for a well-founded choice of products, as well as support for the professional operation of Linux compute clusters. Our approach ensures rapid deployment cycles and seamless interoperability of software components. ParTec is a member of major European and worldwide research consortia (e.g., PROSPECT, PRACE, D-GRID, UNICORE), contributing to the development of Juropa-II, the next petaflop supercomputer architecture. ParTec is headquartered in Munich, Germany. www.par-tec.com.
Source: ParTec Cluster Competence Center GmbH