June 18, 2012
June 18 -- The 39th TOP500 list (http://www.top500.org/lists/2012/06) was released on Monday, June 18, at the 2012 International Supercomputing Conference (ISC) in Hamburg, Germany. Four European systems are in the top 10: SuperMUC at #4, FERMI at #7, JUQUEEN at #8 and CURIE thin nodes at #9.
SuperMUC, installed at Leibniz-Rechenzentrum (LRZ@GCS), Germany, ranks #4 in the TOP500, making it the most powerful system in Europe. SuperMUC is an IBM System x iDataPlex. It is equipped with more than 155,000 processor cores, which deliver an aggregate peak performance of more than 3 Petaflop/s (3 quadrillion floating point operations per second, a 3 with 15 zeroes). More than 330 Terabytes of main memory are available for data processing, and data can be transferred between nodes via a non-blocking InfiniBand network with a fat-tree topology. In addition, up to 10 Petabytes of data can be stored temporarily in parallel file systems based on IBM's GPFS. For permanent storage of user data such as program source code and input data, a highly reliable NetApp storage solution with more than 4 Petabytes of capacity is available. Furthermore, magnetic tape libraries with a capacity of 16.5 Petabytes are available for long-term archiving of data. "Since it is built from processors with a standard instruction set, well known from laptops, PCs and servers, SuperMUC is especially user friendly. This makes adapting user software much easier than on many other TOP500 systems, which can only achieve high performance through special accelerators and can hardly be used for the vast majority of application programs," explains Prof. Dr. Arndt Bode, Chairman of LRZ.
FERMI, the IBM Blue Gene/Q supercomputer available to the Italian and European scientific community, is today the 7th most powerful system worldwide. The new Italian computing system is an IBM Blue Gene/Q configured with 10,240 PowerA2 sockets running at 1.6 GHz, with 16 cores each, for a total of 163,840 compute cores and a system peak performance of 2.1 PFlop/s. Each processor comes with 16 GB of RAM (1 GB per core). A complex I/O storage subsystem with a total capacity on the order of ten Petabytes and a front-end bandwidth in excess of 100 GB/s complements the computing system. "FERMI represents a renewed effort to make a diverse and powerful offering of computational resources available to the research community," said Sanzio Bassini, Director of the CINECA Supercomputing Department. "Our main mission with this new large system is to give breakthrough support to Europe's key research players in facing unsolved societal and scientific challenges."
JUQUEEN, installed at Jülich Supercomputing Centre (JSC@GCS), Germany, claims 8th place. JUQUEEN is dedicated to users at Forschungszentrum Jülich and RWTH Aachen University. In October 2012 the system will be extended and will then also replace JUGENE as a Tier-0 system, providing a share of its capacity to the world's leading scientists via the PRACE Calls for Proposals. JUQUEEN is an 8-rack IBM Blue Gene/Q, offering a peak performance of 1.68 Petaflop/s on 131,072 CPU cores with a total of 131 TB of memory.
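The peak figures quoted for the two Blue Gene/Q systems can be checked with simple arithmetic. A minimal sketch, assuming the standard Blue Gene/Q figure of 8 double-precision flops per core per cycle (4-wide fused multiply-add), which is not stated in the article:

```python
# Sanity-check of the Blue Gene/Q peak-performance figures quoted above.
# Assumption (not in the article): each PowerA2 core retires 8 double-
# precision flops per cycle via its 4-wide QPX fused multiply-add unit.

FLOPS_PER_CYCLE = 8      # assumed: 4-wide SIMD FMA = 8 flops/cycle/core
CLOCK_HZ = 1.6e9         # 1.6 GHz, as quoted for FERMI

def peak_pflops(cores, clock_hz=CLOCK_HZ, flops_per_cycle=FLOPS_PER_CYCLE):
    """Theoretical peak performance in Petaflop/s for a given core count."""
    return cores * clock_hz * flops_per_cycle / 1e15

fermi = peak_pflops(10_240 * 16)   # 10,240 sockets x 16 cores = 163,840 cores
juqueen = peak_pflops(131_072)     # 8-rack system

print(f"FERMI:   {fermi:.2f} PFlop/s")    # ~2.10, matching the quoted 2.1
print(f"JUQUEEN: {juqueen:.2f} PFlop/s")  # ~1.68, matching the quoted 1.68
```

Both results line up with the numbers in the article, which confirms that the quoted peaks are theoretical (clock rate times flops per cycle), not measured Linpack performance.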
CURIE thin nodes, which came in 9th on this impressive list of supercomputers, is a supercomputer of GENCI, located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. CURIE is a BULL x86 system based on a modular and balanced architecture of thin nodes (5,040 blades, each with 2 sockets based on the latest Intel Xeon E5-2680 processor), large nodes (90 servers, each with 128 cores and 512 GB of memory) and hybrid nodes (144 blades with a total of 288 NVIDIA M2090 GPUs), with more than 360 TB of distributed memory and 15 PB of shared disk. Altogether, CURIE delivers a peak performance of 2 Petaflop/s (2 million billion operations a second). "CURIE has been fully available since March 1, 2012, and the first outstanding results in life sciences, cosmology, climate modelling, fusion and combustion are proving the need for Europe, through PRACE, to deploy such multi-petascale systems in support of worldwide academic and industrial competitiveness," said Stéphane Requena, CTO of GENCI, one of the 4 hosting members of PRACE.
Two further PRACE Tier-0 systems are listed in the TOP500: Hermit, the Cray XE6 system of the High Performance Computing Center Stuttgart (HLRS@GCS), ranked #24, and JUGENE (the Jülich Blue Gene/P), the Gauss Centre for Supercomputing's (GCS) IBM Blue Gene/P system, ranked #25. These two systems were ranked #12 and #13 respectively only six months earlier (November 2011), which clearly shows how fierce the competition is and how fast the race has to be run.
Catherine Rivière, appointed as PRACE Council Chair on 6th June 2012, congratulates the PRACE hosting members on this achievement: “3 of the 5 machines available through PRACE have made the top 10 out of 500 leading supercomputing systems. This clearly shows that Europe offers the world's best systems to the world's best science.”
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure (RI) provides a persistent world-class High Performance Computing (HPC) service for scientists and researchers from academia and industry. The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements n° RI-261557 and n° RI-283493.
Leibniz-Rechenzentrum (LRZ@GCS) is part of the Gauss Centre for Supercomputing (GCS).
CINECA is a non-profit Interuniversity Consortium of 54 Italian Universities, The National Institute of Oceanography and Experimental Geophysics - OGS, the National Research Council - CNR, and the Ministry of Education, University and Research - MIUR.
The Jülich Supercomputing Centre (JSC@GCS) is part of the Gauss Centre for Supercomputing (GCS).
GENCI, Grand Equipement National de Calcul Intensif, is a «société civile» under French law, co-owned by the French State, represented by the Ministry for Higher Education and Research, by CEA, by CNRS, by the Universities and by INRIA.
The High Performance Computing Center Stuttgart (HLRS@GCS) of the University of Stuttgart is part of the Gauss Centre for Supercomputing (GCS) and supports researchers and industry with leading edge supercomputing technology.