May 27, 2010
PARIS, May 27 -- The Military Applications Department of the French Atomic Energy Authority (the CEA) and Bull are today announcing that the CEA's new Tera 100 supercomputer has been powered up for the first time.
The result of a collaborative program between Bull and the CEA which began in 2008, Tera 100 is the first petaflops-scale supercomputer ever designed and developed in Europe. Its theoretical peak performance of 1.25 petaflops ranks it among the three most powerful supercomputers in the world. Tera 100 is destined for the French nuclear weapons simulation program, aimed at guaranteeing the reliability of nuclear deterrent weapons.
Tera 100 was powered up on 26 May 2010, just a few weeks after its installation in March 2010. Tera 100 consists of 4,300 bullx S Series servers, the model Bull launched commercially in April 2010. It features 140,000 Intel Xeon 7500 processing cores, 300 TB of central memory and a total storage capacity of over 20 PB. Its 500 GB/sec throughput to the global file system is a world record for a system of this type.
Tera 100 offers exceptional processing capacity. By way of comparison, it can effectively carry out more operations in a single second than the world's population would be capable of performing in 48 hours if each person completed one operation a second, day and night. Its capacity to transfer information is equivalent to a million people watching high-definition films simultaneously and its storage capacity corresponds to over 25 billion books.
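The "world's population for 48 hours" comparison can be checked with simple arithmetic. A quick sketch, assuming a world population of roughly 6.8 billion in 2010 (the press release does not state the figure it used):

```python
# Back-of-the-envelope check of the press release's comparison.
# Assumptions (not stated in the source): ~6.8 billion people in 2010,
# each performing one operation per second, day and night.

peak_ops_per_sec = 1.25e15        # Tera 100 theoretical peak, operations/sec
population = 6.8e9                # approximate world population in 2010
ops_per_person_per_sec = 1

# Operations the world's population would perform in 48 hours
human_ops_48h = population * ops_per_person_per_sec * 48 * 3600

print(f"Tera 100, one second: {peak_ops_per_sec:.3e} operations")
print(f"Humanity, 48 hours:   {human_ops_48h:.3e} operations")
print(f"Ratio: {peak_ops_per_sec / human_ops_48h:.2f}")
```

With these assumptions humanity manages about 1.18 × 10^15 operations in 48 hours, slightly less than the machine's single-second peak, so the claim holds.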
"Tera 100 being powered up represents a significant industrial success," commented Jean Gonnord, computer simulation and IT project director at the CEA. "It highlights both the CEA's and Bull's expertise in developing ultra high-performance technologies, to the highest level worldwide, and it fully validates the industrial and research partnership that the CEA and Bull have succeeded in developing: a partnership whose outputs will immediately benefit the whole European scientific and industrial community."
"We are extremely proud of this successful achievement in petaflops-scale systems," confirmed Philippe Miltin, vice president of Bull's Products and Systems Division. "These kinds of technologies are not only fundamentally important for applications such as those at the CEA, but also for the design of the new generation of computing power plants and massive cloud computing infrastructures, which is why expertise in petaflops technologies is a major asset for France, and for Europe as a whole."
"As the biggest system ever designed around Intel Xeon processors, Tera 100 demonstrates the suitability of Intel processors for high-performance computing, in terms of cost, power consumption and processing power. We are very proud to be involved in this major project, alongside the CEA and Bull," commented Kirk Skaugen, vice president and general manager of Intel's Data Center Group.
Close co-operation between Bull and the CEA
The Tera 100 program is a close collaboration between Bull and the CEA in the design and development of new Extreme Computing technologies.
To meet the CEA's requirements, the new supercomputer is distinguished by its ability to run a wide spectrum of applications, its effective balancing of computing power and data flows, and its fault tolerance. A true general-purpose high-productivity system, Tera 100 has been developed around Bull architecture and technologies featuring a vast array of open software and the newest generation Intel Xeon 7500 processors.
In particular, Bull has provided its expertise in the design and production of high-performance servers, as well as the development of the software needed to run such large-scale systems. The CEA, for its part, provided its know-how in system specification, IT architecture and software development, as well as its in-depth understanding of large-scale datacenter infrastructures. Several hundred very high-level engineers and researchers have been involved in this project.
Compared with Tera 10, which went into production in 2005, Tera 100 is 20 times more powerful, occupies the same floor space and is seven times more energy efficient. A few months after bullx was named best supercomputer of the year in the USA, Tera 100 confirms the technological expertise that Bull has built up, as well as the CEA's in-depth knowledge of complex infrastructures for high-performance computing (HPC). The success of Tera 100 also highlights the leading role that architectures based on standard components now play in HPC, especially those combining Intel Xeon processors, the Linux operating system and open source software.
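The stated gains over Tera 10 can also be sanity-checked arithmetically. A minimal sketch, assuming Tera 10's peak was roughly 60 teraflops (a figure not given in this article):

```python
# Rough arithmetic behind the Tera 10 vs Tera 100 comparison.
# Assumption (not in the source): Tera 10 peak of ~60 teraflops.

tera10_peak = 60e12          # assumed Tera 10 peak, operations/sec
tera100_peak = 1.25e15       # Tera 100 peak, operations/sec

speedup = tera100_peak / tera10_peak    # ~21x, consistent with "20 times"
efficiency_gain = 7                     # stated: 7x more energy efficient

# If performance rose ~20x but performance-per-watt rose 7x,
# total power draw implied by these figures rose by roughly:
power_ratio = speedup / efficiency_gain

print(f"Speedup: {speedup:.1f}x")
print(f"Implied power increase: {power_ratio:.1f}x")
```

Under this assumption the figures are self-consistent: a ~20x performance gain at 7x better energy efficiency implies roughly a 3x increase in total power draw in the same floor space.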
About the CEA
The French Alternative Energies and Atomic Energy Commission (CEA) leads research, development and innovation in four main areas: low-carbon energy sources, global defense and security, information technologies and healthcare technologies. The CEA's leadership position in the world of research is built on a cross-disciplinary culture of engineers and researchers, ideal for creating synergy between fundamental research and technological innovation. With its 15,600 researchers and collaborators, it has internationally recognized expertise in its areas of excellence and has developed many collaborations with national and international, academic and industrial partners.
About Bull
Bull is an information technology company dedicated to helping corporations and public sector organizations optimize the architecture, operations and financial return of their information systems and related mission-critical business processes. Bull focuses on open and secure systems, and as such is the only European-based company offering expertise in all the key elements of the IT value chain. For more information, visit http://www.bull.com.