June 22, 2011
MOSCOW, June 22 — T-Platforms, an international developer of supercomputers and supplier of a full range of solutions and services for high-performance computing, announced the completion of its project to modernize Russia’s most powerful supercomputer, “Lomonosov”. As a result, the performance of the computer complex, installed at Lomonosov Moscow State University, has reached 1.3 PFLOPS, unsurpassed in Russia, earning it 13th place in the latest edition of the Top500 list of the world’s most powerful supercomputers.
The Lomonosov supercomputer was created in 2009 to conduct fundamental research in aerospace, nuclear, biomedical, oil and gas and many other branches of science. This computer complex, which had a peak performance of 420 TFLOPS when it was created, has come to embody the potential of Russia’s supercomputer industry. A host of scientific projects, commissioned by some of the largest state and commercial corporations in the country, have been carried out on the Lomonosov complex. The ever-growing number and complexity of the tasks being set meant that the supercomputer’s performance had to be improved, so the management of the Moscow State University computer center, together with T-Platforms’ leading specialists and developers, decided to undertake a two-stage project to modernize the computer complex.
The first stage, completed at the end of 2010, raised the supercomputer’s peak performance to 510 TFLOPS. During the second stage of the project, cutting-edge TB2-TL hybrid blade systems with NVIDIA Tesla X2070 accelerators were added. In the course of the modernization, the complex’s data storage was also expanded by 100 TB. As a result, the initial performance of the Lomonosov supercomputer has been more than tripled, and a reliable technological basis has been created for tackling the large-scale scientific problems of the foreseeable future.
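As a quick illustration of the arithmetic behind these figures, the short Python sketch below relates the quoted peak-performance numbers to one another; the values (420 TFLOPS, 510 TFLOPS, 1.3 PFLOPS) are taken from the text above, and the script is only an illustrative check of the “more than tripled” claim.

    # Quick arithmetic check of the peak-performance figures quoted above.
    initial_tflops = 420.0     # peak performance when Lomonosov was installed in 2009
    stage_one_tflops = 510.0   # after the first modernization stage (end of 2010)
    stage_two_tflops = 1300.0  # 1.3 PFLOPS after adding TB2-TL blades with Tesla X2070 GPUs

    print(f"Stage 1: {stage_one_tflops / initial_tflops:.2f}x the original peak")  # ~1.21x
    print(f"Stage 2: {stage_two_tflops / initial_tflops:.2f}x the original peak")  # ~3.10x, i.e. "more than tripled"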
“No large scientific projects, whether in the study of global climate change, nanotechnologies, geological surveying, or the development of new materials and the analysis of their interaction, can do without supercomputers today. The complexity of these tasks is growing day by day, which requires us to boost the performance of computer complexes,” says Vsevolod Opanasenko, T-Platforms’ CEO. “In the project to modernize the Lomonosov supercomputer, we relied on T-Platforms’ state-of-the-art technological developments, which enabled us to raise the complex’s peak capacity to a record-breaking 1.3 PFLOPS. It is no wonder that such impressive results did not go unnoticed by the Top500 authors, and in its latest edition the Moscow University supercomputer took a well-deserved 13th place among the world’s most powerful supercomputers.”
“The scientists at our university have seen for themselves the rich potential of our supercomputers when carrying out practical scientific research,” remarked academician Viktor Sadovnichy, Rector of Lomonosov MSU. “Today we can be proud of the new materials and technologies developed with the help of modeling on the supercomputer, and in the future, the next-generation high-performance complexes will serve as a basis for the majority of fundamental research. Increasing the performance of the Lomonosov computer complex will enable us to carry out this work at an entirely new level that could not have been attained before.”
“Many key studies at our university are undertaken on the basis of high-performance computer complexes. Moreover, we are actively cooperating with the leading international supercomputer centers,” said Vladimir Voevodin, Deputy Director of the Research Computing Center at Lomonosov Moscow State University. “However, after a while the existing facilities became insufficient for carrying out a number of scientific projects, so a decision was taken to modernize the Lomonosov supercomputer complex. In the course of this project, T-Platforms added computing modules with accelerators to the complex, noticeably increasing the efficiency of massively parallel computing. In this way, we have created a technological reserve for our scientific activity, and the global supercomputer community has once again seen a demonstration of what Russia can do.”