September 29, 2008
MOSCOW, Sept. 29 -- The Research Computer Center of Lomonosov Moscow State University and the Interdepartmental Supercomputer Center of RAS have announced the release of the 9th edition of the Top50 list of the most powerful computers in Russia and the CIS (the former Soviet Union countries). The new edition was announced on Sept. 23 at the All-Russian scientific conference "Scientific Service on the Internet: Solving Large-Scale Problems." T-Platforms retains the leading position by number of systems on the list (18), followed by Hewlett-Packard (11) and IBM (8).
The 9th Top50 edition shows further growth in the performance of the most powerful computers in the CIS. The total Linpack performance of the listed systems increased 1.6 times within half a year, from 197.1 to 330.1 teraflops. New systems (including systems upgraded during the last half year) make up 46 percent of the list.
In the 9th edition of the rating, T-Platforms not only preserved its previous achievements but also set a new record. The sustained Linpack performance of the SKIF MSU supercomputer amounts to 78.6 percent of its peak performance, the best efficiency at release time among all quad-core Intel Xeon-based systems in the first hundred of the TOP500 list. The 26th place in the new rating is occupied by the supercomputer installed at the United Institute of Informatics Problems of the National Academy of Sciences of Belarus, whose sustained performance amounts to 81.4 percent of peak.
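The efficiency figures quoted here are simply sustained Linpack throughput divided by theoretical peak. A minimal sketch of that arithmetic, with hypothetical Rmax/Rpeak values chosen only to reproduce the 78.6 percent figure (the article does not state the absolute numbers):

```python
def linpack_efficiency(rmax_tflops, rpeak_tflops):
    """Linpack efficiency: sustained performance (Rmax) as a
    percentage of theoretical peak performance (Rpeak)."""
    return 100.0 * rmax_tflops / rpeak_tflops

# Hypothetical values: 47.17 TFlops sustained on a 60.0 TFlops peak machine
print(round(linpack_efficiency(47.17, 60.0), 1))  # 78.6
```

Efficiency, not raw TFlops, is what the article highlights: two machines with the same peak rating can differ widely in the fraction of it they actually deliver on the benchmark.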
The share of systems used in science and education, as well as in industry, has increased (from 29 to 30 and from 5 to 7, respectively), while the number used in the financial and research sectors has declined (from 2 to 1 and from 14 to 12, respectively). As before, most systems on the list (49) are based on cluster architecture. The number of computers with sustained performance above 1 TFlops has grown from 25 to 38, and the performance threshold for the top ten has almost doubled, from 5.2 TFlops to 10.3 TFlops. Entry to the Top50 now requires Linpack performance of at least 737.7 GFlops.
The number of systems based on Intel processors is growing (from 38 to 40), while the number based on AMD processors has fallen (from 6 to 4). The number of systems based on IBM processors (5) and on HP processors (1) remains unchanged. Core counts are also rising: every system in the new edition has at least 96 processor cores, and 14 systems already exceed 1,024 cores.
Fewer systems use Gigabit Ethernet as the node interconnect (8 instead of 9), while the use of InfiniBand grew from 31 to 33 systems at the expense of Myrinet (down from 8 to 6 systems).
About the Top50 Rating
The Top50 rating, a joint project of the Interdepartmental Supercomputer Center of RAS and the Research Computer Center of MSU, was launched in May 2004 to rank the 50 most powerful computing systems installed in the CIS countries. Systems are ranked by sustained performance obtained on the Linpack benchmark, in keeping with the world standard. The rating is updated twice a year, making it possible to track trends in the development of the supercomputer industry in the CIS as they emerge. The rating's Web site is http://www.supercomputers.ru.
T-Platforms is the leading Russia-based developer and provider of turn-key solutions for high-performance computing. It is the only Russia-based company with five in-house-developed systems featured in the TOP500 list of the world's most powerful computers. T-Platforms' proficiency lies in designing turn-key integrated hardware/software solutions of any performance level, complete with system and third-party software and tailored to meet individual customer needs. The company offers a wide range of products and services for HPC environments and datacenters, including high-performance cluster systems and shared-memory Linux-based supercomputers, storage solutions, specialized software, rental of computer time and datacenter infrastructure, and other professional services at its Cluster Solutions Center.