November 12, 2012
MOSCOW, Nov. 12 – T-Platforms, an international supercomputer developer and supplier of a full range of solutions and services for high-performance computing, announces the completion of the first Russian-built HPC system delivered to the State University of New York at Stony Brook (SBU).
Founded in 1957, SBU is a member of the Association of American Universities and one of four university centers in the state of New York. Today, the university appears on the U.S. News & World Report list of Top 100 public universities, and is in the Top 25 of Kiplinger's list of 100 best public universities and colleges. SBU is one of 10 universities in the U.S. to have received an award from the U.S. National Science Foundation for outstanding achievements in the integration of research and education. The university faculty includes Nobel Prize winners.
The main user of the computing center’s resources is Professor Oganov's laboratory (The Oganov Lab), which specializes in theoretical mineralogy and materials science research. Today, the lab is actively working on the development of new superhard materials, materials with special electronic and optical properties, and materials for supercapacitors and batteries to store energy. The T-Platforms HPC system will run in-house software, developed by Prof. Oganov, to predict the structures of new materials with specific target properties. The lab has already created a carbon-based structure similar in hardness to diamond, and has also discovered new forms of boron, sodium and a number of minerals in the Earth’s mantle.
“We have developed a unique method to determine the optimal and stable structures of materials that have never existed before. The algorithm we designed carefully mimics the evolutionary process found in nature,” comments Professor Artem Oganov of SBU. “Two approaches might be used when creating new materials. The first is to search all possible combinations of atoms within a crystal structure. The problem is that the number of variations is astronomical: a structure consisting of just 10 atoms would produce on the order of 100 billion structural variations, and it would take hundreds of simulation years to analyze them all. This approach is impractical, so we developed an evolutionary method, which requires much less computational effort and shows remarkable reliability. Still, large computing facilities are needed to perform such simulations and find optimal candidates for a new material. The T-Platforms computing system has demonstrated the highest levels of performance in carrying out tasks using this method, and we are planning to expand the system in the near future.”
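The evolutionary approach Prof. Oganov describes — keeping a population of candidate structures, discarding the unstable ones, and breeding new candidates from the survivors — can be illustrated with a toy sketch. Everything below is invented for illustration: the lab's real code scores candidate crystal structures with quantum-mechanical energies, whereas this stand-in `energy` function merely penalizes coordinates that stray from integer lattice positions.

```python
import random

def energy(structure):
    # Toy "energy": penalty for coordinates off integer lattice sites.
    # Lower is better (more stable). The real method would compute a
    # quantum-mechanical energy for each candidate structure.
    return sum((x - round(x)) ** 2 for x in structure)

def random_structure(n_atoms):
    # Random candidate: n_atoms one-dimensional coordinates in [0, 10).
    return [random.uniform(0, 10) for _ in range(n_atoms)]

def mutate(structure, scale=0.1):
    # Small Gaussian displacement of every atom.
    return [x + random.gauss(0, scale) for x in structure]

def crossover(a, b):
    # Combine two parent structures at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(n_atoms=10, pop_size=30, generations=50, seed=0):
    random.seed(seed)
    pop = [random_structure(n_atoms) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 2]   # keep the most stable half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    return min(pop, key=energy)

best = evolve()
print(f"best toy energy: {energy(best):.4f}")
```

Because the survivors are carried over unchanged each generation, the best candidate never gets worse — which is why far fewer evaluations are needed than the brute-force enumeration of all ~100 billion variants the quote mentions.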
T-Platforms’ scale-out V-Class system faced serious competition from leading global server manufacturers. SBU chose it based on an attractive combination of compute density, power efficiency, sustained performance and integrated chassis-level management. T-Platforms was also the only participant to include in its tender bid a complete range of customer services to integrate the supercomputer into the university infrastructure and test it under different loads, in order to fine-tune the system software to specific research needs.
T-Platforms took a comprehensive approach to the design and commissioning of the computing system. It is based on T-Platforms’ V5000 enclosure, populated with 10 V205 compute nodes equipped with AMD Opteron 6238 processors, plus a management node, all running the CentOS operating system. The system's peak performance is 2.5 TFlops, and sustained Linpack performance exceeded 80% of peak. To meet contract obligations, T-Platforms also fine-tuned the VASP quantum mechanics and molecular dynamics software package, designed to enable modeling of atomic-molecular and electron-nuclear systems. As a result, software performance increased by 27%.
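The quoted 2.5 TFlops peak is consistent with a back-of-the-envelope check, assuming (these details are not stated in the release) dual-socket V205 nodes, 12 cores per Opteron 6238 at 2.6 GHz, and 4 double-precision FLOPs per core per cycle:

```python
# Sanity check of the 2.5 TFlops peak figure.
# Assumptions (not stated in the announcement): dual-socket nodes,
# 12-core Opteron 6238 CPUs at 2.6 GHz, 4 DP FLOPs/core/cycle.
nodes = 10
sockets_per_node = 2
cores_per_socket = 12
clock_ghz = 2.6
flops_per_cycle = 4

peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
print(f"peak: {peak_gflops / 1000:.2f} TFlops")   # ≈ 2.50 TFlops

# "Sustained Linpack performance exceeded 80% of peak" implies at least:
sustained_floor_gflops = 0.8 * peak_gflops
print(f"Linpack floor: {sustained_floor_gflops / 1000:.2f} TFlops")
```

An 80%-of-peak Linpack efficiency is a plausible figure for a small CPU-only cluster of this era, where the interconnect and memory subsystem impose relatively modest overheads at 10-node scale.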
“Western markets are of strategic importance to T-Platforms. Our German office is engaged in system and software research activities with leading supercomputer centers, including Leibniz Rechenzentrum, the Juelich Supercomputing Centre, and CSC. Now, with the SBU deal in place, we have taken our first step in bringing Russian supercomputer technologies to the U.S. market,” says Vsevolod Opanasenko, CEO of T-Platforms. “Our expertise gave us the edge to win the tender and to implement this landmark project. We look forward to closer cooperation between T-Platforms and SBU, and we hope it will pave the way toward broader engagement with U.S. academic and scientific organizations.”
T-Platforms is an international supercomputer developer and supplier of a full range of solutions and services for high-performance computing. T-Platforms was founded in 2002 and today has headquarters in Moscow (Russia), and regional headquarters in Hanover (Germany), Kiev (Ukraine), Taipei (Taiwan) and Hong Kong (China). The company has implemented over 200 complex projects, six of which were included in the Top500 list of the world’s most powerful systems. T-Platforms owns patents for a range of supercomputer technologies and electronics components. T-Platforms' solutions are used for fundamental and applied research in various branches of science, including biotechnology, nuclear physics, chemistry and mathematics, and also for the solution of resource-intensive problems in industry, computer graphics and many other areas. In 2011, T-Platforms CEO Vsevolod Opanasenko was acknowledged as one of the 12 most famous and respected persons in the world HPC community, according to the HPCwire Internet portal.