August 13, 2009
Russian President Dmitry Medvedev thinks his country's supercomputing capabilities need a jump-start. In an address to Russia's Security Council in late July, Medvedev chided his fellow bureaucrats for the country's failure to invest in supercomputing and grid technologies, a lapse he said puts the nation's security and industrial competitiveness at risk. His speech began by laying out the case for these technologies:
It's no secret that the majority of the most developed and advanced nations are focusing on this. It is obvious that the large-scale use of high technology data processing increases the effects of research many times over, radically reduces the cost of designing the most advanced and complex types of products, naturally increases the quality of industrial products, and streamlines business processes. It is precisely for these reasons that the entire world is working on this. Any country that makes headway in relation to creating supercomputers has, of course, advantages in terms of competitiveness, increasing its defence capacities, and strengthening security.
Medvedev went on to complain that Russia ranks only 15th in the aggregate capacity of its supercomputers, noting that "476 out of the 500 supercomputing systems use computers manufactured in the United States of America." Although he didn't mention it, Russia's top system, a 71.3 teraflop (Linpack) HP machine at the Joint Supercomputing Center in Moscow, has less than 7 percent of the Linpack performance of the top system in the world, IBM's Roadrunner supercomputer. Even the top 50 systems of the CIS states (the former Soviet Republics) currently have an aggregate Linpack performance of just 382 teraflops, or about one third the power of the single Roadrunner machine. Considering that Russia's 2008 GDP of $2.225 trillion (according to the CIA World Factbook) places it 8th in the world, the country is definitely underachieving in the HPC realm.
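For readers who want to check those comparisons, here is a minimal back-of-the-envelope sketch in Python. It assumes Roadrunner's June 2009 Top500 Linpack (Rmax) figure of roughly 1,105 teraflops; that number does not appear in Medvedev's speech and is supplied here only for illustration.

    # Rough check of the Linpack comparisons above.
    roadrunner_tflops = 1105.0   # assumed June 2009 Top500 Rmax for Roadrunner
    russia_top_tflops = 71.3     # HP machine at the Joint Supercomputing Center, Moscow
    cis_top50_tflops = 382.0     # aggregate of the CIS Top 50 list

    # Russia's top system as a share of Roadrunner: about 6.5%, i.e., under 7 percent
    print(f"Russia's top system vs. Roadrunner: {russia_top_tflops / roadrunner_tflops:.1%}")

    # CIS Top 50 aggregate as a share of Roadrunner: about 34.6%, i.e., roughly one third
    print(f"CIS Top 50 aggregate vs. Roadrunner: {cis_top50_tflops / roadrunner_tflops:.1%}")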
Medvedev also brought up the fact that commercial use of supercomputing in Russia is woefully behind the times:
[W]e have only extremely few aircraft (actually one airplane) created on a supercomputer, that is only one that exists in digital form. Everything else is done on Whatman’s drawing paper like in the 1920s and 30s using the old approaches. It’s obvious that here only a digital approach can have a breakthrough effect, lead to dramatic improvements in quality, and reduce the cost of the product.
If all of this sounds familiar, you are probably recalling similar speeches delivered by high-level government officials and industry stakeholders in the US, Europe, and Asia over the past several years. But the fact that this HPC cheerleading came from the head of state rather than just a high-level bureaucrat probably bodes well for Russia.
Unfortunately, Medvedev's speech didn't offer much in the way of solutions, except to suggest a general commitment to "invest in the production of supercomputers" and "stimulating demand in every possible way." It gets even fuzzier. It's not clear to what extent Russia wants to rely on foreign HPC technology versus developing its own. As it stands today, IBM, HP and SGI own a good chunk of the Russian HPC server market.
In an ITAR-TASS report in July, Secretary of the Russian Security Council Nikolai Patrushev expressed willingness to cooperate with the US and perhaps other countries on supercomputing technology, but hedged on how far those relationships could go. "[W]e are facing a task to use the existing experience, particularly that of other countries, as well as to create our own development base, and we will work on the issue," he said.
One element that has to be taken into account is the country's need to test its nuclear deterrent with supercomputers. I imagine the Russians would get a bit squeamish about depending upon systems or software developed in the West to support their nuclear weapons programs. So don't expect to see IBM shipping Roadrunners to Moscow anytime soon.
Fortunately for Russia, the country does have some critical pieces of an HPC ecosystem already in place, the most important of which is a well-trained cadre of native mathematicians, computer scientists, and engineers. Second, there's T-Platforms, Russia's homegrown HPC vendor, which currently supplies about a third of the domestic market. T-Platforms' latest HPC blade offering, based on Intel Nehalem chips, is capable of scaling up to petascale-sized supercomputers, and I wouldn't be surprised to see such a deployment as early as 2010.
Posted by Michael Feldman - August 13, 2009 @ 5:46 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.