August 13, 2009
Russian President Dmitry Medvedev thinks his country's supercomputing capabilities need a jump-start. In an address to Russia's Security Council in late July, Medvedev chided his fellow officials for the country's failure to invest in supercomputing and grid technologies, arguing that the lapse puts the nation's security and industrial competitiveness at risk. His speech began by laying out the case for these technologies:
It's no secret that the majority of the most developed and advanced nations are focusing on this. It is obvious that the large-scale use of high technology data processing increases the effects of research many times over, radically reduces the cost of designing the most advanced and complex types of products, naturally increases the quality of industrial products, and streamlines business processes. It is precisely for these reasons that the entire world is working on this. Any country that makes headway in relation to creating supercomputers has, of course, advantages in terms of competitiveness, increasing its defence capacities, and strengthening security.
Medvedev went on to complain that Russia ranks only 15th in the aggregate capacity of its supercomputers, noting that "476 out of the 500 supercomputing systems use computers manufactured in the United States of America." Although he didn't mention it, Russia's top system, a 71.3 teraflop (Linpack) HP machine at the Joint Supercomputing Center in Moscow, has less than 7 percent of the Linpack performance of the top system in the world, IBM's Roadrunner supercomputer. Even the top 50 systems of the CIS states (the former Soviet republics) currently have an aggregate Linpack performance of just 382 teraflops, or about one third the power of the single Roadrunner machine. Considering that Russia's 2008 GDP of $2.225 trillion (according to the CIA World Factbook) places it 8th in the world, the country is definitely underachieving in the HPC realm.
Medvedev also brought up the fact that commercial use of supercomputing in Russia is woefully behind the times:
[W]e have only extremely few aircraft (actually one airplane) created on a supercomputer, that is only one that exists in digital form. Everything else is done on Whatman’s drawing paper like in the 1920s and 30s using the old approaches. It’s obvious that here only a digital approach can have a breakthrough effect, lead to dramatic improvements in quality, and reduce the cost of the product.
If all of this sounds familiar, you are probably recalling similar speeches delivered by high-level government officials and industry stakeholders in the US, Europe, and Asia over the past several years. But the fact that this HPC cheerleading came from the head of state rather than just a high-level bureaucrat probably bodes well for Russia.
Unfortunately, Medvedev's speech didn't offer much in the way of solutions, except to suggest a general commitment to "invest in the production of supercomputers" and "stimulating demand in every possible way." It gets even fuzzier. It's not clear to what extent Russia wants to rely on foreign HPC technology versus developing its own. As it stands today, IBM, HP and SGI own a good chunk of the Russian HPC server market.
In an ITAR-TASS report in July, Secretary of the Russian Security Council Nikolai Patrushev expressed willingness to cooperate with the US and perhaps other countries on supercomputing technology, but hedged on how far those relationships could go. "[W]e are facing a task to use the existing experience, particularly that of other countries, as well as to create our own development base, and we will work on the issue," he said.
One element that has to be taken into account is the country's need to test its nuclear deterrent with supercomputer simulations. I imagine the Russians would get a bit squeamish about depending on systems or software developed in the West to support their nuclear weapons program. So don't expect to see IBM shipping Roadrunners to Moscow anytime soon.
Fortunately for Russia, the country does have some critical pieces of an HPC ecosystem already in place, the most important of which is a well-trained cadre of native mathematicians, computer scientists, and engineers. Second, there's T-Platforms, Russia's homegrown HPC vendor, which currently supplies about a third of the domestic market. T-Platforms' latest HPC blade offering, based on Intel Nehalem chips, is capable of scaling up to petascale-sized supercomputers, and I wouldn't be surprised to see such a deployment as early as 2010.
Posted by Michael Feldman - August 13, 2009 @ 5:46 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.