March 20, 2009
A couple of random items this week connected only by the inscrutable nature of research funding.
Europe Joins the Petaflop Club
It looks like Europe -- Germany in particular -- will get its first petaflop supercomputer this year with the installation of a new Blue Gene/P. The Juelich Research Center announced that it is partnering with the German Gauss Centre for Supercomputing to procure the system, which is set to boot up around the middle of the year. The 2.2 megawatt machine will be the first Blue Gene to employ water cooling technology, enabling a "91 percent reduction in air conditioning."
If the Germans had installed this last year, say before Los Alamos' Roadrunner came online, it would have been huge news. In fact, it would also have precipitated an Earth Simulator-type panic in the US HPC community. But the reality is that Europe always seems to run 12 to 18 months behind the US in the deployment of top systems.
The way I see it, there's no particular reason Europe has to play catch-up to America in regard to supercomputing prowess. With a 2008 GDP of around $18.9 trillion, the 27-member European Union (EU) actually has a larger capital base from which to draw than the US, which has a 2008 GDP of $14.3 trillion. And it's not like tax rates are particularly low in Europe or the citizenry doesn't support science and technology. If the EU funded PRACE -- the Partnership for Advanced Computing in Europe -- to the extent the US funds HPC in the Department of Energy, we'd see a lot more parity in supercomputing, not to mention a lot more supercomputing in general.
I'm certainly no expert on EU government policy, but the main problem appears to be that the governing bodies are only weakly centralized, so each nation tends to act more in its own self-interest than for the greater good of the EU. Of course, the EU, which was formed in 1993, is a lot younger than the US. And cultural differences in Europe are more starkly defined than in the US. Even so, if taxpayers from Mississippi and Massachusetts can build supercomputers for Tennessee and New Mexico, surely the Europeans can do the equivalent.
Hug a Scientist
While European scientists may envy their HPC-laden US counterparts, all is not joy on this side of the pond. In the New York Times this week, Stanford University professor Stephen Quake writes about what life is like for scientists working in the trenches of the modern research university. Quake is a biophysicist and part-time entrepreneur whose interests "lie at the nexus of physics, biology and biotechnology."
Quake describes how the "business" of science has become a central facet of research these days. Instead of devoting their lives to teaching and research, professors must now spend a good chunk of their time gathering funding for their work:
When a university hires a professor, they typically agree to provide a start-up package to support that professor's research over the first few years, after which the professor must seek external funding. This funding is needed to buy research supplies, pay stipends and tuition for graduate students, and even to support the salary of the faculty member. In fact, the university rarely pays the full salary of the professor — depending on the department, the professor must find between 25 percent and 75 percent of his or her salary from outside grants.
Quake notes that at Stanford, despite the school's huge endowment income and high tuition rates, money from outside grants is the single largest source of university funds. Since grant writing is performed by professors, the faculty ends up as the de facto marketing department for the university. What does grant writing have to do with science? Not much, says Quake:
Science at its most interesting is provocative, surprising, counter-intuitive and difficult to plan — and those are very difficult values to institutionalize in an organization or bureaucracy of any size. I have seen my own grant proposals get chewed up and rejected with comments like "typically bold, but wildly ambitious," and wondered why it is wrong to be ambitious in one's research — but perhaps that is a conclusion fully consistent with science by committee.
And you thought those profs had such cushy jobs.
Posted by Michael Feldman - March 20, 2009 @ 11:27 AM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.