December 10, 2009
This week Cray announced an Exascale Research Initiative, in which the supercomputer maker will team with a number of European HPC groups to research and develop technologies to support exaflop computing. This mirrors a June announcement by IBM of an exascale research center in Ireland. No big surprises here. Everyone expects Cray and IBM to be pushing the exascale envelope.
But when it comes to talking about exascale applications, I wonder why the prospect of developing more accurate climate models and accelerating energy research is being used as a rationale for why we need such systems. In the Cray press release this week, company CEO Peter Ungaro stated: "We know there are scientific breakthroughs in important areas such as new energy sources and global climate change that are waiting for exascale performance, and we are working hard on building next-generation supercomputers that will be capable of it." It is certainly not the first time the selling of exascale has been linked with climate and energy research, as even a cursory Google search will demonstrate.
Surely I'm not the only one who sees the cognitive disconnect here. The first sustained exaflop machines aren't expected to boot up until the end of the next decade. I hope we're not counting on "scientific breakthroughs" in 2019 to solve our 2009 energy and climate crisis. In case you haven't picked up a newspaper in the last five years or so, a consensus has formed that we're already more than fashionably late to the global housewarming party, the recent "Climategate" dust-up notwithstanding.
A February 2009 article in Scientific American warns that "the risk of catastrophic climate change is getting worse," according to a recent study by the United Nations Intergovernmental Panel on Climate Change (IPCC). There's a real possibility that it's already too late to reverse some of the damage resulting from rising sea levels, ocean acidification, and more extreme weather patterns. Quoting Stanford University climatologist Stephen Schneider from the Scientific American piece: "We've dawdled, and if we dawdle more it will get even worse. It's time to move." Notice he didn't say: "Let's run the numbers again with more fidelity and see what gives."
Likewise, relying on exascale computing to help with the development of non-carbon-based energy sources seems like a doomed strategy. If we're not well on our way to kicking the oil and gas habit by the end of the next decade, I can't imagine some amped-up simulation of wind turbines is going to save us 10 years hence.
It's disheartening to realize how long we've actually known about this problem compared to how little we've done. Watching a several-year-old rerun of "The West Wing" the other day, I caught a discussion of global warming that was depressingly similar to the ones we hear today. Let's face it: there are all sorts of low-tech approaches (e.g., conservation, electric vehicles, carbon taxing, etc.) that require nary a FLOP of computing power, but will do a lot to put us on the road to climate redemption. For the past 10 years, the lack of action wasn't related to technological shortcomings, just a lack of political will.
Part of the problem has to be the way we treat the climate and energy research itself, as if it's some sort of lab experiment divorced from reality. We certainly don't demand the same level of scientific scrutiny about decisions related to our personal well-being.
Let's say 9 out of 10 doctors told you that you had a heart condition that will incapacitate you (if not kill you) in ten years, adding that the condition can be remedied by changing your lifestyle. The lifestyle changes would be onerous, but nothing that you wouldn't be able to adapt to. Would you a) demand better proof of the heart condition from the nine doctors in agreement, b) wait for technology that would allow you to eat deep-fried Twinkies without the deleterious side effects, or c) suck it up? Only a fool would choose a or b. Yet, so far, those are the two types of options we've chosen in response to our global crisis.
Don't get me wrong. We should certainly continue to employ cutting-edge HPC to drive climate and energy research, from now until forever. The payoff from fusion research alone would be worth it. But to peg exascale computing as a technological linchpin for our current predicament seems completely misplaced. For the time being we're going to have to make do with our teraflops and petaflops, and hope that when exaflop systems come online we'll still be around for yet grander challenges.
Posted by Michael Feldman - December 10, 2009 @ 2:04 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.