February 03, 2011
"Everybody talks about the weather, but nobody does anything about it." That quote is over a 100 years old, but if you swap in "climate change" for "the weather" you have a pretty good update for the 21st century. And if you've been following the news lately or have just stepped outside, you may have noticed that the climate is getting a little, shall we say, unpredictable.
Which of course brings me to high performance computing. Putting an HPC spin on the original quote: it seems like a lot of supercomputing cycles are being devoted to modeling climate change, but not nearly as many to modeling the solutions.
Fortunately though, some are. And there are plenty of solutions out there in need of big-time computer modeling. Among the most talked about solutions are fusion energy, solar power, biofuels, advanced battery technology, fuel cells, and carbon sequestration.
Of these, carbon sequestration -- aka carbon capture and storage -- doesn't seem to get as much press as the others. And that's too bad. Any rational plan to deal with climate change has to include removing the excess carbon dioxide we've already pumped, and are continuing to pump, into the atmosphere. Carbon sequestration has the advantage of offering a workable solution even if countries fail to cap their carbon emissions. And so far, that seems to be the most likely scenario.
There are lots of ways to capture and store carbon: stimulating uptake by plants via photosynthesis, creating a soil conditioner known as biochar, creating inert carbonates by reacting the CO2 with the appropriate minerals, and storing CO2 in the ground. They each have their own advantages and disadvantages, but they all share a common unknown: How will this man-made carbon cycling affect the environment? After all, the idea is not to substitute one natural disaster for another.
One of the most promising (read: least expensive) methods of carbon sequestration is to simply pump the CO2 from fossil fuel-burning power plants into geologically stable formations like basalt, depleted oil and gas reservoirs, and saline aquifers. Saline aquifers are particularly attractive, since they are present over wide geographic areas and have very large capacities for CO2 storage.
To study the saline solution (so to speak), the hard-charging scientists at Berkeley Lab's Center for Computational Sciences and Engineering and the National Energy Research Scientific Computing Center (NERSC) have developed an industrial-strength simulation code to model CO2 injection into these underground saline reservoirs. A recent article published by Berkeley Lab describes the work in some detail.
Injecting CO2 into brine seems simple enough, but the behavior below the surface becomes very complex. Dissolving the gas in the liquid changes the brine's behavior, in this case setting up convection currents, which in turn speed up the CO2's diffusion. Of course, you want to make sure the CO2 stays put over thousands of years, that is, that it doesn't vent back into the air, leak into aquifers used for drinking water, or create other dangerous side effects.
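To get a feel for why that convection matters, here's a minimal sketch in Python -- emphatically not the Berkeley code -- that treats the brine column in one dimension and stands in for convective mixing with a larger "effective" diffusivity. The function name, the diffusivity values, and the tenfold enhancement factor are all illustrative assumptions:

    import numpy as np

    def dissolved_mass(D, nz=200, depth=1.0, t_end=5.0):
        # Explicit finite-difference diffusion of CO2 into a 1D brine column
        # (nondimensional units). The top cell is held at saturation (c = 1),
        # mimicking brine in contact with the injected CO2 plume; the bottom
        # boundary stays at c = 0.
        dz = depth / nz
        dt = 0.4 * dz * dz / D          # explicit stability limit is 0.5
        c = np.zeros(nz)
        c[0] = 1.0                      # saturated at the CO2/brine interface
        for _ in range(int(t_end / dt)):
            # update interior cells only, so both boundaries stay fixed
            c[1:-1] += dt * D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dz**2
        return c.sum() * dz             # total dissolved CO2 in the column

    D_molecular  = 0.01                 # molecular diffusivity (illustrative)
    D_convective = 0.10                 # assumed 10x boost from convective mixing

    print("diffusion only :", dissolved_mass(D_molecular))
    print("with convection:", dissolved_mass(D_convective))

Since diffusive uptake grows like the square root of diffusivity times time, the "convective" column ends up dissolving roughly three times as much CO2 over the same interval -- the kind of effect the real 3D simulations resolve directly rather than assume.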
The new software developed by the Berkeley team provides a much finer-grained model than a traditional geological simulation code can, and is able to generate a 3D model of the CO2 in solution over time. From the Berkeley Lab writeup:
The code combines a computing technique called adaptive mesh refinement (AMR) with high-performance parallel computing to create high-resolution simulations. The team's simulations were performed at NERSC using 2 million processor-hours and running on up to 2,048 cores simultaneously on a Cray XT4 system named Franklin.
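Adaptive mesh refinement is easier to picture with a toy example. The sketch below is my own illustration, not the team's code: it refines a one-dimensional grid wherever the concentration jump between neighboring cells is large, so fine cells cluster around a sharp CO2 front while the rest of the domain stays coarse. The front shape and threshold are made up for the demonstration:

    import math

    def refine(cells, values, threshold):
        # cells: list of (left_edge, width); values: concentration at centers.
        # Split any cell whose jump to its right-hand neighbor is too large.
        out = []
        for i, (x, w) in enumerate(cells):
            jump = abs(values[i + 1] - values[i]) if i + 1 < len(values) else 0.0
            if jump > threshold:
                out += [(x, w / 2), (x + w / 2, w / 2)]   # split into two halves
            else:
                out.append((x, w))                        # keep the coarse cell
        return out

    def front(x):
        # An idealized sharp CO2 concentration front centered at x = 0.5.
        return 1.0 / (1.0 + math.exp(40.0 * (x - 0.5)))

    cells = [(i / 16.0, 1 / 16.0) for i in range(16)]     # coarse 16-cell grid
    for _ in range(3):                                    # three refinement passes
        values = [front(x + w / 2) for x, w in cells]
        cells = refine(cells, values, 0.1)

    print(len(cells), "cells, finest width:", min(w for _, w in cells))

After three passes the grid is about eight times finer near the front than away from it, which is the whole trick: resolution where the physics demands it, savings everywhere else.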
Even with that many cores and that much computer time, the initial simulations were fairly modest in size, resolving features only at the scale of meters. The eventual goal is to be able to use the physical characteristics of a particular aquifer to predict how much CO2 it can accommodate.
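For a sense of what "how much CO2 it can accommodate" means in practice, a common back-of-the-envelope screening estimate just multiplies reservoir volume by porosity, CO2 density at depth, and a storage-efficiency factor. Every input below is an assumed example value, not data from any real aquifer:

    # Screening estimate of saline aquifer storage capacity (assumed inputs).
    area       = 100e6   # aquifer area, m^2 (10 km x 10 km, assumed)
    thickness  = 50.0    # net formation thickness, m (assumed)
    porosity   = 0.15    # pore-space fraction (assumed)
    rho_co2    = 700.0   # supercritical CO2 density at depth, kg/m^3 (typical order)
    efficiency = 0.02    # fraction of pore space CO2 can actually occupy

    capacity_kg = area * thickness * porosity * rho_co2 * efficiency
    print(f"~{capacity_kg / 1e9:.1f} million tonnes of CO2")

Simulations like the Berkeley team's are, in effect, a way to replace that crude efficiency factor with actual physics.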
The article says the code could also be adapted to help geologists more accurately track and predict the migration of hazardous wastes underground and, get this, "to recover more oil from existing wells." Sigh.
Posted by Michael Feldman - February 03, 2011 @ 7:33 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.