April 25, 2011
April 25 -- While reports vary, some estimate the total cost of Japan's March 11, 2011, earthquake and tsunami at 25 trillion yen, or 330 billion U.S. dollars, which would make it the most costly natural disaster on record -- more than three times the cost of the second most expensive, the 1995 earthquake that struck Kobe, Japan. More than 26,000 people are dead or missing, an estimated 400,000 are homeless, and nearly a quarter of Japan's landscape has been altered.
The process of loss assessment, clean-up and rebuilding has begun. In the month following the original magnitude 9.0 event, 408 aftershocks of magnitude 5.0 or greater occurred, and aftershocks are expected to continue for as long as ten years. Progress, albeit slow at first, is stalled or even reversed with each new tremor.
Damage to Japan's nuclear power facilities may have an impact on the ocean and atmosphere far beyond its shores. A fractured power grid and rolling blackouts have adversely affected essential services that rely on digital resources. Current predictions estimate a roughly 9 GW national power deficiency this coming summer as a result of the earthquake and the damage to the Fukushima Daiichi facility; this shortfall will force prioritization of essential services such as health care, security, transportation and education, and of basic services like air conditioning and elevators in the skyscrapers of metropolitan Tokyo and elsewhere. Some think it could take years before all of Japan's computing resources are back online, and the recovery may be prolonged as spending is prioritized for more urgent needs.
In addition to the humanitarian crisis, Japan's industrial and research communities have been affected, with global consequences. Many of the products we use in our daily lives come from factories that were destroyed. Japan's intellectual contribution to the global research community has been interrupted as the systems many researchers relied on to do their work were demolished. Much of the collaboration between Japanese and U.S. research groups, across all domains of science, has ground to a halt.
Japan, a very important U.S. economic partner, clearly needs costly short- and long-term assistance. Fluctuations in its economy have a noticeable impact on the U.S. in a variety of ways. Yet, with the recent threat of a government shutdown, the U.S. is dealing with its own financial crisis, making it difficult for Americans to help.
What can we do?
The National Science Foundation's (NSF) TeraGrid is the world's most comprehensive cyberinfrastructure in support of open scientific research. The people who support and use this resource form an unparalleled, multidisciplinary fraternity of innovators and problem solvers. Some have offered solutions that will help in the short term, and all recognize the need for a more coordinated long-term effort. Following are a few ways the TeraGrid community has begun to help -- gestures that have minimal impact on the U.S. research community while proving beneficial to researchers in Japan in the wake of this global tragedy. Hopefully, these examples will inspire additional innovation:
Just recently, cycles on Lonestar4, a system at the Texas Advanced Computing Center (TACC), were provided to researchers from the University of Tokyo and other Japanese institutions to model the March 2011 earthquake and tsunami, as well as the routes taken by radioactive material from the Fukushima Daiichi nuclear plant as it dispersed in the ocean and atmosphere.
Recognizing TACC's impact, Dell contributed technology to further expand the organization's efforts to support emergency response. The TACC and Dell teams have since worked to bring together U.S. and Japanese universities in the wake of the earthquake and tsunami.
TeraGrid Forum Chair John Towns is pleased with the immediate response from TeraGrid partners so far, and hopes to see more. "We will work together to develop a more organized and integrated plan to assist Japanese researchers while minimizing the impact to the resources needed by the U.S. research community," he said. "All requests for TeraGrid resources and services are received via TeraGrid's Partnerships Online Proposal System (POPS). Urgent requests are always considered separate and apart from our regular quarterly process," he added.
The NSF encourages the community to apply for funds that will enable more support through its RAPID grant program. RAPID grants are typically around $50,000, with up to $200,000 available for the most relevant projects. The program funds urgent proposals that address the availability of, or access to, data, facilities, or specialized equipment, including quick-response research on natural or anthropogenic disasters and similar unanticipated events. Applications will be accepted via NSF FastLane through April 29, 2011.
"This isn't the first time our TeraGrid family took the initiative to help in a crisis," said NSF's Barry Schneider, TeraGrid program director. "Hopefully their efforts will help Japanese researchers return to some sense of normality, allow the world to gain a better understanding of earthquakes and tsunamis in general, and prevent future loss. It's a great example of how the U.S. investment in science contributes to global scientific, social, and economic progress," he added.
For more information about TeraGrid, visit www.teragrid.org.
Source: Elizabeth Leake, TeraGrid