July 07, 2011
Paul Valdes, a climatology researcher at the University of Bristol, recently argued that existing climate models have been unable to simulate abrupt climate changes because they oversimplify the factors involved. This means we do not fully understand past climate-shaping events, and, of more immediate concern, he says it could leave us unable to predict massive changes to come.
In his editorial in Nature Geoscience, Valdes argues that reconstructing historical climate shifts and the conditions that sparked them is difficult because of the number of factors involved, but that if we are to react to coming changes, we need more sophisticated models to understand these events.
Historical events like the Palaeocene-Eocene Thermal Maximum, a period marked by rapid warming, could explain (and warn us of) future rapid climate changes, but current models cannot simulate the climate that preceded the shift.
As a discussion of Valdes' argument in Ars Technica pointed out, “Although climate models have been accused of being overly sensitive to changes in greenhouse gasses, it seems that in some cases, the models are too stable, requiring larger perturbations to cause the actual changes seen in the past.”
Valdes argues that, because of this over-stability, the models underestimate the possibility of vast, rapid climate shifts, which might lead to “a false sense of security.”
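To see why over-stability matters, consider a toy bistable system (a minimal sketch of our own for illustration, not one of the models Valdes reviews). The state x sits in one of two basins, k sets the strength of the stabilising feedbacks, and F is an external forcing ramped up slowly. The stronger the feedbacks, the larger the forcing needed to tip the system, which is exactly the concern: an over-stable model demands an unrealistically large perturbation to reproduce an abrupt shift seen in the past.

# Toy bistable "climate": dx/dt = k*(x - x**3) + F.
# Two stable states at x = -1 and x = +1 when F = 0; the forcing
# needed to tip from the cold basin to the warm one grows with k.

def tipping_forcing(k, dt=0.01, steps=200_000):
    """Ramp F slowly upward and report the value at which x tips."""
    x, F = -1.0, 0.0
    for _ in range(steps):
        x += dt * (k * (x - x**3) + F)  # forward Euler step
        if x > 0.5:                     # crossed into the warm basin
            return F
        F += 1e-5                       # quasi-static forcing ramp
    return None

for k in (0.5, 1.0, 2.0):               # increasingly "stable" variants
    print(f"k = {k}: tips at F ≈ {tipping_forcing(k):.3f}")

Doubling k roughly doubles the forcing required to tip the system, so a model whose feedbacks are tuned too strongly will sit stubbornly in one state while the real climate lurches into another.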
Full story at Ars Technica
In a recent solicitation, the NSF laid out its needs for furthering the nation's scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
The Xeon Phi coprocessor might be the new kid on the high performance block, but of all the early kickers of the Intel tires, the Texas Advanced Computing Center (TACC) got the first real jab with its new top-ten Stampede system. We talk with the center's Karl Schultz about the challenges of programming for Phi, and more specifically, the optimization...
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges, and opportunities, afforded by Big Data. Before anyone can put these extraordinary data repositories to use, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by analyst firm Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
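Amdahl's law makes the 120-core figure easy to interpret: if a fraction p of a code's runtime parallelizes, its speedup on n cores is at most 1 / ((1 - p) + p / n). A minimal sketch (the parallel fractions below are illustrative, not numbers from the Intersect360 study):

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n) for parallel fraction p.
# Even a small serial fraction caps the useful core count, which is why
# many codes see little benefit beyond roughly a hundred cores.

def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.99, 0.999):
    for n in (120, 1_000, 10_000):
        print(f"parallel fraction {p:.3f}, {n:>6} cores: "
              f"speedup {speedup(p, n):7.1f} (ceiling {1 / (1 - p):.0f})")

At 99% parallel, 120 cores already deliver barely half the theoretical ceiling of 100x, so pushing to thousands of cores requires driving the serial fraction down further still.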
In this demonstration of the SGI DMF ZeroWatt disk solution, SGI CTO Dr. Eng Lim Goh discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.