September 05, 2011
Infinite demand for supercomputing resources has become the norm with a bevy of increasingly powerful applications limited only by the number of available cores. However, according to some climate researchers in Australia, the country’s progress toward climate modeling goals is being threatened by a lack of additional high performance computing resources.
As TechWorld Australia pointed out, Australia has been a hotbed of new HPC developments over the last couple of years as new projects demand vast increases in current resources. Although spending on HPC is ongoing, climate researchers in particular are feeling the pinch as they seek new outlets to run their complex simulations.
The author of a recent piece exploring this conflict between supply and demand notes that, “Demand for supercomputer access is not only coming from the climate science community. Australia’s bid for the $2.1 billion Square Kilometer Array (SKA) radio telescope is a major driver behind the creation of the Pawsey high performance computing center,” even though these extended resources will still rely on grid computing to obtain additional power.
Other Australian projects supporting scientific research, including Swinburne University’s plan for a hybrid supercomputer from SGI, have extended the country’s resources, but not enough, according to some in the climate change research community.
According to Dave Griggs, director of the Monash Sustainability Institute and CEO of ClimateWorks in Australia, the software piece of the climate modeling puzzle is in place, but there are significant strains on current resources. He claims that this will threaten new research and interfere with Australia’s position as a climate research leader.
As Griggs told Australia’s TechWorld, “We have an infinite demand for supercomputing and the quality of the predictions you can make are a combination of how good you are at the supercomputing you have got to run on it.”
Griggs went on to note that “you need to be competitive in both of those things. There is no point in putting bad science into the best computer in the world. Equally, there is no point in putting good science into a computer that is not competitive."
Full story at TechWorld Australia