October 13, 2011
Back in 2009 the Australian government forked over $80 million to fund a critical part of its “Super Science” initiative. Much of that money went toward establishing iVEC’s Pawsey Centre Project. This massive undertaking, slated to come fully online in 2013, will provide new supercomputing facilities and expertise to support SKA (Square Kilometre Array) research and other high-end science.
The secondary goal of the Project is to demonstrate Australia’s ability to support HPC in order to bolster its bid to host the SKA, which is critically dependent on advanced computing resources.
Among the systems designed to support select research projects and the SKA bid is the University of Western Australia’s iVEC@UWA “big science” supercomputer. The machine is overseen by iVEC, a government-funded organization that encourages the adoption of high performance computing and provides access to supercomputing, large-scale data storage, and visualization resources. Much of the work is focused on a specific set of research areas, including radioastronomy, high energy physics, oil and gas discovery, and urban planning.
The SGI Fornax super, which is part of the Pawsey Centre Project, boasts 96 nodes, each with two six-core Xeon X5650s, an NVIDIA Tesla C2050 GPU, and 48 GB of RAM, giving it the muscle to handle the big data, big science problems being hurled its way by the radioastronomy and geosciences research camps.
According to a recent report, however, even though the system is churning away, it is serving as something of a testbed. As Richard Chirgwin reported, “The demands of ‘big science’ are so intensive, and the data sets so diverse across different communities, that even a ‘finished’ project is also a development platform for new techniques and applications.”
Chirgwin says that “Part of the problem posed by the huge datasets that Fornax users create is that different researchers will be asking different questions of the same, or similar, data.” Data movement, access, and finding ways to make high-end resources easier to use are proving to be challenges with such large and diverse datasets.
Pawsey Centre systems architect Guy Robinson explained some of the challenges to Chirgwin, noting that “the scientist isn’t rewarded for spending six months solving problems of data access issues that might only get him or her to the ‘real’ problem they’re trying to solve. They should be able to devote themselves to the problems in front of them, with the underlying computer facilities as invisible as possible.”
Full story at The Register