July 25, 2012
SYDNEY, Australia, July 24 -- IBM announced today that Victoria University of Wellington, on behalf of the Murchison Widefield Array (MWA) Consortium, has selected IBM systems technology to help scientists probe the origins of the universe.
The result of an international collaboration between 13 institutions from Australia, New Zealand, the U.S. and India, the MWA is a new type of radio telescope designed to capture low-frequency radio waves from deep space as well as the volatile atmospheric conditions of the Sun. The signals will be captured in a continuous stream by the telescope’s 4,096 dipole antennas positioned in the Australian Outback and processed by an IBM iDataPlex dx360 M3 computing cluster that will convert the radio waves into wide-field images of the sky of unprecedented clarity and detail.
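To make the signal-to-image step concrete, the following is a minimal toy sketch in Python/NumPy of the general principle behind interferometric imaging, not the MWA's actual real-time pipeline: the cross-correlated antenna signals (visibilities) sample the Fourier transform of the sky brightness, and gridding those samples followed by an inverse 2-D FFT yields a "dirty" image. The image size, source positions and sampling fraction below are arbitrary illustrative values.

```python
# Toy illustration only: simulate sparse Fourier-plane sampling of a model sky
# and recover a "dirty" image with an inverse 2-D FFT.
import numpy as np

N = 256                                  # image size in pixels (assumed)
sky = np.zeros((N, N))
sky[100, 140] = 1.0                      # a fake point source
sky[60, 80] = 0.5                        # a second, fainter source

# Full (u, v) coverage would recover the sky exactly; a real array samples only
# the baselines it has, so keep a small random fraction of the Fourier plane.
vis_full = np.fft.fftshift(np.fft.fft2(sky))
rng = np.random.default_rng(0)
mask = rng.random((N, N)) < 0.05         # pretend 5% of the plane is sampled
vis_sampled = vis_full * mask

# The inverse FFT of the sparsely sampled visibilities is the dirty image;
# deconvolution (e.g. CLEAN) would normally follow in a real pipeline.
dirty = np.abs(np.fft.ifft2(np.fft.ifftshift(vis_sampled)))
print("brightest dirty-image pixel:", np.unravel_index(dirty.argmax(), dirty.shape))
```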
The IBM iDataPlex cluster replaces MWA’s existing custom-made hardware systems and will enable greater flexibility and increased signal-processing capacity. At full data rate the cluster is expected to process approximately 50 terabytes of data per day at a speed of 8 gigabytes per second, the equivalent of over 2,000 digital songs per second, allowing scientists to study more of the sky faster than ever before and in greater detail.
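A quick back-of-the-envelope check of those figures (a hedged sketch; the roughly 4 MB-per-song assumption is ours, not from the release):

```python
# Rough sanity check of the quoted throughput figures; assumptions flagged inline.
peak_rate_gb_s = 8                      # quoted peak processing rate, GB/s
song_size_mb = 4                        # assumed size of a typical digital song (MP3)

songs_per_second = peak_rate_gb_s * 1000 / song_size_mb
print(f"~{songs_per_second:.0f} songs per second")        # ~2000, matching the release

daily_volume_tb = 50                    # quoted daily data volume at full rate
avg_rate_gb_s = daily_volume_tb * 1000 / 86_400
print(f"average rate implied by 50 TB/day: ~{avg_rate_gb_s:.2f} GB/s")  # well below the 8 GB/s peak
```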
“The MWA project is dependent on the massive computer power offered by IBM’s iDataPlex to create real-time wide-field images of the radio sky,” said Professor Steven Tingay, MWA Project Director from the International Centre for Radio Astronomy Research at Curtin University in Perth. “The combination of the MWA, IBM technology and the radio-quiet environment of the Murchison will allow us to search for the incredibly weak signals that come from the early stages in the evolution of the Universe, some 13 billion years ago.”
The ultimate goal of the revolutionary $51 million MWA telescope is to observe the early Universe, when stars and galaxies were first born. By detecting and studying the weak radio signals emitted when the Universe consisted only of a dark void of hydrogen gas (the cosmic Dark Age), scientists hope to understand how stars, planets and galaxies were formed. The telescope will also be used to study the Sun’s heliosphere during periods of strong solar activity and to observe time-varying astronomical objects such as pulsars.
"Victoria University was delighted to work with the IBM team to find a solution for the compute challenges of the MWA,” said Dr Melanie Johnston-Hollitt, Senior Lecturer in Physics, Victoria University of Wellington. “The IBM iDataPlex cluster provides an elegant resource to handle the processing and imaging requirements of the telescope, allowing us to do cutting-edge radio astronomy."
“IBM is delighted to have been selected by the MWA consortium in this significant global scientific endeavour,” said Glenn Wightwick, Chief Technologist, IBM Australia. “High-performance processing capabilities are essential to facilitating world-class science. The IBM iDataPlex cluster will be used to digitally process incoming signals and produce image data in a standard astronomical format, ready for use by scientists.”
The IBM iDataPlex cluster will be housed on-site at the Murchison Radio Observatory (MRO), around 700 km north of Perth, near the radio telescope antennas. A 10 Gbps communications link to Perth will allow the images to be transferred, stored and made available for research. The MRO site will also be the Australian location for a significant portion of the Square Kilometre Array (SKA), which will be the world's most powerful radio telescope and is being co-hosted by Australia and South Africa.
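As a rough sanity check (our arithmetic, not from the release), the quoted 10 Gbps link would comfortably accommodate even the full 50 terabytes per day the cluster is expected to handle at peak, were all of it shipped to Perth:

```python
# Back-of-the-envelope check: can a 10 Gbps link carry ~50 TB per day?
daily_volume_bits = 50e12 * 8           # 50 TB/day expressed in bits
required_gbps = daily_volume_bits / 86_400 / 1e9
print(f"sustained rate needed: ~{required_gbps:.1f} Gbps of the 10 Gbps available")  # ~4.6 Gbps
```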
The MWA project is led by the International Centre for Radio Astronomy Research at Curtin University and is one of three SKA precursor telescopes.
For more information on the MWA, please visit: http://www.mwatelescope.org
For more information on IBM iDataPlex, please visit: http://www-03.ibm.com/systems/info/x/idataplex/