When an orbiting star strays too close to a galaxy's central supermassive black hole, it is eventually torn apart by immense gravitational forces, a phenomenon known as a "tidal disruption." Although black holes cannot be observed directly, since their gravity is so strong that not even light can escape, the shredded star produces a brief, detectable flare.
Astronomers at the University of Texas have made a series of remarkable discoveries using some of the most powerful supercomputers in existence.
First proposed in 1991, the Square Kilometer Array (SKA) project seeks to build and operate the world's largest radio telescope to peer into the deepest recesses of the cosmos. Rather than capturing visible light, the SKA will turn radio waves into images. The array will be 50 times more sensitive than any other radio telescope.
A new petascale supercomputer built to study the universe is one of the fastest calculating machines in the world, and certainly the fastest of its kind. The machine is part of ALMA, a new radio telescope claimed to be "the largest ground-based astronomical project in existence."
When brought online, the Square Kilometer Array radio telescope will generate an exabyte of data every day.
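To put that figure in perspective, a quick back-of-the-envelope conversion (using decimal units, 1 EB = 10^18 bytes) shows the sustained bandwidth an exabyte per day implies:

```python
# Illustrative arithmetic only: convert a daily data volume in exabytes
# to a sustained transfer rate in terabytes per second.
# Assumes decimal prefixes: 1 EB = 10**18 bytes, 1 TB = 10**12 bytes.

SECONDS_PER_DAY = 24 * 60 * 60       # 86,400 seconds
BYTES_PER_EXABYTE = 10**18
BYTES_PER_TERABYTE = 10**12

def exabytes_per_day_to_tb_per_s(eb_per_day: float) -> float:
    """Convert exabytes/day into a sustained rate in TB/s."""
    bytes_per_day = eb_per_day * BYTES_PER_EXABYTE
    return bytes_per_day / SECONDS_PER_DAY / BYTES_PER_TERABYTE

rate = exabytes_per_day_to_tb_per_s(1)
print(f"{rate:.1f} TB/s")  # roughly 11.6 TB/s, sustained around the clock
```

In other words, one exabyte per day works out to more than 11 terabytes every second, which is why the SKA's data pipeline is itself a supercomputing problem.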
The number of scientific instruments available to astronomy researchers has grown significantly in recent years, producing unprecedented amounts of data that require vast storage and processing capabilities. Canadian researchers are addressing this problem with a new solution that combines the best of grid and cloud computing, allowing them to reach their research goals more efficiently.
TeraGrid ’10, the fourth annual conference of the TeraGrid, took place last week in Pittsburgh, Pa. HPCwire will be running a series of articles highlighting the conference. The first in the series covers Gabrielle Allen’s keynote talk on Cactus, an open, collaborative software framework for numerical relativity.
Yale continues the build-out of its HPC infrastructure.
Online, at conferences, and in theory, manycore processors and accelerators such as GPUs and FPGAs are being viewed as the next big revolution in high performance computing. If they live up to their potential, these accelerators could someday transform how computational science is performed, delivering far more computing power and energy efficiency.