January 09, 2013
A new petascale supercomputer built to study the universe is one of the fastest calculating machines in the world, and certainly the fastest of its kind. The supercomputer is part of ALMA, a new radio telescope that is claimed to be "the largest ground-based astronomical project in existence."
ALMA, which stands for Atacama Large Millimeter/submillimeter Array, is an international project whose partners span Europe (European Southern Observatory, Laboratoire d'Astrophysique de Bordeaux), North America (National Radio Astronomy Observatory), and Japan (National Astronomical Observatory of Japan). The Joint ALMA Observatory, based in Santiago, Chile, manages the project.
The ALMA radio telescope is a collection of 66 high-precision antennas (parabolic dishes that act as receivers) spread across the 5,000-meter-high Chajnantor plateau in northern Chile. The dry air and high elevation make it a particularly suitable spot for capturing signals from space in the millimeter and sub-millimeter radio spectrum. At those wavelengths, the antennas can detect the so-called "cool Universe": molecular gas and dust, as well as residual radiation from the Big Bang.
The antennas can be arranged in a variety of configurations, with the distance between them varying from 150 meters to 16 kilometers. That gives the ALMA telescope something akin to a "zoom" capability, as well as very high sensitivity and resolution. As a result, it should be able to produce images 10 times sharper than those of the Hubble Space Telescope.
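An interferometer's resolving power scales with its longest baseline: the achievable angular resolution is roughly λ/B, where λ is the observing wavelength and B the maximum antenna separation. A rough back-of-the-envelope check of the resolution claim (the 1 mm wavelength is a representative value for ALMA's band, not a figure from the article):

```python
import math

# Approximate angular resolution of an interferometer: theta ~ lambda / B
wavelength_m = 1.0e-3     # 1 mm, representative of ALMA's millimeter band
max_baseline_m = 16.0e3   # 16 km maximum antenna separation

theta_rad = wavelength_m / max_baseline_m
theta_arcsec = theta_rad * (180.0 / math.pi) * 3600.0  # radians -> arcseconds

print(f"~{theta_arcsec:.3f} arcsec")  # roughly 0.013 arcsec at the longest baseline
```

Hubble's optical resolution is on the order of 0.1 arcsecond, so a figure near 0.01 arcsecond is consistent with the "10 times sharper" claim.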
The challenge with multiple radio antennas is to make them behave as a single receiver, and for that you need some hefty number crunching -- thus the need for a supercomputer. The one built for ALMA is actually a special-purpose device designed to correlate faint signals from multiple sources. Because of its function, the supercomputer is known as "the correlator." The supercomputer jargon was added later by the public relations team to call attention to its exceptional calculating prowess.
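The core operation behind correlation is multiply-and-accumulate across pairs of antenna signal streams at a range of relative delays: the common astronomical signal adds coherently at the right delay, while each antenna's independent noise averages out. A toy sketch of the idea in NumPy (the signal, noise level, and 7-sample delay are all invented for illustration, nothing ALMA-specific):

```python
import numpy as np

rng = np.random.default_rng(42)
source = rng.standard_normal(512)   # the "sky" signal common to both antennas
delay = 7                           # geometric delay between antennas, in samples

# Each antenna sees the common signal plus its own independent noise.
antenna_a = source + 0.1 * rng.standard_normal(512)
antenna_b = np.roll(source, delay) + 0.1 * rng.standard_normal(512)

# Cross-correlate over all lags; the peak picks out the relative delay
# at which the common signal lines up.
xcorr = np.correlate(antenna_b, antenna_a, mode="full")
best_lag = int(xcorr.argmax()) - (len(antenna_a) - 1)

print(best_lag)  # recovers the 7-sample delay
```

A real correlator does this continuously, in hardware, for every pair among dozens of antennas, which is where the enormous operation counts come from.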
And exceptional it is. The correlator delivers 17 quadrillion operations per second. That's 17 petaOPS (not petaFLOPS). If you set aside the fact that these are not floating-point operations, the system operates at a level comparable to Titan, the fastest general-purpose supercomputer in the world and the current title-holder on the TOP500.
The ALMA system, which was built by the National Radio Astronomy Observatory (NRAO), uses 32,767 custom ASIC processors to blend the signals from the antenna array. The processors, built on 0.25 micron CMOS technology, run at a modest 125 MHz, with each one drawing just 1.8 watts. But because the silicon is purpose-built for these correlation functions, each chip is able to deliver 512 billion operations per second (512 gigaOPS). The processors are arranged 64 to a board, and the boards are connected via a 1 megabit/second Controller Area Network.
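Those per-chip numbers are consistent with the headline figure: multiplying the chip count by the per-chip rate lands right at 17 petaOPS, and the per-chip power puts the ASICs themselves at roughly 60 kilowatts. A quick sanity check (using the round power-of-two chip count of 32,768 where the article quotes 32,767):

```python
chips = 32768            # article quotes 32,767; 2**15 used here for round numbers
ops_per_chip = 512e9     # 512 gigaOPS per ASIC
watts_per_chip = 1.8

total_petaops = chips * ops_per_chip / 1e15
asic_power_kw = chips * watts_per_chip / 1e3

print(f"{total_petaops:.1f} petaOPS, {asic_power_kw:.1f} kW")  # ~16.8 petaOPS, ~59 kW
```

The roughly 59 kW for the correlator chips alone also fits comfortably within the 140 kW quoted for the full system.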
There are also 17 ancillary computers involved in acquiring and calibrating data from the correlator hardware. The correlator itself is designed to receive 96 gigabits per second from up to 64 antennas and can sustain an output rate of 1 gigabyte per second.
The supercomputer is not all hard-wired, though. According to Rich Lacasse, leader of the ALMA Correlator Team at NRAO, and Joe Greenberg, who worked on the hardware, there are several layers of software. For example, the processor supports about 70 flavors of correlation functions, each with programmable features, so coding is required to configure these modes as well as to monitor for correct operation. There is also high-level software for configuring the processors.
The proprietary architecture of the correlator was chosen to overcome the cost and power constraints of the ALMA project. John Webber, former head of the NRAO Central Development Laboratory, says that despite the custom design, the system was built for just $11 million, adding that a comparable general-purpose computer would have cost about $1 billion. In fairness, a GPU-accelerated supercomputer is considerably less expensive these days; Titan, for example, was built for about $100 million. Even so, that's still nearly 10 times the cost of the ALMA machine.
Energy efficiency is even more impressive. Thanks to the low-power processors, the correlator consumes just 140 kilowatts of power. (The general-purpose Titan draws 8 megawatts.) But despite the correlator's modest power usage, it takes twice the normal airflow to cool it due to the rarefied atmosphere at 5,000 meters (16,500 feet). Hard drives also operate unreliably in the thin air, so the correlator is diskless.
ALMA began collecting data in 2011 with a partial array of radio antennas, with a cut-down version of the correlator combining the signals from the initial array. Today, though, the entire array is operational and the correlator is ready to begin slicing and dicing signals from the full complement of antennas. That will increase its sensitivity and resulting image quality. The project is slated to be completely operational in March 2013.