March 14, 2013
TOKYO, Japan, March 14 — Fujitsu and the National Astronomical Observatory of Japan (NAOJ) today announced that they have jointly developed and recently launched operations of the purpose-built Atacama Compact Array (ACA) Correlator supercomputer system, which will be employed as part of the Atacama Large Millimeter/submillimeter Array (ALMA) project, a Chile-based radio telescope featuring unprecedented sensitivity and resolution. A ceremony was held in Chile to commemorate the inauguration of ALMA on March 13, local time.
Composed of 35 PRIMERGY x86 servers from Fujitsu and a specialized computational unit, the ACA Correlator meets the project's rigorous requirements, including the ability to process 512 billion samples of telescope radio signal data per second in real time at a computational rate of 120 trillion operations per second, as well as stable operation under harsh environmental conditions at an altitude of 5,000 meters and a pressure of 0.5 atmospheres. The system will be responsible on its own for processing massive sets of signal data from 16 antennas.
Set at 5,000 meters above sea level in the Chilean Andes, ALMA is a massive radio telescope developed through a partnership among East Asia (led by NAOJ), North America and Europe. The telescope is capable of producing astronomical radio wave images with the world's highest resolution. The facility consists of 66 antennas arranged in an 18.5 km-diameter array, equivalent to the span of the Yamanote railway loop encircling central Tokyo. By processing millimeter/submillimeter wave signals from each antenna, the antennas can act as a single, giant telescope that generates radio wave images with the same resolution as those produced by a massive 18.5 km-diameter parabolic antenna. This makes it possible to see dark regions of the universe that cannot be observed at optical wavelengths, such as galaxies formed shortly after the beginning of the universe, the birth of stars and of planetary systems like our solar system, and matter related to the origin of life, such as organic molecules.
NAOJ and the Fujitsu Group worked together to develop the ACA Correlator, a purpose-built supercomputer responsible for processing data from the Atacama Compact Array (ACA), which can make high sensitivity observations.
The system is comprised of 35 PRIMERGY x86 servers from Fujitsu and a specialized computational unit developed by Fujitsu Advanced Engineering Limited. The ACA Correlator processes extremely weak radio wave signals from far-away astronomical bodies by splitting them into roughly 500,000 frequency bands and outputting the data in a format optimal for observation. This fine frequency resolution makes it possible to observe, for instance, gas moving through space at speeds as slow as 5 meters per second.
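As a rough illustration of what a correlator does (not the actual ACA implementation, which runs on purpose-built hardware with ~500,000 channels), the core "FX" operation can be sketched in a few lines: Fourier-transform each antenna's signal into frequency channels, then cross-multiply and time-average. All names and parameters here are illustrative.

```python
import numpy as np

def fx_correlate(x, y, n_channels=1024):
    """Toy FX-style correlation of two antenna signal streams:
    split each into frequency channels with an FFT (the "F" step),
    then cross-multiply and accumulate per channel (the "X" step)."""
    n_seg = len(x) // n_channels
    xs = x[:n_seg * n_channels].reshape(n_seg, n_channels)
    ys = y[:n_seg * n_channels].reshape(n_seg, n_channels)
    # F step: transform each time segment into frequency channels
    X = np.fft.fft(xs, axis=1)
    Y = np.fft.fft(ys, axis=1)
    # X step: cross-multiply and time-average; shared sky signal
    # survives the averaging while uncorrelated noise averages down
    return (X * np.conj(Y)).mean(axis=0)

rng = np.random.default_rng(0)
common = rng.normal(size=1 << 16)                    # shared "sky" signal
x = common + 0.5 * rng.normal(size=common.size)      # antenna 1 + noise
y = common + 0.5 * rng.normal(size=common.size)      # antenna 2 + noise
spectrum = fx_correlate(x, y)
print(spectrum.shape)  # (1024,) -- one cross-power value per channel
```

Averaging over many segments is what lets correlated signal emerge from noise; the real system does the same thing at vastly higher channel counts and data rates.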
Features of the ACA Correlator
1. Processes massive data sets in real time
ACA Correlator is capable of reading up to 512 billion data samples per second (roughly 200 GB/sec), equivalent to the data transfer speeds supported by 20,000 residential optical broadband lines (at a rate of 100 Mb/sec each).
These massive volumes of data can be processed in real time at an ultrafast computational rate of 120 trillion operations per second. A variety of steps have been taken to reduce the number of required computations, thereby enabling efficient data processing.
2. Stable operations under harsh environmental conditions
The system is able to reliably operate under harsh environmental conditions at an altitude of 5,000 meters and pressure of 0.5 atmospheres. In order to overcome drops in cooling efficiency due to the 0.5 atm pressure, the system was designed with a parallel array of 4,096 identical processing LSI units that are interconnected by 1,024 fiber optic cables. This ensures a sufficient stream of air for cooling, which prevents heat from being unevenly distributed and densely accumulated.
3. Remote maintenance system supporting stable operations
To ensure stable system operations at a high altitude where it is difficult to station a full-time engineer, equipment diagnostics, software upgrades and other maintenance tasks must be performed remotely, either from Japan or from the area's base camp at an altitude of 2,900 meters.
The system is equipped with a host of features that enable speedy and fine-grained remote operations, including a feature that monitors and records data processing flows at multiple points within the correlator, as well as a feature that improves fault detection accuracy by replicating the system's actual operational status using massive sets of embedded test data.
4. High price performance
Conventionally, correlators have been developed based on custom-designed LSI technology, which has made them very costly to implement.
The ACA Correlator uses a newly developed parallel computation method that employs general-purpose LSIs called FPGAs to split up data received from antennas every 250 microseconds, after which the data is distributed among 4,096 LSIs. This, in turn, delivers excellent price performance.
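The time-slicing idea above can be sketched as a simple round-robin schedule. This is purely illustrative, assuming a plain modulo assignment of slices to units; the real design is implemented in FPGA hardware and its actual routing scheme is not described here.

```python
# Illustrative sketch: chop the incoming sample stream into
# 250-microsecond time slices and deal them out round-robin
# to the 4,096 parallel processing LSIs.

N_UNITS = 4096        # parallel processing LSIs
SLICE_SEC = 250e-6    # duration of one time slice

def route(slice_index: int) -> int:
    """Round-robin: which LSI receives this time slice."""
    return slice_index % N_UNITS

# A full rotation through all units takes N_UNITS * SLICE_SEC
# seconds, so each LSI has roughly that long to finish its slice
# before the next one arrives -- trading latency for parallelism.
rotation_sec = N_UNITS * SLICE_SEC
print(route(0), route(1), route(4096))   # 0 1 0
print(rotation_sec)                      # 1.024
```

The appeal of this scheme is that each individual unit only needs to keep pace with 1/4,096th of the aggregate data rate, which is what makes general-purpose FPGAs viable in place of costly custom LSIs.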
Comment from Professor Satoru Iguchi, East Asia ALMA Project Manager, National Astronomical Observatory of Japan:
"With the observations from ALMA, we hope to gain insights into such mysteries as how galaxies have formed and evolved, how planetary systems orbiting around a Sun-like star are formed, and whether the origin of life is to be found in the universe. The data processing performed by the ACA Correlator system is essential for these types of radio astronomy research. I am confident that ALMA will open new horizons for astronomy."
About Fujitsu Limited
Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Over 170,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited reported consolidated revenues of 4.5 trillion yen (US$54 billion) for the fiscal year ended March 31, 2012.
Source: Fujitsu Ltd.