SC17 Keynote – HPC Powers SKA Efforts to Peer Deep into the Cosmos

By John Russell

November 17, 2017

This week’s SC17 keynote – Life, the Universe and Computing: The Story of the SKA Telescope – was a powerful pitch for the potential of Big Science projects that also showcased the foundational role of high performance computing in modern science. It was also visually stunning, as images of stars and galaxies and telescopes tiny and giant streamed across a high-definition screen that extended the length of the Colorado Convention Center ballroom’s stage. One was reminded of astronomer Carl Sagan narrating the Cosmos TV series.

SKA, you may know, is the Square Kilometre Array project, run by an international consortium and intended to build the largest radio telescope in the world; it will be 50 times more powerful than any radio telescope operating today. The largest today, ALMA (the Atacama Large Millimeter/submillimeter Array) in Chile, has 66 dishes.

SKA will be sited in two locations, South Africa and Australia. The two keynoters – Philip Diamond, Director General of SKA, and Rosie Bolton, SKA Regional Centre Project Scientist and Project Scientist for the international engineering consortium designing the high performance computers – took turns outlining radio astronomy’s history and SKA’s ambition to build on it. Theirs was a swiftly moving talk, both entertaining and informative, the flashing visuals adding to the impact.

Their core message: This massive new telescope will open a new window on astrophysical phenomena and create a mountain of data for scientists to work on for years. SKA, say Diamond and Bolton, will help clarify the early evolution of the universe, be able to detect gravitational waves by their effect on pulsars, shed light on dark matter, produce insight into cosmic magnetism, create detailed, accurate 3D maps of galaxies, and much more. It could even play a SETI-like role in the search for extraterrestrial intelligence.

“When fully deployed, SKA will be able to detect TV signals, if they exist, from the nearest tens maybe 100 stars and will be able to detect the airport radars across the entire galaxy,” said Diamond, in response to a question. SKA is creating a new intergovernmental organization to run the observatory, “something like CERN or the European Space Agency, and [we] are now very close to having this process finalized,” said Diamond.

Indeed this is exciting stuff. It is also incredibly computationally intensive. Think of an army of dish arrays and antennas capturing signals 24×7, moving them over high speed networks to one of two digital signal processing facilities, one per location, and then on to two “science data processor” centers (think big computers). And let’s not forget the data must be made available to scientists around the world.

Consider just a few of the data points that were flashed across the stage during the keynote presentation; the context will become clearer later.

It’s a grand vision and there’s still a long way to go. SKA, like all Big Science projects, won’t happen overnight. SKA was first conceived in the 1990s at the International Union of Radio Science (URSI), which established the Large Telescope Working Group to begin a worldwide effort to develop the scientific goals and technical specifications for a next-generation radio observatory. The idea arose to create a “hydrogen array” able to detect the hydrogen radiofrequency emission line (~1420 MHz). A square kilometre of collecting area was required to see back into the early universe. In 2011 those efforts consolidated into a not-for-profit company that now has ten member countries. The U.S., which did participate in early SKA efforts, chose not to join the consortium at the time.

Although first conceived as a hydrogen array, Diamond emphasized, “With a telescope of that size you can study many things. Even in its early stages SKA will be able to map galaxies early in the universe’s evolution. When fully deployed it will conduct the fullest galaxy mapping in 3D, encompassing up to one million individual galaxies and covering 12.5 billion years of cosmic history.”

A two-phase deployment is planned. “We’re heading full steam towards critical design reviews next year,” said Diamond. Construction of the first phase is expected to begin in 2019. So far €200 million has been committed for design, along with “a large fraction” of the €640 million required for first-phase construction. Clearly there are technology and funding hurdles ahead. Diamond quipped that if the U.S. were to join SKA and pony up, say, $2 billion, they would ‘fix’ the spelling of kilometre to kilometer.

There will actually be two telescopes, one in South Africa about 600 km north of Cape Town and another roughly 800 km north of Perth in Western Australia. Both are being located in remote regions to reduce radiofrequency interference from human activities.

“In South Africa we are going to be building close to 200 dishes, 15 meters in diameter, and the dishes will be spread over 150 km. They [will operate] over a frequency range of 350 MHz to 14 GHz. In Australia we will build 512 clusters, each of 256 antennas. That means a total of over 130,000 2-meter-tall antennas, spread over 65 km. These low frequency antennas will use log-periodic dipoles and will cover the frequency range of 50 to 350 MHz. It is this array that will be the time machine that observes hydrogen all the way back to the dawn of the universe,” said Diamond.

Pretty cool stuff. Converting those signals into data is a mammoth task. SKA plans two different types of processing center for each location. “The radio waves induce voltages in the receivers that capture them, and modern technology allows us to digitize them to higher precision than ever before. From there optical fibers transmit the digital data from the telescopes to what we call central processing facilities (CPFs). There’s one for each telescope,” said Bolton.

Using a variety of technologies, including “some exciting FPGA, CPU-GPU, and hybrids,” the CPFs are where the signals are combined. Great care must be taken first to synchronize the data so it enters the processing chain exactly when it should, to account for the fact that radio waves from space reach one antenna before reaching another. “We need to correct that phase offset down to the nanosecond,” said Bolton.

Once that’s done, a Fourier transform is applied to the data. “It decomposes essentially a function of time into the frequencies that make it up; it moves us into the frequency domain. We do this with such precision that SKA will be able to process 65,000 different radio frequencies simultaneously,” said Diamond.
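The time-to-frequency decomposition Diamond describes can be sketched in a few lines of NumPy. This is illustrative only: SKA’s actual signal chain uses far more sophisticated filterbanks running on FPGAs and GPUs, and the sample rate, channel count, and test tone below are invented for the example.

```python
import numpy as np

# Channelize a digitized voltage stream with a blockwise FFT.
# All parameters here (sample rate, channel count, tone) are assumed.
rng = np.random.default_rng(0)
fs = 1.0e6                       # sample rate in Hz (assumed)
n_chan = 1024                    # frequency channels per block
t = np.arange(16 * n_chan) / fs  # 16 blocks of samples

# A fake voltage stream: a tone at 200 kHz buried in noise.
voltage = np.sin(2 * np.pi * 2.0e5 * t) + 0.5 * rng.standard_normal(t.size)

# Reshape into blocks and FFT each block, then average the power.
blocks = voltage.reshape(-1, n_chan)
spectra = np.fft.rfft(blocks, axis=1)          # now in the frequency domain
power = (np.abs(spectra) ** 2).mean(axis=0)    # averaged power spectrum

freqs = np.fft.rfftfreq(n_chan, d=1 / fs)
peak_freq = freqs[np.argmax(power[1:]) + 1]    # skip the DC bin
print(f"strongest channel near {peak_freq:.0f} Hz")
```

The tone stands out cleanly in one frequency channel; at SKA scale the same operation runs continuously across 65,000 channels.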

Once the signals have been separated into frequencies they are processed one of two ways. “We can either stack the signals together from the various antennas in what we call time domain data. Each stacking operation corresponds to a different direction in the sky. We’ll be able to look at 2,000 such directions simultaneously. This time domain processing detects repeating objects such as pulsars or one-off events like gamma ray explosions. If we do find an event, we are planning to store the raw voltage signals at the antennas for a few minutes so we can go back in time and investigate them to see what happened,” said Bolton.
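The “stacking” Bolton describes is, in essence, delay-and-sum beamforming: undo each antenna’s known geometric delay for one sky direction, then sum, so a source in that direction adds coherently while noise averages down. Here is a toy sketch with invented delays, pulse shape, and antenna count:

```python
import numpy as np

# Toy delay-and-sum beamformer; all parameters are assumed for illustration.
rng = np.random.default_rng(42)
n_ant, n_samp = 8, 4096
delays = rng.integers(0, 32, size=n_ant)   # whole-sample delays per antenna

# Simulate one pulse arriving at each antenna with its own delay.
pulse = np.zeros(n_samp)
pulse[1000:1010] = 5.0
signals = np.stack([np.roll(pulse, int(d)) + rng.normal(0, 1, n_samp)
                    for d in delays])

# Beamform: shift each antenna's stream back into alignment and average.
beam = sum(np.roll(sig, -int(d)) for sig, d in zip(signals, delays)) / n_ant
print("single-antenna peak:", signals[0].max(), " beam peak:", beam.max())
```

Each distinct set of delays corresponds to a different sky direction, which is why 2,000 simultaneous directions means 2,000 such stacking operations running at once.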

Researchers can use this time domain data to measure pulsar signal arrival times accurately – pulsars are a bit like cosmic lighthouses – and to detect the drift, if there is one, as a gravitational wave passes through.
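A standard way to measure those arrival times is epoch folding: average the time series modulo the pulsar’s period so the pulse builds up at a fixed phase. A gravitational wave would show up as a tiny, slow drift in that measured phase. The period, sampling rate, and pulse shape below are invented for the sketch:

```python
import numpy as np

# Fold a noisy time series at a known pulsar period; parameters assumed.
rng = np.random.default_rng(3)
fs = 1000.0                 # samples per second (assumed)
period = 0.25               # pulsar period in seconds (assumed)
n = int(fs * 100)           # 100 seconds of data
t = np.arange(n) / fs

phase = (t / period) % 1.0
signal = np.where(np.abs(phase - 0.3) < 0.01, 4.0, 0.0)  # pulse at phase 0.3
series = signal + rng.normal(0, 1, n)

# Fold: average samples into phase bins across all rotations.
n_bins = 50
bins = (phase * n_bins).astype(int)
profile = np.bincount(bins, weights=series, minlength=n_bins) \
        / np.bincount(bins, minlength=n_bins)
print(f"pulse found at phase ≈ {np.argmax(profile) / n_bins:.2f}")
```

The pulse is invisible in any single rotation but emerges clearly after folding hundreds of rotations, which is why long, precisely timed observations matter for pulsar timing arrays.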

“We can also use these radio signals to make images of the sky. To do that we take the signals from each pair of antennas, each baseline, and effectively multiply them together, generating data objects we call visibilities. Imagine doing that for 200 dishes and 512 groups of antennas: that’s 150,000 baselines and 65,000 different frequencies, making up to 10 billion different data streams. Doing this is a data intensive process that requires around 50 petaflops of dedicated digital signal processing.
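The pairwise multiplication Bolton describes can be sketched directly: for every antenna pair (baseline), multiply one channelized voltage spectrum by the conjugate of the other and average over time. The array sizes below are toy values, not SKA’s, but the combinatorics scale exactly as she says – baselines times frequency channels:

```python
import numpy as np
from itertools import combinations

# Form toy "visibilities" from fake channelized voltages; sizes assumed.
rng = np.random.default_rng(1)
n_ant, n_time, n_chan = 4, 100, 64
# Fake channelized complex voltages: shape (antenna, time, frequency).
volts = rng.normal(size=(n_ant, n_time, n_chan)) \
      + 1j * rng.normal(size=(n_ant, n_time, n_chan))

visibilities = {
    (i, j): (volts[i] * np.conj(volts[j])).mean(axis=0)  # time average
    for i, j in combinations(range(n_ant), 2)
}
print(f"{len(visibilities)} baselines, "
      f"{len(visibilities) * n_chan} baseline-frequency data streams")
```

With 4 antennas and 64 channels that is only 384 data streams; scale the same formula to 150,000 baselines and 65,000 frequencies and you arrive at the roughly 10 billion streams quoted in the keynote.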

“Signals are processed inside these central processing facilities in a way that depends on the science that we want to do with them,” said Bolton. Once processed the data are then sent via more fiber optic cables to the Science Data Processors or SDPs. Two of these “great supercomputers” are planned, one in Cape Town for the dish array and one in Perth for low frequency antennas.

“We have two flavors of data within the Science Data Processors. In the time domain we’ll do panning for astrophysical gold, searching over 1.5 million candidate objects every ten minutes, sniffing out the real astrophysical phenomena such as pulsar signals or flashes of radio light,” said Diamond. The expectation is a 10,000-to-1 ratio of negative to positive events. Machine learning will play a key role in finding the “gold.”

Making sense of the 10 billion incoming visibility data streams poses the greatest computational burden, emphasized Bolton: “This is really hard because inside the visibilities (data objects) the sky and antenna responses are all jumbled. We need to do another massive Fourier transform to get from the visibility space that depends on the antenna separations to sky planes. Ultimately we need to develop self-consistent models not only of the sky that generated the signals but also of how each antenna was behaving and even how the atmosphere was changing during the data gathering.

“We can’t do that in one fell swoop. Instead we’ll have several iterations trying to find the calibration parameters and the source positions and brightnesses. With each iteration, bit by bit, fainter and fainter signals emerge from the noise. Every time we do another iteration we apply different calibration techniques and improve on them, but we can’t be sure when this process is going to converge [on the best solution], so it is going to be difficult,” said Bolton.
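The iterative loop Bolton describes is in the spirit of alternating least-squares “self-calibration”: model each baseline’s visibility as g_i · conj(g_j) · flux for a single point source, then alternate between solving for the per-antenna gains and for the source brightness. The sketch below is a drastic simplification with invented gains, antenna count, and noise level; real SKA calibration also solves for source positions and atmospheric terms:

```python
import numpy as np
from itertools import combinations

# Toy alternating-least-squares gain calibration; all values assumed.
rng = np.random.default_rng(7)
n_ant = 8
true_g = 1.0 + 0.2 * (rng.standard_normal(n_ant)
                      + 1j * rng.standard_normal(n_ant))
true_flux = 3.0
pairs = list(combinations(range(n_ant), 2))

# Simulated visibilities: v_ab = g_a * conj(g_b) * flux, plus noise.
vis = {(a, b): true_g[a] * np.conj(true_g[b]) * true_flux
              + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
       for a, b in pairs}

g = np.ones(n_ant, dtype=complex)          # initial gain guess
flux = 1.0                                 # initial sky model
for _ in range(50):
    # (a) Solve each antenna's gain, holding the others and flux fixed.
    new_g = np.zeros(n_ant, dtype=complex)
    for i in range(n_ant):
        num, den = 0j, 0.0
        for (a, b), v in vis.items():
            if a == i:                     # v ≈ g_i * conj(g_b) * flux
                m = np.conj(g[b]) * flux
                num += v * np.conj(m)
                den += abs(m) ** 2
            elif b == i:                   # v ≈ g_a * conj(g_i) * flux
                m = g[a] * flux
                num += np.conj(v) * m
                den += abs(m) ** 2
        new_g[i] = num / den
    g = 0.5 * g + 0.5 * new_g              # damped update for stability
    g *= np.exp(-1j * np.angle(g[0]))      # fix the phase degeneracy
    g /= np.mean(np.abs(g))                # fix the amplitude degeneracy
    # (b) Solve the flux, holding the gains fixed.
    flux = np.mean([(v / (g[a] * np.conj(g[b]))).real
                    for (a, b), v in vis.items()])
print(f"recovered flux ≈ {flux:.2f}")
```

The phase and amplitude normalizations are needed because the model is degenerate (all gains can rotate or rescale together); that degeneracy is one reason Bolton can’t guarantee when, or on what, the real iteration will converge.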

A typical SKA map, she said, will probably contain hundreds of thousands of radio sources. The incoming data for each image are about 10 petabytes in size; output 3D images are 5,000 pixels on each axis and 1 petabyte in size.

Distributing this data to scientists for analysis is another huge challenge. The plan is to distribute data via fiber to SKA regional centers. “This is another real game changer that the SKA, CERN, and a few other facilities are bringing about. Scientists will use the computing power of the SKA regional centers to analyze these data products,” said Diamond.

The keynote was a wowing multimedia presentation, warmly received by attendees. It bears repeating that many issues remain and schedules have slipped slightly, but SKA is still a stellar example of Big Science: massively coordinated international effort underpinned by enormous computing resources. Such collaboration is well aligned with SC17’s theme – HPC Connects.

Link to video recording of the presentation: https://www.youtube.com/watch?time_continue=2522&v=VceKNiRxDBc
