SC17 Keynote – HPC Powers SKA Efforts to Peer Deep into the Cosmos

By John Russell

November 17, 2017

This week’s SC17 keynote – Life, the Universe and Computing: The Story of the SKA Telescope – was a powerful pitch for the potential of Big Science projects that also showcased the foundational role of high performance computing in modern science. It was also visually stunning, as images of stars and galaxies and tiny telescopes and giant telescopes streamed across a high definition screen that extended the length of the Colorado Convention Center ballroom’s stage. One was reminded of astronomer Carl Sagan narrating the Cosmos TV series.

SKA, you may know, is the Square Kilometre Array project, run by an international consortium and intended to build the largest radio telescope in the world; it will be 50 times more powerful than any radio telescope operating today. The largest today is ALMA (Atacama Large Millimeter/submillimeter Array), located in Chile, which has 66 dishes.

SKA will be sited in two locations: South Africa and Australia. The two keynoters – Philip Diamond, Director General of SKA, and Rosie Bolton, SKA Regional Centre Project Scientist and Project Scientist for the international engineering consortium designing the high performance computers – took turns outlining radio astronomy’s history and SKA’s ambition to build on it. Theirs was a swiftly-moving talk, both entertaining and informative, the flashing visuals adding to the impact.

Their core message: This massive new telescope will open a new window on astrophysical phenomena and create a mountain of data for scientists to work on for years. SKA, say Diamond and Bolton, will help clarify the early evolution of the universe, be able to detect gravitational waves by their effect on pulsars, shed light on dark matter, produce insights into cosmic magnetism, create detailed, accurate 3D maps of galaxies, and much more. It could even play a SETI-like role in the search for extraterrestrial intelligence.

“When fully deployed, SKA will be able to detect TV signals, if they exist, from the nearest tens, maybe 100, stars and will be able to detect the airport radars across the entire galaxy,” said Diamond, in response to a question. SKA is creating a new intergovernmental organization to run the observatory, “something like CERN or the European Space Agency, and [we] are now very close to having this process finalized,” said Diamond.

Indeed this is exciting stuff. It is also incredibly computationally intensive. Think of an army of dish arrays and antennas capturing signals 24×7, moving them over high speed networks to one of two digital “signal processing facilities,” one for each location, and then on to two “science data processor” centers (think big computers). And let’s not forget the data must be made available to scientists around the world.

Consider the data points that flashed across the stage during the keynote presentation; their context will become clearer below.

It’s a grand vision and there’s still a long way to go. SKA, like all Big Science projects, won’t happen overnight. SKA was first conceived in the 1990s at the International Union of Radio Science (URSI), which established the Large Telescope Working Group to begin a worldwide effort to develop the scientific goals and technical specifications for a next generation radio observatory. The idea arose to create a “hydrogen array” able to detect the hydrogen radiofrequency emission line (~1420 MHz); a square kilometer of collecting area was required to see back into the early universe. In 2011 those efforts were consolidated in a not-for-profit company that now has ten member countries (link to brief history of SKA). The U.S., which did participate in early SKA efforts, chose not to join the consortium at the time.

Although first conceived as a hydrogen array, Diamond emphasized, “With a telescope of that size you can study many things. Even in its early stages SKA will be able to map galaxies early in the universe’s evolution. When fully deployed it will conduct the fullest galaxy mapping yet in 3D, encompassing up to one million individual galaxies and covering 12.5 billion years of cosmic history.”

A two-phase deployment is planned. “We’re heading full steam towards critical design reviews next year,” said Diamond, with construction of the first phase expected to begin in 2019. So far €200 million has been committed for design, along with “a large fraction” of the €640 million required for first phase construction. Clearly there are technology and funding hurdles ahead. Diamond quipped that if the U.S. were to join SKA and pony up, say, $2 billion, they would ‘fix’ the spelling of kilometre to kilometer.

There will actually be two telescopes: one in South Africa, about 600 km north of Cape Town, and another roughly 800 km north of Perth in Western Australia. They are being located in remote regions to reduce radiofrequency interference from human activities.

“In South Africa we are going to be building close to 200 dishes, 15 meters in diameter, and the dishes will be spread over 150 km. They [will operate] over a frequency range of 350 MHz to 14 GHz. In Australia we will build 512 clusters, each of 256 antennas. That means a total of over 130,000 2-meter tall antennas, spread over 65 km. These low frequency antennas will be tapered, with [log-]periodic dipoles, and will cover the frequency range 50 to 350 MHz. It is this array that will be the time machine that observes hydrogen all the way back to the dawn of the universe.”

Pretty cool stuff. Converting those signals into data is a mammoth task. SKA plans two different types of processing center for each location. “The radio waves induce voltages in the receivers that capture them, and modern technology allows us to digitize them to higher precision than ever before. From there optical fibers transmit the digital data from the telescopes to what we call central processing facilities (CPFs). There’s one for each telescope,” said Bolton.

Using a variety of technologies, including “some exciting FPGA, CPU-GPU, and hybrids,” the CPFs are where the signals are combined. Great care must be taken to first synchronize the data so it enters the processing chain exactly when it should, to account for the fact that radio waves from space reach one antenna before reaching another. “We need to correct that phase offset down to the nanosecond,” said Bolton.
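
To give a flavor of what that correction involves, here is a minimal numpy sketch (an illustration under simple assumptions, not SKA code): the geometric delay for an antenna is the projection of its position onto the source direction divided by the speed of light, and it is removed with a per-channel phase rotation.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_correct(voltages, freqs_hz, baseline_m, source_dir):
    """Compensate one antenna's channelized voltages for the geometric
    delay relative to a reference antenna (toy sketch).

    voltages   : complex array (n_chan,) -- channelized voltage samples
    freqs_hz   : array (n_chan,)         -- sky frequency of each channel
    baseline_m : array (3,)              -- antenna position minus reference, metres
    source_dir : array (3,)              -- unit vector toward the source
    """
    tau = np.dot(baseline_m, source_dir) / C            # geometric delay, seconds
    return voltages * np.exp(-2j * np.pi * freqs_hz * tau)
```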

Once that’s done, a Fourier transform is applied to the data. “It decomposes essentially a function of time into the frequencies that make it up; it moves us into the frequency domain. We do this with such precision that SKA will be able to process 65,000 different radio frequencies simultaneously,” said Diamond.
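
In outline, the channelization step looks like the sketch below; the real design uses polyphase filterbanks for cleaner channel separation, but a block-wise FFT is the heart of the operation (function and sizes here are illustrative).

```python
import numpy as np

def channelize(samples, n_chan):
    """Split a real-valued voltage stream into n_chan frequency channels by
    FFT-ing consecutive blocks of 2*n_chan samples (one spectrum per block)."""
    block = 2 * n_chan
    n_blocks = len(samples) // block
    blocks = samples[: n_blocks * block].reshape(n_blocks, block)
    return np.fft.rfft(blocks, axis=1)[:, :n_chan]      # shape (n_blocks, n_chan)

# e.g. ~65,000 channels from a simulated digitized stream:
rng = np.random.default_rng(0)
spectra = channelize(rng.standard_normal(2**22), 65_536)
```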

Once the signals have been separated into frequencies they are processed one of two ways. “We can either stack together the signals of various antennas in what we call time domain data. Each stacking operation corresponds to a different direction in the sky. We’ll be able to look at 2000 such directions simultaneously. This time domain processing analysis detects repeating objects such as pulsars or one-off events like gamma ray explosions. If we do find an event, we are planning to store the raw voltage signals at the antennas for a few minutes so we can go back in time and investigate them to see what happened,” said Bolton.
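
That stacking is coherent beamforming: phase each antenna to undo its geometric delay toward a chosen direction, then sum. A rough numpy sketch (illustrative only, not the SKA implementation) follows; repeating it for 2,000 directions yields 2,000 simultaneous beams.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def form_beam(voltages, ant_pos, freq_hz, direction):
    """Coherently sum narrowband antenna voltages toward one sky direction
    (toy sketch). One call produces one beam.

    voltages : complex array (n_ant, n_samp) -- per-antenna voltage streams
    ant_pos  : array (n_ant, 3)              -- antenna positions, metres
    direction: array (3,)                    -- unit vector for the beam
    """
    tau = ant_pos @ direction / C                    # per-antenna delays, seconds
    weights = np.exp(2j * np.pi * freq_hz * tau)     # phase-up weights
    return weights @ voltages                        # beamformed series (n_samp,)
```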

This time domain data can be used by researchers to accurately measure the arrival times of pulsar signals – pulsars are a bit like cosmic lighthouses – and to detect the drift, if there is one, as a gravitational wave passes through.

“We can also use these radio signals to make images of the sky. To do that we take the signals from each pair of antennas, each baseline, and effectively multiply them together, generating data objects we call visibilities. Imagine this being done for 200 dishes and 512 groups of antennas; that’s 150,000 baselines and 65,000 different frequencies. That makes up to 10 billion different data streams. Doing this is a data intensive process that requires around 50 petaflops of dedicated digital signal processing.

“Signals are processed inside these central processing facilities in a way that depends on the science that we want to do with them,” said Bolton. Once processed, the data are sent via more fiber optic cables to the Science Data Processors, or SDPs. Two of these “great supercomputers” are planned, one in Cape Town for the dish array and one in Perth for the low frequency antennas.
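
The arithmetic behind the headline numbers quoted above checks out: unique antenna pairs scale as n(n-1)/2, and multiplying the combined baseline count by the channel count lands at roughly 10 billion streams.

```python
def n_baselines(n):
    """Unique antenna pairs: n choose 2."""
    return n * (n - 1) // 2

dish_pairs    = n_baselines(200)    # 19,900 baselines among the dishes
station_pairs = n_baselines(512)    # 130,816 baselines among the antenna groups
total = dish_pairs + station_pairs  # 150,716 -- the quoted "150,000 baselines"

streams = total * 65_536            # x ~65,000 frequency channels
print(total, streams)               # 150716 9877323776 -- "up to 10 billion"
```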

“We have two flavors of data within the Science Data Processors. In the time domain we’ll do panning for astrophysical gold, searching over 1.5M candidate objects every ten minutes, sniffing out the real astrophysical phenomena such as pulsar signals or flashes of radio light,” said Diamond. The expectation is a roughly 10,000-to-1 ratio of negative to positive events. Machine learning will play a key role in finding the “gold.”
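
A toy sketch of that sifting step, with hypothetical features and synthetic data (real pipelines use much richer candidate diagnostics, but the ranking idea is similar):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 100_000                          # candidates in one processing batch
X = rng.normal(size=(n, 3))          # stand-ins for features like S/N, period, DM
y = np.zeros(n, dtype=bool)
y[:10] = True                        # the quoted ~10,000:1 negative-to-positive ratio
X[y] += 2.0                          # injected 'real' signals look different

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
scores = clf.predict_proba(X)[:, 1]  # pulsar-likeness score per candidate
shortlist = np.argsort(scores)[::-1][:20]   # top candidates for follow-up
```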

Making sense of the 10 billion incoming visibility data streams poses the greatest computational burden, emphasized Bolton: “This is really hard because inside the visibilities (data objects) the sky and antenna responses are all jumbled. We need to do another massive Fourier transform to get from the visibility space that depends on the antenna separations to sky planes. Ultimately we need to develop self-consistent models not only of the sky that generated the signals but also of how each antenna was behaving and even how the atmosphere was changing during the data gathering.

“We can’t do that in one fell swoop. Instead we’ll have several iterations trying to find the calibration parameters and source positions and brightnesses. With each iteration, bit by bit, fainter and fainter signals emerge from the noise. Every time we do another iteration we apply different calibration techniques and improve on them, but we can’t be sure when this process is going to converge [on the best solution], so it is going to be difficult,” said Bolton.
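
For the computationally inclined, the alternating gain-solve at the heart of such iterations can be sketched in a few lines. This toy per-antenna solver is in the spirit of published algorithms such as StEFCal, not the actual SKA pipeline, which must also solve direction-dependent and atmospheric terms; note the gains are only determined up to a global phase, one reason convergence is hard to guarantee.

```python
import numpy as np

def solve_gains(vis_obs, vis_model, n_iter=20):
    """Toy antenna-gain calibration: assume
    V_obs[p, q] ~ g[p] * conj(g[q]) * V_model[p, q]
    and update each gain by least squares with the others held fixed."""
    n_ant = vis_obs.shape[0]
    gains = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        for p in range(n_ant):
            q = np.arange(n_ant) != p                  # skip the autocorrelation
            a = np.conj(gains[q]) * vis_model[p, q]    # then V_obs[p, q] ~ g[p] * a
            gains[p] = np.sum(vis_obs[p, q] * np.conj(a)) / np.sum(np.abs(a) ** 2)
    return gains                                       # up to a global phase
```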

A typical SKA map, she said, will probably contain hundreds of thousands of radio sources. The incoming data are about 10 petabytes in size. Output 3D images are 5,000 pixels on each axis and 1 petabyte in size.

Distributing this data to scientists for analysis is another huge challenge. The plan is to distribute data via fiber to SKA regional centers. “This is another real game changer that the SKA, CERN, and a few other facilities are bringing about. Scientists will use the computing power of the SKA regional centers to analyze these data products,” said Diamond.

The keynote was a wowing multimedia presentation, warmly received by attendees. It bears repeating that many issues remain and schedules have slipped slightly, but SKA is still a stellar example of Big Science, requiring massively coordinated international effort and underpinned by enormous computing resources. Such collaboration is well aligned with SC17’s theme – HPC Connects.

Link to video recording of the presentation: https://www.youtube.com/watch?time_continue=2522&v=VceKNiRxDBc
