August 11, 2006
Researchers at The University of Texas at Austin's Center for Relativity (CfR) are using computers from the Texas Advanced Computing Center (TACC) to provide a better understanding of the interactions between spinning black holes.
The CfR research team of Director Richard Matzner, postdoctoral fellow Scott Hawley (who recently joined the faculty at Belmont University) and undergraduate student Michael Vitalo investigated how the strength of the gravitational attraction between two black holes changes as the direction of each hole's spin changes, as a way of better understanding the dynamics of what are thought to be the strongest sources of gravitational waves detectable on Earth. These findings are the subject of the authors' latest paper, "Spin Dependence in Computational Black Hole Data," which has been submitted to Physical Review D, a journal of the American Physical Society, and will ultimately expand our understanding of the universe.
The Search for Ripples in Spacetime
The field of gravitational physics is in the midst of a great revival, largely driven by the construction of gravitational wave detectors such as LIGO (Laser Interferometer Gravitational Wave Observatory). These sophisticated laser interferometers are the most sensitive instruments ever built (sensitive to 1 part in 10^21), designed to measure tiny ripples in the fabric of spacetime, called gravitational waves, from distant astronomical sources. The strongest of these sources are binary black holes, which spiral in toward one another and merge to form a single black hole, all the while giving off strong gravitational waves.
To separate the astrophysical signals from background noise in the detector, scientists need to have a solid idea of what they're trying to find. This is a "needle in the haystack" problem of grand proportions, and sophisticated "template" waveforms are in great demand for use in picking out the true signals from the detectors' data.
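The search itself can be illustrated with a toy matched filter, the standard way of pulling a known waveform shape out of noisy data. The sketch below is a deliberately simplified illustration, not the LIGO analysis pipeline: the "chirp-like" template, the white Gaussian noise and every number in it are invented for the example.

```python
# Toy matched-filter illustration: bury a known "chirp-like" template in noise
# that is louder than the signal, then recover it with a sliding correlation.
# All parameters are arbitrary; this is not the LIGO analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 1024                                   # sample rate (Hz), arbitrary
t = np.arange(0, 1.0, 1.0 / fs)

# Hand-made template: an upward-sweeping sinusoid with a Gaussian envelope
template = np.sin(2 * np.pi * (50 * t + 40 * t**2)) * np.exp(-((t - 0.5) / 0.15) ** 2)

# Four seconds of noise, with the template injected at a known offset
data = 2.0 * rng.standard_normal(4 * fs)
inject_at = 2 * fs
data[inject_at:inject_at + template.size] += template

# Matched filter for white noise: sliding inner product with the template
snr = np.correlate(data, template, mode="valid") / np.linalg.norm(template)
best = int(np.argmax(np.abs(snr)))
print(f"injected at sample {inject_at}, best match at sample {best}")
```

Even in this crude form, the correlation peak stands well above the noise floor at the injection point, which is the basic reason accurate template waveforms are so valuable to the detector collaborations.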
These template waveforms can be obtained through closed-form analytical calculations when the holes are far apart and after the final merger occurs, but for the "in between" period, only numerical computations can provide the full solution to Einstein's nonlinear gravitational equations. Recently, the field of "numerical relativity" has itself been undergoing great change, as sophisticated 3-D simulations of binary black hole mergers have finally begun to run beyond a single orbit. Simulations of multiple orbits are needed for accurate waveforms; however, Einstein's nonlinear equations pose such significant challenges that until the past year, all simulations would crash before completing a full orbit due to numerical instabilities.
A simulation performed by Caltech's Frans Pretorius using TACC's 1024-processor Linux cluster, Lonestar, was the first full inspiral-and-merger simulation to last longer than an orbit. More recently, UT Brownsville and NASA Goddard Space Flight Center have extended these results using new "gauge conditions" for the coordinates of the simulation. These developments were featured in a recent science news article (R. Cowen, "Crash: Ripples of space-time debut in black hole simulations," Science News 169, 2006).
Almost all of these simulations, however, neglect the significant role that the black holes' spin will have on the evolution of the system. This current work by Hawley, Vitalo and Matzner aims to provide greater insight into the role of spin for binary black hole systems.
The Role of "Spin" in Black Hole Interactions
According to general relativity, the rotation of an object can have a gravitational effect on the object's surroundings, in addition to the usual gravitational attraction due to the object's mass. The latter, mass-driven attraction is dominant, so two objects always attract one another, but in the case of two spinning masses, their spins can provide either an additional attraction or a small repulsion to the overall gravitational interaction. In this way, two spinning black holes can be loosely compared to a pair of magnets, which repel each other when their north poles face one another and attract each other when opposite poles are near each other. The actual interactions of spinning black holes are more complicated than those of magnets, and the precise angular dependence of the spin-spin effects can be computed only analytically in perturbative asymptotic regimes or, in general, in full numerical simulations.
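A rough way to see how the sign of the interaction flips with orientation is to evaluate the dipole-dipole-style angular factor that the leading-order spin-spin coupling shares with the magnet analogy. The sketch below does only that; the overall sign and prefactor of the gravitational interaction (and hence which orientation is attractive) depend on conventions the article does not spell out, so the numbers are purely illustrative.

```python
# Illustrative sketch: the dipole-dipole-style angular factor governing how a
# spin-spin coupling changes sign with orientation. The overall sign and
# prefactor of the gravitational interaction are deliberately omitted.
import numpy as np

def angular_factor(s1, s2, n_hat):
    """S1.S2 - 3 (S1.n)(S2.n), the usual dipole-dipole angular structure."""
    s1, s2, n_hat = map(np.asarray, (s1, s2, n_hat))
    return s1 @ s2 - 3.0 * (s1 @ n_hat) * (s2 @ n_hat)

n = np.array([1.0, 0.0, 0.0])   # unit vector along the line joining the holes
cases = {
    "both spins along the separation":     ([1, 0, 0], [1, 0, 0]),
    "spins opposed along the separation":  ([1, 0, 0], [-1, 0, 0]),
    "both spins perpendicular, parallel":  ([0, 0, 1], [0, 0, 1]),
    "both spins perpendicular, opposed":   ([0, 0, 1], [0, 0, -1]),
}
for name, (s1, s2) in cases.items():
    print(f"{name:38s} factor = {angular_factor(s1, s2, n):+.1f}")
```

The factor changes both sign and magnitude as the spins are reoriented, which is the angular dependence the CfR study maps out with full numerical initial data.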
Using 513^3 grid points for high-resolution simulations on TACC's Lonestar system, the present study was unable to accurately confirm the separation dependence of the spin-spin effects. A new mesh-refined version of the code is nearly ready for production and should treat these divergent length scales much more efficiently than the unigrid code, making it possible to study the separation dependence accurately. Solving Einstein's tensor equations requires storage for the sixteen grid functions, and the multigrid method adds eight more. Nearly all of these must be defined at as high a resolution as possible, both to resolve the black holes and to keep the outer boundaries far away. At 513^3 grid points, this implies about 32 gigabytes of RAM for storage. The problem maps well onto the distributed-memory architecture of the Lonestar cluster, and initial runs each used 32 processors. The original version of this research code was ready for production in September 2005 and was deployed on Lonestar; throughout the project, TACC provided technical support, recommending ways to use the system effectively and helping to debug application issues as they arose during deployment.
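A back-of-the-envelope check of that memory figure is easy to write down. The sketch below simply multiplies out the grid points, the 16 + 8 grid functions mentioned above, and 8 bytes per double-precision value; it lands somewhat below the quoted 32 gigabytes, the remainder presumably going to ghost zones and temporary work arrays, which are not counted here.

```python
# Back-of-the-envelope memory estimate for the unigrid run described above:
# 24 double-precision grid functions on a 513^3 grid. Real codes also need
# ghost zones and scratch arrays, which this rough count ignores.
points = 513 ** 3
grid_functions = 16 + 8        # Einstein tensor functions + multigrid extras
bytes_per_value = 8            # double precision

total_bytes = points * grid_functions * bytes_per_value
print(f"approx. {total_bytes / 2**30:.1f} GiB total")
print(f"approx. {total_bytes / 2**30 / 32:.2f} GiB per processor on 32 ranks")
```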
Lonestar is now on its way to becoming one of the most powerful supercomputers in the world. The Dell Linux cluster, which is being upgraded to 1,300 Dell PowerEdge 1955 blade servers, will provide a theoretical peak performance of more than 55 teraflops once the system reaches full production status on October 1. The latest dual-core Intel Xeon processor architecture, with increased floating-point and memory performance, combined with a high-speed InfiniBand interconnect, will improve the performance and scalability of applications that run on Lonestar.
"The upgrade to Dell PowerEdge processors will prove almost a factor 10 in throughput," said Richard Matzner. "This will mean the ability to simulate a much broader range of spin-spin configurations, and will allow much higher resolution and much higher precision simulations for the specific data that suggest the most interesting physics."
Future Results from CfR
The multi-resolution version of the code is nearly ready for production. It should allow researchers to resolve the black holes better and place the outer boundaries farther out (and the black holes farther apart), all while using a fraction of the current memory. This will translate into fewer processors needed, shorter queue times and greater scientific throughput.
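The memory argument behind this is straightforward. The sketch below compares a single uniform 513^3 grid with a hypothetical stack of nested refinement boxes that keep the finest spacing only near the holes; the level count and box sizes are invented for illustration and are not taken from the CfR code.

```python
# Illustrative comparison (invented box sizes): memory for one uniform grid at
# the finest spacing versus nested refinement boxes that confine the finest
# spacing to the region near the black holes. Counts grid points only.
GRID_FUNCTIONS = 24
BYTES_PER_VALUE = 8

def gib(points):
    return points * GRID_FUNCTIONS * BYTES_PER_VALUE / 2**30

# Unigrid: the whole domain at the finest spacing
unigrid_points = 513 ** 3

# Hypothetical nesting: each level covers half the extent of the one below it
# at twice the resolution, so every level carries the same point count.
levels = 4
points_per_level = 129 ** 3
nested_points = levels * points_per_level

print(f"unigrid : {gib(unigrid_points):6.1f} GiB")
print(f"nested  : {gib(nested_points):6.1f} GiB  ({levels} levels of 129^3)")
```

Even this crude nesting cuts the footprint by more than an order of magnitude, which is why the mesh-refined code should need far fewer processors per run.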
This initial-data code will then be interfaced with an evolution code for the full numerical simulation of binary black hole inspiral and merger. The evolution code is being developed by other members of Matzner's group, postdoc Andrea Nerozzi and graduate student Paul Walter; this effort is another significant user of TACC's resources and should enter production soon.
This research was supported through TACC allocations A-phaz and TG-PHY050037T, NSF grant PHY0354842, and by NASA grant NNG04GL37G. Portions of this work were conducted at the Laboratory for High Energy Astrophysics, NASA/Goddard Space Flight Center, Greenbelt, Maryland, with support from the Universities Space Research Association. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing computational and storage resources that have contributed to the research results reported within this paper: http://www.tacc.utexas.edu
Source: Texas Advanced Computing Center
For the complete article, including additional graphics, visit http://www.tacc.utexas.edu/research/users/features/spin.php.