February 01, 2013
SAN DIEGO, Calif., Feb. 1 – Dr. Robert P. Harkness, a computational astrophysicist with the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, died on Sunday, January 27, after a brief bout with cancer. He was 56.
Harkness joined SDSC in 2001 as a member of SDSC’s Laboratory for Computational Astrophysics. With more than 30 years of experience in computational science and high-performance computing, he focused much of his research on the dynamics of exploding stars (novae and supernovae), but also specialized in writing new applications that allowed researchers worldwide to perform ever-larger computer simulations. A native of the United Kingdom, Harkness received his D.Phil. from Oxford University in 1981.
As a computational astrophysicist, Harkness was at the forefront of new programming models and algorithms, and he shared his extensive experience widely. At SDSC, he was a leading developer of Enzo, an advanced, three-dimensional, time-dependent astrophysics code capable of simulating the evolution of the universe from first principles, and one of the most efficient high-performance codes in the astronomy community. Harkness also led the development of the petascale and hybrid versions of Enzo, and conducted the largest-scale Enzo cosmology simulations on National Science Foundation (NSF) petascale compute systems.
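(A brief technical aside: “hybrid” in this context means combining distributed-memory MPI parallelism across compute nodes with shared-memory OpenMP threading within each node. The short C sketch below illustrates that general pattern only; it is not taken from the Enzo source, and names such as LOCAL_CELLS and density are hypothetical.)

    /* Minimal sketch of the hybrid MPI+OpenMP pattern used by codes like
       Enzo: MPI distributes the simulation domain across nodes, while
       OpenMP threads sweep the grid cells owned by each rank.
       Illustrative only; not from the actual Enzo source. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define LOCAL_CELLS 1000000  /* cells owned by this rank (hypothetical) */

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* Request thread support so OpenMP threads can coexist with MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        static double density[LOCAL_CELLS];
        double local_sum = 0.0, global_sum = 0.0;

        /* Node-level parallelism: OpenMP threads update the local cells */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = 0; i < LOCAL_CELLS; i++) {
            density[i] = 1.0;          /* stand-in for a real hydro update */
            local_sum += density[i];
        }

        /* Cluster-level parallelism: MPI combines results across ranks */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d threads=%d total mass proxy=%g\n",
                   size, omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }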
“Robert’s extensive experience, knowledge, and eagerness to push the technological boundaries made him a stalwart of the entire high-performance computing community, not just SDSC,” said SDSC Director Michael Norman, Harkness’ colleague and head of SDSC’s Laboratory for Computational Astrophysics. “As one of the first researchers to use each new system as it came online, he ‘stress tested’ every major supercomputer, and ultimately, every supercomputer center in the U.S. benefited from his contributions. He was instrumental in the push from terascale to petascale computing, and was what I would call a supercomputer power-user – always climbing the power curve for Moore’s Law and massive parallelism.”
Harkness’ work allowed cosmologists to tackle computational problems 2,000-fold larger and more complex than those of just 15 or so years earlier. By 2010, he was performing computational runs using approximately 93,000 cores on the Kraken Cray XT5 system at the National Institute for Computational Sciences (NICS), up from only 512 cores in 1994. Just weeks before his death, he was running even larger simulations on the new 11-petaflops Blue Waters system at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. He received several allocation awards for computer time under the U.S. Department of Energy’s annual Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program to continue his research.
“As a pioneer of the use of supercomputers in astrophysics, specifically in radiative transfer and radiation hydrodynamics research, Robert was one of only a handful of people worldwide who could make the claim of having used virtually every single model of supercomputer, from the earliest Cray machines to the very first petascale machine,” added Norman. “He was one of a rare breed of scientists with in-depth knowledge of both domain science and supercomputing technologies.”
During his last two years, Harkness divided his research time between SDSC and the National Institute for Computational Sciences (NICS) at Oak Ridge National Laboratory in Tennessee, where he worked on advanced application development targeting prototype hardware for the Intel Xeon Phi coprocessor.
"Robert's work with porting and optimizing the Enzo cosmology code to the Intel Xeon Phi coprocessor played a major role in the overall success of the first two years of the Beacon Project, an ongoing research project exploring the impact of emerging computer architectures on computational science and engineering,” said Glenn Brook, director of the Application Acceleration Center of Excellence at NICS. “His exceptional understanding of supercomputing architectures and domain science, along with their interactions, allowed Robert to accomplish things that most would not even attempt, and his extensive experience was a tremendous asset to all those with whom he worked. His open, honest council will be deeply missed."
A series of videos describing Harkness’ research and computer simulations can be viewed at
Prior to joining SDSC, Harkness spent many years at The University of Texas at Austin, joining its Department of Astronomy in 1984. From 1986 to 1999, he supported the university’s supercomputing organizations that preceded the Texas Advanced Computing Center (TACC, formed in 2001), including the Center for High Performance Computing (CHPC). From 1986 until early 1990, he held a joint appointment with the Astronomy Department.
In 1999, Harkness joined the Scientific Computing Division of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. In addition, Harkness had been a member of the National Science Foundation’s (NSF) National Resource Allocation Committee since December 1998.
In accordance with Harkness’ wishes, no formal service is planned. There will be an informal gathering open to all at the La Jolla Rock Bottom Restaurant beginning at 5 p.m. on February 16, 2013. Please contact Billiekai Boughton at email@example.com or 858 822-5450 to RSVP.
As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and all aspects of ‘big data’, which includes data integration, performance modeling, data mining, software development, workflow automation, and more. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. With its two newest supercomputer systems, Trestles and Gordon, SDSC is a partner in XSEDE (Extreme Science and Engineering Discovery Environment), the most advanced collection of integrated digital resources and services in the world.