May 26, 2010
Hoisie, Kerbyson, Barker join PNNL to co-design high-performance computing hardware and the applications that run on it
RICHLAND, Wash., May 26 -- Computer scientist Adolfy Hoisie has joined the Department of Energy's Pacific Northwest National Laboratory to lead PNNL's high performance computing activities. In one such activity, Hoisie will direct a group of scientists designing supercomputers and their software applications simultaneously -- so all the components of a supercomputer can be optimized and focused on one kind of problem.
As director of PNNL's Center for Advanced Architectures for Extreme Scale Computing, Hoisie plans to tackle problems from a variety of scientific fields, from studying biological systems to understanding the electrical power grid. Some of these applications rely on the sheer computational power of supercomputers in the process of scientific discovery. In other areas, researchers amass so much data -- petabytes, where a single petabyte is a million billion bytes, roughly a million billion times the data in one character on a page -- that their supercomputers need more than just fast processors; they need to be able to shuttle that data around rapidly.
Most supercomputers, such as Cray's Jaguar at Oak Ridge National Laboratory in Tennessee, gain their fame from how fast they perform calculations -- their processing speed. But a speedy processor won't matter if the computer can't move data between memory and the hard drive fast enough, or if it can't handle the rivers of data streaming in from instruments taking measurements.
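How that balance plays out can be made concrete with a minimal sketch in C (illustrative only, not PNNL code), modeled on the well-known STREAM triad benchmark: each loop iteration performs just two arithmetic operations but moves 24 bytes of data, so runtime is set by memory bandwidth rather than by processing speed.

    /* Illustrative sketch, not PNNL code: a STREAM-style triad.
     * Each iteration does 2 floating-point operations but moves 24 bytes
     * (read b[i], read c[i], write a[i], with 8-byte doubles), so runtime
     * is set by memory bandwidth, not by raw processing speed. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 10000000L  /* 10 million elements, ~240 MB of traffic per pass */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        const double scalar = 3.0;
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];  /* 2 flops, 24 bytes of traffic */

        printf("a[0] = %f\n", a[0]);  /* keeps the loop from being optimized away */
        free(a); free(b); free(c);
        return 0;
    }

Doubling the processor's arithmetic rate would barely change this loop's runtime; doubling the memory bandwidth would nearly halve it.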
The data-intensive problems that PNNL researchers want to solve require a different emphasis in computational resources. But rather than build supercomputers and write software separately, Hoisie and two other computer scientists -- Darren Kerbyson and Kevin Barker -- will design the supercomputers and the applications that will run on them at the same time. Because most computers and software are designed independently, the scientists will also need to develop the tools to allow this co-design.
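The flavor of those tools can be suggested with a small analytical performance model, sketched here in C under invented assumptions: the time per simulation step is split into compute, memory, and network components, each governed by a machine parameter that co-designers could trade against the others. Every number below is made up for illustration; real models are calibrated against measurements of actual codes and machines.

    /* Hypothetical analytical performance model; all machine parameters
     * and workload counts are invented for illustration. A real model is
     * calibrated against measurements of actual codes and machines. */
    #include <stdio.h>

    int main(void) {
        /* Assumed machine parameters (invented) */
        double flops_per_sec = 100e12;  /* usable compute: 100 teraflops */
        double mem_bytes_sec = 2e12;    /* aggregate memory bandwidth    */
        double net_bytes_sec = 0.5e12;  /* interconnect bandwidth        */
        double net_latency   = 2e-6;    /* seconds per message           */

        /* Assumed cost of one simulation step (invented) */
        double flops     = 5e14;
        double mem_bytes = 2e13;
        double net_bytes = 1e12;
        double messages  = 1e6;

        double t_compute = flops / flops_per_sec;
        double t_memory  = mem_bytes / mem_bytes_sec;
        double t_network = net_bytes / net_bytes_sec + messages * net_latency;

        /* A simple additive model: total time is the sum of the parts.
         * Here memory traffic, not compute, dominates the step. */
        printf("compute %.2f s  memory %.2f s  network %.2f s  total %.2f s\n",
               t_compute, t_memory, t_network,
               t_compute + t_memory + t_network);
        return 0;
    }

In this invented example the memory term dominates, which is precisely the signature of data-intensive workloads: buying faster processors would barely move the total, while co-designing for more memory bandwidth would.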
"The complexity of extreme-scale supercomputing systems and applications is now comparable to that of the physical simulations they perform. The science of systems and applications designed for optimal performance is a grand challenge for high performance computing research," said Moe Khaleel, director of Computational Sciences and Mathematics at PNNL. "PNNL will now be at the forefront of these endeavors."
In addition, the bigger supercomputers grow in scale, the more power they consume. The team will also look at how performance and power intersect, and how they trade off against one another on extreme-scale systems and workloads. As part of the center, Kerbyson and Barker will collaborate with researchers at other national laboratories and universities.
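A rough sketch of that trade-off, again in C with invented numbers: assuming a processor's dynamic power grows roughly with the cube of its clock frequency while a compute-bound job's runtime shrinks only in proportion to the clock, the fastest setting is not necessarily the most energy-efficient one.

    /* Hypothetical power/performance trade-off. Assumes dynamic power
     * scales with the cube of clock frequency plus a fixed static draw,
     * and that a compute-bound job's runtime scales inversely with the
     * clock. All numbers are invented, not measurements. */
    #include <stdio.h>

    int main(void) {
        double base_freq   = 2.0e9;  /* reference clock, 2 GHz (invented)    */
        double base_dyn_w  = 150.0;  /* dynamic watts at the reference clock */
        double static_w    = 50.0;   /* frequency-independent power draw     */
        double base_time_s = 100.0;  /* job runtime at the reference clock   */

        for (double s = 0.6; s <= 1.01; s += 0.1) {
            double power  = static_w + base_dyn_w * s * s * s;
            double time   = base_time_s / s;  /* compute-bound assumption */
            double energy = power * time;     /* joules = watts x seconds */
            printf("%.1f GHz: %6.1f W  %6.1f s  %7.0f J\n",
                   s * base_freq / 1e9, power, time, energy);
        }
        return 0;
    }

In the range shown, slowing the clock lowers total energy even though the job takes longer; push the clock low enough, though, and the fixed static draw makes energy climb again. That is the kind of intersection between performance and power the team will study.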
The researchers come to PNNL from DOE's Los Alamos National Laboratory in Los Alamos, N.M. There, Hoisie earned an international reputation in performance analysis and engineering of large-scale parallel computers. Hoisie won the Gordon Bell Prize from the Association for Computing Machinery in 1996, an honor given for achievement in parallel computing.
Kerbyson will lead basic research at the center and will also serve as chief scientist for PNNL's Extreme Scale Computing Initiative, which will explore how to tackle analysis of extremely large data sets. He specializes in modeling and analyzing how well software performs. He holds a doctorate in computer science from the University of Warwick in the United Kingdom.
Barker has extensive experience developing tools for modeling the performance of extreme-scale hardware and software. He has also developed applications for parallel computing. He holds a doctorate in computer science from the College of William and Mary in Williamsburg, Va.
Source: Pacific Northwest National Laboratory