July 20, 2007
As we enter the petascale era, there will be a number of challenges to overcome before applications can truly take advantage of the enormous computational power that is becoming available. One of the most pressing of these challenges will be to design software programs that map well to petascale architectures, allowing the community to solve previously unattainable scientific and business problems.
For the last 20 years, performance improvements have been delivered by increasing processor frequencies. In the petascale era, processor frequencies will no longer increase, due to fundamental atomic limits on our ability to shrink features in silicon. Moore's Law will continue, but performance increases will now come through parallelism: petascale systems will deliver performance by deploying hundreds of thousands of individual processor cores. Multiple cores will be assembled into individual chips, and tens of thousands of chips will then be assembled to deliver the petascale performance that Moore's Law predicts will arrive in the next few years.
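To see why performance at this scale must come from parallelism, and why serial bottlenecks become so punishing, Amdahl's law (a standard back-of-the-envelope model, not cited in the article) gives a quick estimate:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Speedup on `cores` cores when only `parallel_fraction`
    of a program's runtime can be parallelized (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even on 100,000 cores, a program that is 99% parallel tops out
# near a 100x speedup: the 1% serial part dominates everything.
print(round(amdahl_speedup(0.99, 100_000), 1))
```

This is why petascale programming forces the architecture into the algorithm design: any serial section, however small, caps the benefit of hundreds of thousands of cores.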
Programming approaches for multicore chips and parallel multicore systems are well understood in principle, but the programming challenge that arises is very complex. When developing code for a single processor, a programmer can focus on the algorithms and, to a first approximation, ignore system architecture issues during program design. Compilers for single-processor programming are well developed and mature, and do a very good job of mapping a program to the system architecture on which it is designed to run.
When programming for a parallel multicore processor architecture, a programmer is forced to manage algorithmic and system architectures together. The parallel system architecture requires the programmer to decide how to distribute data and work among the parallel processing elements at the same time as the algorithm is being designed. Throughout the design process, the parallel programmer must make many critical decisions that have a huge impact on program performance and capability: how many chips and cores will be required, how data will be distributed and moved across these elements, and how work will be distributed. On parallel systems, programming has changed from a routine technical effort into a creative art form.
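The decisions described above can be sketched in miniature. The example below is an illustrative choice of decomposition and reduction, not a prescription from the article; threads stand in for what on a petascale machine would be MPI ranks spread across thousands of nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker owns one block of the data (data distribution)
    # and runs the same computation on it (work distribution).
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Decomposition decision: contiguous blocks, one per worker.
    # On a real system this choice also fixes communication costs.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Reduction step: combine the per-worker partial results.
        return sum(pool.map(partial_sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(1000))))
```

Even in this toy case, the programmer has made architectural choices (block size, worker count, reduction order) that are invisible in the serial version of the same loop.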
The opportunity provided by leveraging these big parallel machines is enormous. It will be possible to answer some really hard questions about complex systems across all spheres of human activity. Examples include a better understanding of the processes that drive global warming, insight into how the worldwide economy functions, and a full understanding of the chemical and biological processes that occur within the human body. Right now, we have the computing power to address these questions; we just don't have the programs, because they are so complex and so difficult to develop, test and validate.
On average, it takes two to four years to develop a program to simulate just one human protein. The challenge the scientific community now faces is finding the people who understand how to write complex programs for petascale architectures. There is an obvious Catch-22 involved: until more of these programs start running on parallel machines and show results, it will be hard to justify the investment needed to build a whole infrastructure from scratch. This may include Ph.D. programs at universities, recruitment of specialists, and the build-up of resources.
Although a major shift to parallelism is beginning, the cost of entry is high. Right now, parallelism is in the early adopter phase. Before it shifts to the mainstream commercial phase, the community will need to see a clear cost/benefit case that brings everyone along. To advance this effort in the U.S., the Scientific Discovery through Advanced Computing (SciDAC) program is establishing nine Centers for Enabling Technologies to focus on specific challenges in petascale computing. These multidisciplinary teams are led by national laboratories and universities and focus on meeting the specific needs of SciDAC applications as researchers move toward petascale computing. The centers will specialize in applied mathematics, computer science, distributed computing and visualization, and will be closely tied to specific science application teams.
In addition to scientific questions, industry applications could help drive the development of the code and lead to mainstream adoption. One example is the energy and oil/petroleum industry: petascale computing may improve petroleum reserve management, nuclear reactor design, and nuclear fuel reprocessing. Another is weather: as demand grows for more precise, short-term weather prediction, microclimate modeling comes into play.
In the past, the computer science community tended to focus on the hardware and system software, but left the development of applications to others. The trend now is that programmers need to develop applications so that they are tightly coupled to the systems they will run on. One needs to design the program for the system, an approach that has been anathema for many years.
About the Author
Jim Sexton is the lead for Blue Gene Applications at IBM's T. J. Watson Research Center in Yorktown Heights, NY. He received his Ph.D. in theoretical physics from Columbia University. He was a Research Fellow at Fermi National Accelerator Laboratory and then at the Institute for Advanced Study in Princeton. Before joining the staff at the T. J. Watson Research Center, he was a professor at Trinity College in Dublin. His areas of interest include high performance computing, systems architectures, HPC systems software, and high energy theoretical physics.