October 06, 2006
Oak Ridge National Laboratory's (ORNL's) Cray XT3 supercomputer, known as Jaguar, has become the fastest system in the world for running the Princeton Plasma Physics Laboratory's (PPPL's) flagship code for studying plasma microturbulence in fusion reactors.
PPPL's Stephane Ethier recently succeeded in running the Gyrokinetic Toroidal Code (GTC) on 10,386 of Jaguar's 10,424 processing cores, advancing 5.4 billion particles per step per second. That performance is a 13 percent improvement over the previous record of 4.8 billion particles per step per second set on Japan's Earth Simulator.
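For context, the quoted improvement follows directly from the two throughput figures; the per-core rate below is a derived estimate, not a number from the original report:

\[
\frac{5.4 - 4.8}{4.8} = 0.125 \;\approx\; 13\%,
\qquad
\frac{5.4 \times 10^{9}\ \text{particles/step/s}}{10{,}386\ \text{cores}} \;\approx\; 5.2 \times 10^{5}\ \text{particles/step/s per core}.
\]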
Ethier noted that GTC is one of only a few U.S. codes to have been benchmarked on the Earth Simulator, where it ran on up to 4,096 processors.
Ethier said he is especially pleased with the efficiency with which the code ran on Jaguar's dual-core processors. "With regard to the increasing emphasis on multi-core architectures," he said, "GTC has demonstrated better than 95 percent efficiency on the second processor of each dual-core node in these runs."
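The article does not define how that efficiency was measured. One plausible reading, offered here as an assumption, is that enabling the second core of each node recovers at least 95 percent of the throughput an ideal second core would add:

\[
E_{2} \;=\; \frac{T_{\text{dual}} - T_{\text{single}}}{T_{\text{single}}} \;\ge\; 0.95,
\]

where \(T_{\text{single}}\) and \(T_{\text{dual}}\) denote per-node particle throughput with one and with both cores active.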
The Princeton researcher noted that the effort benefited from substantial collaboration with staff at ORNL's National Center for Computational Sciences (NCCS).
"PPPL is most grateful to the staff of NCCS and especially to Scott Klasky and Don Maxwell for their extraordinary supporting efforts, which helped enable the timely achievement of these highly productive runs," Ethier said.
The milestone puts scientists a step closer to accurately simulating plasma behavior in fusion reactors such as the proposed ITER reactor, currently a top priority of the U.S. Department of Energy's Office of Science. The ITER project aims to pass the fusion energy break-even point, the point at which the reactor produces more energy than is put into it.
PPPL chief scientist William Tang said the run on Jaguar reached extremely high statistical resolution, noting that the field of fusion simulation will continue to benefit as petascale computing systems become available.
"The ability to carry out such high-resolution calculations with associated very low noise levels enables better physics understanding of turbulent plasma behavior on realistic time scales characteristic of actual experimental observations," he said. "It holds great promise for accelerating the pace of greater scientific discovery at the petascale range and beyond."
Turbulence is believed to be the primary mechanism by which particles and energy escape the confining magnetic field of a doughnut-shaped fusion system, draining energy from the plasma. According to Tang, the design and operation of a reactor such as ITER must take this phenomenon into account. GTC is a three-dimensional code developed to study the dynamics of turbulence and the associated transport driven by temperature and density variations within the system.
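GTC belongs to the particle-in-cell (PIC) family of plasma codes, which is why throughput is quoted in particles per step per second: each step deposits particle charge onto a grid, solves for the self-consistent field, and then pushes every particle. The sketch below is a minimal one-dimensional electrostatic PIC step in Python, a toy illustration of that cycle; it is not GTC itself, which is a three-dimensional, MPI-parallel gyrokinetic code, and all names and parameters here are illustrative.

```python
# Minimal 1D electrostatic particle-in-cell (PIC) step, for illustration only.
# This is NOT GTC; it merely shows the deposit/solve/gather/push cycle that
# makes "particles per step" the natural throughput unit quoted above.
import numpy as np

def pic_step(x, v, ngrid=64, length=1.0, dt=0.1, charge=-1.0):
    """Advance particle positions x and velocities v by one PIC step."""
    dx = length / ngrid

    # 1. Deposit: accumulate particle charge onto the grid (nearest cell).
    cells = (x / dx).astype(int) % ngrid
    rho = np.bincount(cells, minlength=ngrid) * charge / dx
    rho -= rho.mean()  # neutralizing background charge

    # 2. Solve: Poisson equation -phi'' = rho via FFT (units with eps0 = 1).
    k = 2 * np.pi * np.fft.fftfreq(ngrid, d=dx)
    k[0] = 1.0  # avoid division by zero; the k=0 mode is zeroed below
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    efield = np.real(np.fft.ifft(-1j * k * phi_hat))  # E = -dphi/dx

    # 3. Gather: interpolate the electric field back to each particle.
    e_part = efield[cells]

    # 4. Push: update velocities and positions (leapfrog-style, mass = 1).
    v = v + charge * e_part * dt
    x = (x + v * dt) % length
    return x, v

# Example: advance 100,000 particles one step.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1.0, 100_000)
v = rng.normal(0, 0.01, 100_000)
x, v = pic_step(x, v)
```

Each call to pic_step updates every particle exactly once, so a figure like 5.4 billion particles per step per second counts these per-particle updates aggregated across the whole machine.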
Source: Oak Ridge National Laboratory