April 01, 2011
The Department of Energy's Office of Science website currently features an article taking a detailed look at how advances in high-performance computing have brought the power of simulation to bear on almost every facet of the scientific landscape. Dr. Steven E. Koonin, Under Secretary for Science, examines the link between computer simulation and scientific progress, citing a variety of real-world disciplines that have benefited from significant, sustained progress in the computational domain.
Koonin explains how the DOE makes supercomputing resources available for both scientific and industrial simulation endeavors. Last fall, Koonin's office held a Simulations Summit in Washington, which brought together more than 70 leaders from academia, industry, government, and national research laboratories to discuss how science and technology policies affect the nation's ability to compete on a global playing field. Keynote speaker Secretary Chu emphasized that "the DOE strategy should be to make simulation part of everyone's toolbox."
The Department of Energy's Office of Science (SC) is addressing that need by pushing the boundaries of computing and simulation to advance key science, math, and engineering challenges facing the nation. SC makes advanced supercomputers available and supports high-fidelity simulations that give scientists the power to analyze theories and validate experiments that are dangerous, expensive or impossible to conduct. Scientific simulations are used to understand everything from stellar explosion mechanisms to the quarks and gluons that make up a proton. They can tell us how blood flows through the body and how to make a more efficient combustion engine. And they can do much more.
Koonin goes on to list some of the merits of a fully supported national supercomputing strategy:
Improvements in high-performance computing benefit all computer users, not just those who use these world-class machines. Hardware innovation to drive down the energy consumption of processors and memory for exascale machines will be directly applicable to commodity electronics, making portable computers and smart phones much more powerful. Private sector consumers of high-performance computing use simulation to accelerate and reduce the cost of innovation in the design and manufacturing of their products, in applications stretching from advanced materials for engines and airplane wings to advanced chemicals for household products to the design of newer and faster consumer electronics.
More and more, scientific breakthroughs are predicated on continued, steady progress in computing. As Koonin notes, the US still leads the world in computing. Today's supercomputers are one trillion times faster than their 1950s counterparts, and more than half of the TOP500 systems originate in the US. Koonin credits the actions of the Department of Energy for much of this progress, but warns that continued government support is necessary to sustain the current trajectory: "A golden moment has presented itself to continue U.S. leadership in simulations," Koonin remarks, "but concerted action and continued DOE leadership are necessary to turn this opportunity into reality."
Full story at Department of Energy's Office of Science