August 07, 2012
LIVERMORE, Calif., Aug. 6 -- The U.S. Department of Energy's Lawrence Livermore National Laboratory issued the following news release:
Under an initiative called FastForward, the Department of Energy's (DOE) Office of Science and the National Nuclear Security Administration (NNSA) have awarded $62 million in research and development (R&D) contracts to five leading high performance computing companies to accelerate the development of next-generation supercomputers vital to national defense, scientific research, energy security, and the nation's economic competitiveness.
AMD, IBM, Intel, Nvidia and Whamcloud received awards to advance "extreme scale" computing technology, with the goal of funding innovative R&D of the critical technologies needed to deliver next-generation capabilities within a reasonable energy footprint. DOE missions require exascale systems that operate at a quintillion (10^18) or more floating point operations per second. Such systems would be 1,000 times faster than a 1-petaflop (quadrillion floating point operations per second) supercomputer. Currently, the world's fastest supercomputer -- the IBM BlueGene/Q Sequoia system at Lawrence Livermore National Laboratory (LLNL) -- clocks in at 16.3 petaflops.
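For a sense of those scales, here is a minimal back-of-the-envelope sketch in Python that uses only the figures quoted above (the variable names are illustrative, not from the release):

```python
# Back-of-the-envelope comparison of the performance scales quoted above.
PETAFLOP = 1e15   # floating point operations per second
EXAFLOP = 1e18    # one quintillion floating point operations per second

sequoia_flops = 16.3 * PETAFLOP   # Sequoia's reported performance

print(f"Exaflop vs. 1 petaflop: {EXAFLOP / PETAFLOP:,.0f}x")       # 1,000x
print(f"Exaflop vs. Sequoia:    {EXAFLOP / sequoia_flops:,.1f}x")  # ~61x
```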
"The challenge is to deliver 1,000 times the performance of today's computers with only a fraction more of the system's energy consumption and space requirements," said William Harrod, division director of research in DOE Office of Science's Advanced Scientific Computing Research program.
Contract awards were in three high performance computing (HPC) technology areas: processors, memory, and storage and input/output (I/O) -- the communication between computer processing systems and outside networks. The total value of the contracts is $62.5 million and covers a two-year period of performance.
The FastForward program, funded by DOE's Office of Science and NNSA, is managed by LLNL on behalf of seven national laboratories: Lawrence Livermore, Lawrence Berkeley, Los Alamos, Sandia, Oak Ridge, Argonne and Pacific Northwest. Technical experts from the participating national laboratories evaluated and helped select the proposals and will work with the selected vendors on co-design.
"Exascale computing will be required to fully assess the performance of our nation's nuclear stockpile in all foreseeable situations without returning to nuclear testing," said Bob Meisner, head of NNSA's Advanced Simulation and Computing (ASC) program. "The insight that comes from simulations is also vital to addressing nonproliferation and counterterrorism issues, as well as informing other national security decisions."
The FastForward initiative is intended to speed up and influence the development of technologies that companies are pursuing for commercialization, ensuring these products include the features that DOE Office of Science and NNSA laboratories require for their research.
"Recognizing that the broader computing market will drive innovation in a direction that may not meet DOE mission needs in national security and science, we need to ensure that exascale systems will meet the extreme requirements in computation, data movement and reliability that DOE applications require," Harrod said.
Under the contract awards, AMD is working on processors and memory for extreme-scale systems; IBM is also working on memory for extreme-scale systems; Intel Federal is working on energy-efficient processor and memory architectures; Nvidia is working on low-power processor architecture for exascale computing; and Whamcloud is leading a group working on storage and I/O.
In an era of increasing global competition in HPC, the development of exascale computing capabilities is widely seen as a key to sustaining the innovation edge in the science and technology that underpin national and economic security.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, provides open scientific user facilities -- including some of the world's most powerful supercomputers -- as a resource for the nation, and is working to address some of the most pressing challenges of our time.
Source: Lawrence Livermore National Laboratory