November 19, 2008
AUSTIN, Texas, Nov. 18 -- The National Nuclear Security Administration's (NNSA) Lawrence Livermore National Laboratory has teamed with 10 computing industry leaders to accelerate the development of powerful next-generation Linux clusters in a project dubbed Hyperion.
Hyperion brings together Dell, Intel, Supermicro, QLogic, Cisco, Mellanox, DDN, Sun, LSI and Red Hat to create a large-scale testbed for high-performance computing technologies. These technologies are critical both to NNSA's work to maintain the aging U.S. nuclear weapons stockpile without underground nuclear testing and to industry's ability to make petaFLOP/s (quadrillion floating point operations per second) computing and storage more accessible for commerce, industry and research and development.
"Hyperion represents a new way of doing business. Collectively we are building a system none of us could have built individually," said Mark Seager, LLNL project leader. "The project will advance the state-of-the-art in a cost-effective manner, benefitting both end users, such as the national security labs, and the computing industry, which can expand the market with proven, easy to deploy large-and small-scale Linux clusters."
The goal of the project is to provide a development, testing and scaling environment for new cluster technologies and infrastructure critical to the mission requirements of NNSA's Advanced Simulation and Computing (ASC) program. This includes testing new hardware and software technologies and forming long-term relationships to ensure continuity in the development of new technologies for ever-larger systems.
Important technologies for scaling up computing clusters include the OpenFabrics Enterprise Distribution (OFED) open source InfiniBand software stack; the Lustre open source parallel file system; and the open source operating system software and cluster tools used by the Tri-Lab Capacity Clusters, which serve researchers at Lawrence Livermore, Los Alamos and Sandia national labs. In addition, Hyperion will help lay the foundation for future petascale ASC computing platforms by facilitating the development of processors, memory, networks, storage and visualization.
The first half of Hyperion is now online and being used by the collaboration. When completed in March 2009, the Hyperion cluster, located at Livermore, will have at least 1,152 nodes and 9,216 cores, a peak of roughly 100 teraFLOP/s, more than 9 TB of memory, an InfiniBand 4x DDR interconnect and access to more than 47 GB/s of RAID disk bandwidth. The Hyperion testbed includes two Storage Area Networks (SANs): one based on "Data Center Ethernet" and the other based on InfiniBand. Both SANs are currently deployed using a unique TorMesh topology. The system is the largest testbed of its kind in the world and will give the Hyperion collaborators an unmatched opportunity to develop and test hardware and software technologies at unprecedented scale.
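As a rough, back-of-the-envelope check (not part of the announcement itself), the quoted node, core, memory and peak figures imply the per-node and per-core ratios computed in the short Python sketch below; it assumes only the totals stated above and treats "more than 9 TB" and "about 100 teraFLOP/s" as approximate values.

    # Derived ratios from the stated Hyperion specs:
    # 1,152 nodes, 9,216 cores, ~100 teraFLOP/s peak, >9 TB of memory.
    nodes = 1152
    cores = 9216
    peak_tflops = 100      # approximate peak, per the announcement
    memory_tb = 9          # "more than 9 TB" -- treated here as a lower bound

    cores_per_node = cores / nodes                  # 8 cores per node
    gflops_per_core = peak_tflops * 1e3 / cores     # ~10.9 GFLOP/s per core
    gb_per_core = memory_tb * 1e3 / cores           # ~1 GB of memory per core

    print(f"{cores_per_node:.0f} cores/node, "
          f"{gflops_per_core:.1f} GFLOP/s per core, "
          f"{gb_per_core:.1f} GB of memory per core")

In other words, the announced configuration works out to about 8 cores and 8 GB of memory per node, with roughly 1 GB of memory per core.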
Hyperion helps fulfill U.S. Department of Energy/NNSA goals to provide state-of-the-art computing capabilities for national security; advance high-performance scientific computing for meeting energy, climate and other national challenges; enable scientific discovery in basic science; and enhance U.S. competitiveness in high performance computing.
About Lawrence Livermore National Laboratory
Founded in 1952, Lawrence Livermore National Laboratory is a national security laboratory, with a mission to ensure national security and apply science and technology to the important issues of our time. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy's National Nuclear Security Administration.
Laboratory news releases and photos are also available at http://publicaffairs.llnl.gov.
Source: Lawrence Livermore National Laboratory