September 22, 2010
Contract focuses on extending Lustre monitoring functionality and testing PCI-based solid state devices in LLNL's large test cluster Hyperion
DANVILLE, Calif., Sept. 22 -- Whamcloud, a venture-backed company formed from worldwide high-performance computing (HPC) storage industry veterans, announced today the signing of its first national laboratory customer, Lawrence Livermore National Laboratory (LLNL).
Whamcloud and LLNL will work together on the Lustre Monitoring Tool (LMT), a debugging tool for administrators in supercomputing environments, as well as on Lustre performance testing utilizing Solid State Devices (SSDs) on the Hyperion Data Intensive Testbed, LLNL's large test cluster. That testbed has been measured at 46M IOPS with 4K blocks and over 500GB/s with 1MB blocks using local file systems. SSDs are widely expected to have a significant impact on performance at scale, and these at-scale tests will be a first for the Lustre global parallel file system.
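For context, the small-block figure translates directly into aggregate bandwidth; the quick check below is simple arithmetic on the numbers quoted above, not a new measurement:

    # Back-of-the-envelope check on the quoted small-block figure (pure arithmetic,
    # not a new measurement): 46M IOPS at 4 KiB per operation.
    iops = 46e6          # I/O operations per second quoted above
    block_bytes = 4096   # 4 KiB block size quoted above
    print("%.0f GB/s" % (iops * block_bytes / 1e9))   # roughly 188 GB/s of small-block throughput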
"We've been long-time supporters of Lustre and are extremely happy to see that Whamcloud is continuing to support the Lustre on Linux community," said Mark Seager, Livermore's principal investigator for ASCI platforms. "LLNL continues to be at the forefront of developing HPC technologies and pushing the envelope for data intensive computing environments that support our national security mission."
"We're excited to be working with Livermore, a preeminent supercomputing site, as this will directly benefit the Lustre community. It's an opportunity to extend Lustre functionality for HPC administrators with an incredibly knowledgeable and talented partner," said Brent Gorda, CEO of Whamcloud. "No other at-scale tests have been performed like this, and the results will be made available for everyone to use."
LMT watches system hardware and current processes and presents that information to the administrator in an easily digestible format, which is especially useful for debugging. Whamcloud will work with LLNL to extend LMT to Lustre 2.0, the most recent version of Lustre. These improvements will be included in Whamcloud's testing and build process and will be made available worldwide.
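As a rough illustration of the kind of data such a monitoring tool aggregates, the sketch below scrapes per-OST counters from the Lustre server-side /proc interface. The path and field handling are assumptions about a typical Lustre object storage server, not LMT's actual implementation:

    #!/usr/bin/env python
    # Illustrative monitoring sketch (not LMT itself): collect per-OST I/O counters
    # from the conventional Lustre OSS /proc location. Exact paths and fields vary
    # by Lustre version, so treat this as an assumption-laden example.
    import glob

    def read_ost_stats():
        """Return {ost_name: {counter_name: sample_count}} scraped from /proc."""
        stats = {}
        for path in glob.glob("/proc/fs/lustre/obdfilter/*/stats"):
            ost = path.split("/")[-2]
            counters = {}
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    # Data lines look like "<name> <count> samples ..."; skip headers.
                    if len(fields) >= 2 and fields[1].isdigit():
                        counters[fields[0]] = int(fields[1])
            stats[ost] = counters
        return stats

    if __name__ == "__main__":
        for ost, counters in sorted(read_ost_stats().items()):
            print(ost, counters.get("read_bytes", 0), counters.get("write_bytes", 0))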
LLNL's large test cluster Hyperion has recently added a large number of PCI-based SSDs for storage to create a data intensive testbed. SSDs are seen as the future of storage in HPC. In this challenging test environment, with minimal latency and high bandwidth between devices, Whamcloud and LLNL will run a wide range of performance tests and will incorporate improvements directly into Lustre 2.0. This will help extend Lustre onto large petaflop systems like the upcoming 20 petaFLOP/s Sequoia system at LLNL, a third-generation BlueGene system, which will require huge data bandwidths.
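As a hedged illustration of the kind of small-block measurement involved, the sketch below times single-threaded 4 KiB random reads against a file. The file path and I/O count are placeholders; this is not the Hyperion test plan or a Lustre-specific benchmark:

    #!/usr/bin/env python
    # Minimal 4 KiB random-read timer -- a sketch of small-block SSD measurement.
    # PATH and COUNT are hypothetical placeholders; real tests would use many
    # clients, direct I/O, and much larger working sets.
    import os, random, time

    PATH = "/mnt/testfs/target.bin"   # placeholder file on the filesystem under test
    BLOCK = 4096                      # 4 KiB, matching the small-block figure above
    COUNT = 100000                    # number of reads to issue

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size       # file must be larger than one block
    offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK for _ in range(COUNT)]

    start = time.time()
    for off in offsets:
        os.pread(fd, BLOCK, off)      # one aligned 4 KiB read per iteration
    elapsed = time.time() - start
    os.close(fd)

    print("%.0f reads/s (single client, buffered I/O)" % (COUNT / elapsed))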
About Lawrence Livermore National Laboratory
Founded in 1952, Lawrence Livermore National Laboratory is a national security laboratory, with a mission to ensure national security and apply science and technology to the important issues of our time. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy's National Nuclear Security Administration.
About Whamcloud
Whamcloud is a venture-backed company formed from worldwide high-performance computing (HPC) storage industry veterans. It is focused on enabling unprecedented application scaling and information insight through the evolution of HPC storage technologies, in collaboration with the world's most scalable computing centers. For more information, visit http://www.whamcloud.com.