December 21, 2011
New five-year agreement for technology development covers HPC, data storage, cyber security, cloud computing, large-scale analytics, materials science, and data sharing and mobility
LOS ALAMOS, New Mexico, Dec. 21 — Los Alamos National Laboratory today announced the signing of a new Umbrella CRADA (Cooperative Research and Development Agreement) with EMC Corporation. Together, LANL and EMC will enhance, design, build, test, and deploy new cutting-edge technologies in an effort to meet some of the nation’s most difficult information technology challenges. The CRADA covers seven general categories of technology development in which LANL and EMC will collaborate over the next five years: high-performance computing (HPC), data storage, cyber security, data sharing and mobility, cloud computing, large-scale analytics, and materials science.
The first Project Task Statement (PTS) under the Umbrella CRADA is focused on support for the U.S. Department of Energy’s Exascale Initiative and other data-intensive programs. The LANL and EMC collaboration for the Exascale Initiative is aimed at boosting high-performance computing to the exaflop level, a thousand times faster than current petascale capabilities. The project involves the design and development of an open-source, extremely scalable data-management middleware library called the Parallel Log Structured File System (PLFS), which will be used on a range of computing platforms, from small clusters to the largest supercomputers in the world.
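The core idea behind a log-structured approach like PLFS is to turn many processes writing into one shared file (a pattern that performs poorly on parallel file systems) into each process appending to its own data log, with an index that maps logical file offsets back to log positions. The following is a minimal, hypothetical Python sketch of that mapping; the class and function names are illustrative and are not part of the actual PLFS API.

```python
# Hypothetical sketch of the log-structured idea behind PLFS:
# each writer appends sequentially to its own data log and records
# an index entry mapping the logical offset to a position in that log.

class LogStructuredWriter:
    def __init__(self, rank):
        self.rank = rank
        self.log = bytearray()              # per-process append-only data log
        self.index = []                     # (logical_offset, length, log_offset)

    def write(self, logical_offset, data):
        # Append-only: every write becomes a fast sequential append,
        # regardless of where it lands in the logical shared file.
        self.index.append((logical_offset, len(data), len(self.log)))
        self.log.extend(data)

def reconstruct(writers, size):
    """Replay all index entries to rebuild the logical shared file on read."""
    out = bytearray(size)
    for w in writers:
        for logical_off, length, log_off in w.index:
            out[logical_off:logical_off + length] = w.log[log_off:log_off + length]
    return bytes(out)

# Two "processes" write strided regions of one logical file.
w0, w1 = LogStructuredWriter(0), LogStructuredWriter(1)
w0.write(0, b"AAAA")
w1.write(4, b"BBBB")
w0.write(8, b"CCCC")
print(reconstruct([w0, w1], 12))  # b'AAAABBBBCCCC'
```

The sketch shows why the technique helps at scale: each process writes sequentially and independently, and the cost of reassembling the logical file is deferred to read time via the index.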
“This PLFS concept has been shown to improve data movement at extreme scales by several orders of magnitude,” said Gary Grider of LANL’s High Performance Computing Division. “Both EMC and LANL are interested in furthering this PLFS open source project to address the increasingly difficult data-management problems as the supercomputing world moves toward exascale-class computing.”
“We are thrilled to work with some of the nation’s greatest scientists at LANL, where the first petascale supercomputer was deployed, to collaboratively innovate in an effort to help maintain our nation’s leadership in extreme computing on the road to exascale,” said Dr. Percy Tzelnic, senior vice president and EMC Fellow.
“The U.S. economy’s health is a national imperative, and strategic collaboration between the private and public sectors, such as EMC and LANL, helps LANL remain at the cutting edge of science and engineering,” said Dr. Alan Bishop, principal associate director for Science, Technology, and Engineering at LANL.
“Private and public collaboration will help overcome today’s technology challenges associated with the categories outlined in this CRADA. Collaboration between public and private institutions, like LANL and EMC, will help the government address its needs more cost-effectively, while giving industry at large a better understanding of federal challenges to help build the right solutions,” said Nick Combs, EMC federal chief technology officer.
LANL has been at the forefront of HPC since the 1970s. The laboratory has developed extensive capabilities and numerous technologies that offer strong potential for collaborative research and development in areas including data storage, data sharing, data analysis, cyber security, large-scale modeling, simulation, and analysis, and materials science. EMC is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Through innovative products and services, EMC helps store, manage, protect, and analyze information in a more agile, trusted, and cost-efficient way.
About Los Alamos National Laboratory (www.lanl.gov)
Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.
Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.
Source: Los Alamos National Laboratory