December 21, 2011
New five-year agreement for technology development includes HPC, data storage, cyber security, cloud computing, analytics, materials science, and data sharing and mobility
LOS ALAMOS, New Mexico, Dec. 21 — Los Alamos National Laboratory today announced the signing of a new Umbrella CRADA (Cooperative Research and Development Agreement) with EMC Corporation. Together, LANL and EMC will enhance, design, build, test, and deploy new cutting-edge technologies in an effort to meet some of the nation’s most difficult information technology challenges. The CRADA involves six general categories of technology development in which LANL and EMC will collaborate over the next five years, including high-performance computing (HPC), data storage, cyber security, data sharing and mobility, cloud computing, large-scale analytics, and materials science.
The first Project Task Statement (PTS) under the Umbrella CRADA is focused on support for the U.S. Department of Energy’s Exascale Initiative and other data-intensive programs. The LANL and EMC collaboration for the Exascale Initiative is aimed at boosting high-performance computing to the exaflop level, roughly a thousand times faster than current petascale capabilities. The project involves design and development of an open-source, extremely scalable data-management middleware library called the Parallel Log Structured File System (PLFS), which will be used on a range of computing platforms from small clusters to the largest supercomputers in the world.
“This PLFS concept has been shown to improve data movement at extreme scales by several orders of magnitude,” said Gary Grider of LANL’s High Performance Computing Division. “Both EMC and LANL are interested in furthering this PLFS open source project to address the increasingly difficult data-management problems as the supercomputing world moves toward exascale-class computing.”
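The core idea behind a log-structured file system layer like PLFS is to turn many processes writing into one shared file (a pattern that causes severe contention at scale) into each process appending sequentially to its own log, with an index recording where each chunk belongs in the logical file. The following is a minimal, hypothetical Python sketch of that remapping concept; the class and method names are illustrative only, and real PLFS is a C++ middleware layer with a far richer on-disk format.

```python
# Hypothetical sketch of the log-structured remapping idea behind PLFS.
# Writers append sequentially to private logs; an index maps logical
# file offsets back to (writer, physical offset) so reads can be served.

class LogStructuredFile:
    def __init__(self, num_writers):
        self.logs = [bytearray() for _ in range(num_writers)]
        # Each entry: (logical_offset, length, writer_id, physical_offset)
        self.index = []

    def write(self, writer_id, logical_offset, data):
        log = self.logs[writer_id]
        self.index.append((logical_offset, len(data), writer_id, len(log)))
        log.extend(data)  # pure sequential append: no seeks, no lock contention

    def read(self, logical_offset, length):
        out = bytearray(length)
        # Replay index entries in order; later writes win, as in a log.
        for lo, ln, wid, po in self.index:
            start = max(lo, logical_offset)
            end = min(lo + ln, logical_offset + length)
            if start < end:
                src = self.logs[wid][po + (start - lo): po + (end - lo)]
                out[start - logical_offset: end - logical_offset] = src
        return bytes(out)

# Four "ranks" performing a strided N-to-1 checkpoint write pattern.
f = LogStructuredFile(num_writers=4)
for rank in range(4):
    f.write(rank, rank * 5, bytes([65 + rank] * 5))
print(f.read(0, 20).decode())  # AAAAABBBBBCCCCCDDDDD
```

The write path never seeks, which is why this pattern can improve shared-file checkpoint bandwidth so dramatically; the cost is moved to reads, which must consult the index to reassemble the logical byte stream.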
“We are thrilled to work with some of the nation’s greatest scientists at LANL, where the first petascale supercomputer was deployed, to collaboratively innovate in an effort to help maintain our nation’s leadership in extreme computing, on the road to exascale,” said Dr. Percy Tzelnic, senior vice president and EMC Fellow.
“The U.S. economy’s health is a national imperative, and strategic collaboration between the private and public sectors—such as EMC and LANL—helps LANL remain at the cutting edge of science and engineering,” said Dr. Alan Bishop, principal associate director for Science, Technology, and Engineering at LANL.
“Private and public collaboration will help overcome today’s technology challenges associated with the six categories outlined in this CRADA. Collaboration between public and private institutions—like LANL and EMC—will help the government to more cost-effectively address its needs, while delivering the industry at large with a better understanding of federal challenges to help build the right solutions,” said Nick Combs, EMC federal chief technology officer.
LANL has been at the forefront of HPC since the 1970s and has developed extensive capabilities and numerous technologies that offer strong potential for collaborative research and development in areas including data storage, data sharing, data analysis, cyber security, large-scale modeling, simulation, and analysis, and materials science. EMC is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Through innovative products and services, EMC helps store, manage, protect, and analyze information in a more agile, trusted, and cost-efficient way.
About Los Alamos National Laboratory (www.lanl.gov)
Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.
Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.
Source: Los Alamos National Laboratory