October 19, 2022
Cerebras Systems has secured another U.S. government win for its wafer-scale engine chip, considered the largest chip in the world. The company's chip technology will be part of a research project sponsored by the National Nuclear Security Administration to find... Read more…
August 6, 2022
Lawrence Livermore National Laboratory (LLNL) is one of the laboratories that operates under the auspices of the National Nuclear Security Administration (NNSA), which manages the United States’ stockpile of nuclear weapons. Amid major efforts to modernize that stockpile, LLNL has announced that researchers from its own Energetic Materials Center... Read more…
May 4, 2022
Intel spinoff Cornelis Networks, custodian and developer of the Omni-Path networking portfolio, is now closer to reaching its next-gen networking roadmap targets thanks to an R&D contract with the Department of Energy’s National Nuclear Security Administration (NNSA). The contract is valued at $18 million. The Next-Generation High Performance Computing Network (NG-HPCN) project brings together NNSA labs and... Read more…
October 1, 2020
The three national laboratories (Lawrence Livermore, Los Alamos and Sandia) that support the National Nuclear Security Administration (NNSA) occupy a strange role in the landscape of government-funded research and supercomputing. The NNSA manages the military applications of nuclear science... Read more…
October 21, 2015
Per a newly-inked contract with Penguin Computing, the Department of Energy’s National Nuclear Security Administration (NNSA) is set to receive its third join Read more…
July 10, 2014
Note - 7:32 p.m. Eastern: We have full details from Los Alamos about the system in a detailed update article. Cray has been granted one of the largest award Read more…
March 20, 2013
LLNL researchers have successfully harnessed all 1,572,864 of Sequoia's cores for one impressive simulation. Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly, or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer, powered by Lenovo ThinkSystem servers featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.