November 23, 2022
With SC22 in the rearview mirror, it’s time to get back to the 2022 Great American Supercomputing Road Trip. To refresh everyone’s memory, I jumped in the c… Read more…
October 22, 2022
When complete, the Crossroads supercomputer at Los Alamos National Laboratory (LANL) is expected to deliver quadruple the performance of LANL’s already-powerful Trinity supercomputer (20.16 Linpack petaflops). Now, the first phase of Crossroads – called “Tycho” – has been successfully installed at the lab, with the… Read more…
March 17, 2022
In late 2020, Los Alamos National Laboratory (LANL) — which operates under the Department of Energy’s National Nuclear Security Administration (NNSA) — co… Read more…
April 5, 2021
Tape storage has dominated high-volume data storage for many decades, and with data production continuing to grow exponentially, researchers are eager to find a… Read more…
December 17, 2020
Los Alamos National Laboratory (LANL), which operates under the purview of the National Nuclear Security Administration (NNSA), is home to a variety of supercom… Read more…
December 8, 2020
Well before COVID-19 struck New Mexico, New Mexico was striking COVID-19. Los Alamos National Laboratory (LANL) began its research on COVID-19 in late January… Read more…
October 7, 2020
Short coherence times currently limit the size of problems that can be addressed on today’s so-called Noisy Intermediate-Scale Quantum (NISQ) computers. Resea… Read more…
April 23, 2019
Simulating large biomolecules has long been challenging. Now, researchers from Los Alamos National Laboratory (LANL), RIKEN Center for Computational Science in… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and such diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.