July 1, 2021
Storage supplier DDN today made several announcements across its product line. Foremost was the introduction of EXAScaler 6, the latest version of its Lustre-based… Read more…
March 23, 2021
Sweden’s National Supercomputer Center today announced the launch of Berzelius, a supercomputer based on Nvidia’s DGX SuperPOD architecture and capable of… Read more…
December 23, 2020
It was not a typical year for supercomputing in the sciences. When the pandemic struck, virtually every research supercomputer in the world pivoted much of its… Read more…
November 25, 2020
Tiering in HPC storage has a bad rep. No one likes it. It complicates things and slows I/O. At least one storage technology newcomer – VAST Data – advocates dumping the whole idea. One large-scale user, NERSC storage architect Glenn Lockwood sort of agrees. The challenge, of course, is that tiering... Read more…
November 16, 2020
Currently, there’s a lot on DDN’s plate as the long-time leader in HPC storage integrates recent acquisitions and strives to become a comprehensive HPC-plus-enterprise storage technology supplier. SC20 is providing a showcase for those efforts as DDN rolls out product updates, impressive... Read more…
November 9, 2020
Texas A&M University has announced its next flagship system: Grace. The new supercomputer, named for legendary programming pioneer Grace Hopper, is replacing the Ada system (itself named for mathematician Ada Lovelace) as the primary workhorse for Texas A&M’s High Performance Research Computing (HPRC). Read more…
October 20, 2020
DDN, a long-time leader in HPC storage, announced two new products today and provided more detail around its strategy for integrating DDN HPC technologies with… Read more…
October 13, 2020
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) engages in a wide variety of research and development projects to support Japan’s maritime… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.