March 25, 2023
Amid the bevy of news from Nvidia's GPU Technology Conference this week, another new system has come to light: Pegasus, which entered operations at the University Read more…
March 23, 2023
Chipmakers are tight-lipped on their China activities following a U.S. crackdown on hardware exports to the country. But Nvidia remains unfazed and is doubling down on Read more…
March 21, 2023
Nvidia today announced general availability for its BlueField-3 data processing unit (DPU) along with impressive early deployments including Oracle Cloud Infrastructure Read more…
March 21, 2023
Nvidia’s Hopper-generation H100 GPU is continuing its slow march toward “current-generation.” After Nvidia announced that the H100 was in “full production” Read more…
March 21, 2023
If you are a die-hard Nvidia loyalist, be ready to pay a fortune to use its AI factories in the cloud. Renting the GPU company's DGX Cloud, which is an all-inclusive AI supercomputer in the cloud, starts at $36,999 per instance for a month. The rental includes access to a cloud computer with eight Nvidia H100 or A100 GPUs and 640GB... Read more…
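For rough context, here is a minimal back-of-envelope sketch of what that quoted figure works out to per GPU. The ~730 hours per month and the even per-GPU cost split are illustrative assumptions, not published pricing details; the 640GB figure is consistent with eight 80GB H100 or A100 parts.

```python
# Back-of-envelope breakdown of the quoted DGX Cloud starting price.
# Assumptions (not published pricing details): ~730 hours in a month,
# cost split evenly across the instance's eight GPUs.

MONTHLY_RATE_USD = 36_999                  # quoted starting price per instance per month
GPUS_PER_INSTANCE = 8                      # H100 or A100 GPUs per instance
GPU_MEMORY_GB = 640 / GPUS_PER_INSTANCE    # 80 GB of memory per GPU
HOURS_PER_MONTH = 730                      # ~365 * 24 / 12, an approximation

per_gpu_month = MONTHLY_RATE_USD / GPUS_PER_INSTANCE
per_gpu_hour = per_gpu_month / HOURS_PER_MONTH

print(f"Memory per GPU:    {GPU_MEMORY_GB:.0f} GB")
print(f"Per GPU per month: ${per_gpu_month:,.2f}")   # ≈ $4,624.88
print(f"Per GPU per hour:  ${per_gpu_hour:.2f}")     # ≈ $6.34
```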
March 21, 2023
Nvidia's homegrown combined CPU and GPU offering, called Grace Hopper, has been slow out of the gate, but the chipmaker is finding a new use for it in simulating quantum computing. The Grace Hopper chip is being used alongside Quantum Machines' quantum hardware to facilitate quantum-classical computing. Read more…
March 10, 2023
The emergence of Covid in 2020 saw an explosion in HPC-powered health research. As the pandemic raged on, though, one limiting factor became increasingly clear: Read more…
February 3, 2023
The next-gen Hopper family may be on its way, but that isn’t stopping Nvidia’s popular A100 GPU from leading another benchmark on its way out. This time, it Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
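The five recommendations themselves are in the full piece, but the bottleneck problem described above can be made concrete. The sketch below is hypothetical (the stage names and workloads are invented for illustration): it times each stage of a simple pipeline and reports per-stage throughput, so the slowest stage, the bottleneck, is visible at a glance.

```python
import time
from typing import Callable, Iterable, List

# Hypothetical pipeline stages, invented for illustration only.
def ingest(records: Iterable[dict]) -> List[dict]:
    """Stand-in for pulling records from a source system."""
    return list(records)

def transform(records: List[dict]) -> List[dict]:
    """Stand-in for cleaning / feature engineering."""
    return [{**r, "value": r["value"] * 2} for r in records]

def load(records: List[dict]) -> int:
    """Stand-in for writing results to AI storage."""
    return len(records)

def timed_stage(name: str, fn: Callable, data):
    """Run one stage and report items/second so the slowest stage stands out."""
    start = time.perf_counter()
    result = fn(data)
    elapsed = time.perf_counter() - start
    count = len(data) if hasattr(data, "__len__") else 0
    rate = count / elapsed if elapsed > 0 else float("inf")
    print(f"{name:<10} {elapsed * 1e3:8.2f} ms  {rate:12,.0f} items/s")
    return result

if __name__ == "__main__":
    raw = [{"id": i, "value": i} for i in range(100_000)]
    data = timed_stage("ingest", ingest, raw)
    data = timed_stage("transform", transform, data)
    timed_stage("load", load, data)
```

In a real deployment the same idea applies at a different scale: per-stage throughput (or queue depth between stages) is usually the first signal that storage, preprocessing, or the network, rather than the accelerators, is the limiting factor.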
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.