November 21, 2018
Nvidia’s latest GPU, the T4, continues to rack up wins. Unlike the V100, which is geared for traditional HPC scale-up environments including model training… Read more…
November 14, 2018
For nearly two hours on Monday at SC18, Jensen Huang, CEO of Nvidia, presented his expansive view of the future of HPC (and computing in general) as only he can do. Animated. Backstopped by a stream of data charts, product photos, and even a beautiful image of supernovae... Read more…
November 6, 2018
As part of its ongoing Computational Science Initiative (CSI) targeting data-intensive science and AI workloads, Brookhaven National Laboratory added an Nvidia DGX-2 supercomputer last month. The move is perhaps not surprising given the gush of data Brookhaven deals with. Read more…
August 29, 2018
Microsoft Azure continued to beef up support for HPC and advanced-scale workloads today, announcing general availability of CycleCloud – its HPC cloud orchestration… Read more…
July 25, 2017
Earlier this month, the National Science Foundation issued a $1 million grant to Larry Smarr, director of Calit2, and a group of his colleagues… Read more…
May 10, 2017
Hewlett Packard Enterprise (HPE) today launched a new liquid-cooled, GPU-driven Apollo platform based on SGI ICE architecture, a new collaboration with NVIDIA… Read more…
April 25, 2017
IBM, Nvidia, and Stone Ridge Technology today reported setting the performance record for a “billion cell” oil and gas reservoir simulation. Using IBM Minsky… Read more…
March 6, 2017
AI-based poker-playing programs have been upping the ante for lowly humans. Notably, several algorithms from Carnegie Mellon University (e.g., Libratus, Claudico)… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow the five essential steps outlined in this paper to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, and is engaged in a broad range of disciplines in the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer, powered by Lenovo ThinkSystem servers featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.