December 8, 2021
Being part of the SC Conference enhances your career – whether you are presenting new research, showcasing innovative work or practices, helping teach the nex Read more…
July 16, 2015
At IDC’s annual ISC breakfast there was a good deal more than market update numbers, although there were plenty of those: “We try to track every server sold, Read more…
October 20, 2011
At SC11 in Seattle, the stage is set for data-intensive computing to steal the show. This year's theme correlates directly to the "big data" trend that is reshaping enterprise and scientific computing. We give an insider's view of some of the top sessions for the big data crowd and a broader sense of how this year's conference is shaping up overall. Read more…
May 26, 2011
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray's first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation. Read more…
October 27, 2010
Languages like R and MATLAB, which were once unofficially reserved for technical computing domains, are slowly finding their way into enterprises due to rising demand for large-scale data analytics. This demand is coupled with recent announcements of cloud-based ways to use these languages, opening new doors to access and use. Read more…
September 28, 2010
Truthy.indiana.edu exposes dirty politics on the Web. Read more…
April 16, 2010
Even computer-unsavvy scientists will be able to use NASA Earth Exchange to collaborate on modeling and analysis of large data sets. Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. Each stage of the AI data pipeline poses unique challenges that can disrupt or misdirect the flow of data, ultimately undermining the effectiveness of AI storage and systems.
With so many applications and such diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.