December 2, 2022
The race to ever-better flops-per-watt and power usage effectiveness (PUE) has, historically, dominated the conversation over sustainability in HPC – but at S Read more…
April 29, 2015
In what is being called an unprecedented upgrade, the NASA Center for Climate Simulation (NCCS) is tripling the peak performance of its Discover supercomputer t Read more…
August 16, 2013
When the Jaguar supercomputer at Oak Ridge National Laboratory morphed into Titan in 2012, it delivered a huge increase in computational power. Recently, ORNL's parallel file system, called Spider, received a similar overhaul and is in the process of emerging as Spider II. Read more…
July 26, 2010
A year ago, NOAA and DOE signed an agreement calling for closer cooperation between NOAA and Oak Ridge National Laboratory. Jim Rogers, director of operations for the National Center for Computational Sciences at ORNL, discusses the agreement and the goals for the Climate Modeling and Research System (CMRS), the initial supercomputer chosen for the collaborative work. Read more…
July 15, 2010
NASA Center for Climate Simulation doubles computational power with new Dell PowerEdge servers; Amazon introduces HPC-level computing on demand; and Carnegie Mellon announces $7 million initiative aimed at boosting computer science enrollment. We recap those stories and more in our weekly wrapup. Read more…
July 27, 2009
When it comes to scientific computing, the amount of science reaped from a simulation is largely determined by the speed and scalability of the software. In turn, a code's speed is often at the mercy of its I/O performance. The more efficient the I/O, the faster the code runs and the more simulations can be completed in a given period of time. Read more…
July 9, 2009
Spider, the world's biggest Lustre-based, centerwide file system, has been fully tested to support Oak Ridge National Laboratory's new petascale Cray XT4/XT5 Jaguar supercomputer and is now offering early access to scientists. Read more…
Five Recommendations to Optimize Data Pipelines
For organizations building AI systems at scale, managing the flow of data can make or break the business. Each stage of the AI data pipeline poses unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To keep your AI systems optimized, follow these five recommendations to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, and is engaged in a broad range of disciplines across the natural sciences, engineering, economics, the humanities, and the social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer, powered by Lenovo ThinkSystem servers featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.