November 19, 2009
IT professionals are constantly being challenged to manage exponential data growth that has reached petabyte levels. As pressure mounts on IT to deliver ever-higher levels of productivity and efficiency, a new-generation file system standard will be required to maximize utilization of powerful server and cluster resources while minimizing management overhead. Read more…
November 18, 2009
Mitrionics has begun work on an experimental compiler that aims to make parallel programming architecture-agnostic. We asked Stefan Möhl, Mitrionics' chief science officer and co-founder, what's behind the new technology and what prompted the decision to expand beyond their FPGA roots. Read more…
November 18, 2009
Buying Teslas by the bushel. Read more…
November 17, 2009
The opening address of the Supercomputing Conference had a surreal quality to it in more ways than one. Between talking avatars, physics-simulated sound, and a Larrabee demo running HPC-type codes, it was hard to separate reality from fantasy. Read more…
November 17, 2009
Jaguar leaves Roadrunner in the dust. Read more…
November 16, 2009
Never short on opinions, especially when it comes to high performance computing, Convey Computer co-founder Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way forward. Read more…
November 16, 2009
HPC storage vendor DataDirect Networks will soon offer integrated clustered file system support in its Storage Fusion Architecture product line. The idea is to drastically reduce the number of storage switches and file system servers, and thus the cost and complexity of supercomputer-scale file storage. Read more…
November 16, 2009
After what may be the longest development cycle ever for a supercomputer, SGI has unveiled the first commercial implementation of its Ultraviolet architecture. The company first announced "Project Ultraviolet" at SC03. Now six years later, it has launched Altix UV, the company's first scale-up HPC system based on x86 technology. Read more…
November 15, 2009
We have developed something of a tradition at HPCwire in the weeks leading up to each year's SC conference; we interview the chairman of the OpenFabrics Alliance (OFA). Jim Ryan of Intel has been the OFA's chair all these years, and our annual interview with Jim was as interesting as ever. Read more…
November 15, 2009
NVIDIA has announced the first Fermi GPU products here at the Supercomputing Conference (SC09) in Portland, Oregon, where thousands of attendees will get a chance to see the company's next-generation chip in action. The GPUs will first touch down in NVIDIA's new Tesla 20-series products aimed at HPC workstations and servers. Read more…
November 6, 2009
SC09 General Chair Wilf Pinfold shares his thoughts on organizing the world's largest supercomputing event, examines this year's big conference themes, and gives his take on the state of the industry and how that reflects on the conference. Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. Each stage of the AI data pipeline poses unique challenges that can disrupt or misdirect the flow of data, ultimately undermining the effectiveness of AI storage and systems.
With so many applications and such diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To keep your AI systems optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, the humanities, and the social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT's HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth's ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication