April 21, 2017
As the mission high performance computing center for the U.S. Department of Energy Office of Science, NERSC (the National Energy Research Scientific Computing Center)… Read more…
November 7, 2016
In advance of the SC16 expo in Salt Lake City next week, the OpenACC standards group today welcomed its newest member, NSSC-Wuxi, and highlighted a number of important developments for the directives-based programming standard. Ahead of the announcement, HPCwire spoke with Michael Wolfe, technical director of OpenACC, and Duncan Poole, OpenACC president and director of platform alliances for accelerated computing at Nvidia. Read more…
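For readers new to what "directives-based" means, the sketch below shows a minimal, hypothetical OpenACC-annotated loop in C. It is not code from the article: the SAXPY-style kernel and the data clauses are illustrative, and a compiler without OpenACC support simply ignores the pragma and runs the loop serially.

    /* Minimal OpenACC sketch (hypothetical example, not from the article).
       Compile with an OpenACC-capable compiler, e.g. nvc -acc saxpy.c;
       a plain C compiler ignores the directive and runs serially. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* The directive asks the compiler to parallelize (and, on GPUs,
           offload) the loop; copyin/copy manage host-device data movement. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
        return 0;
    }

The appeal of this model, and a recurring theme in OpenACC coverage, is that the annotated source remains valid serial C, so one codebase can target both conventional CPUs and accelerators.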
September 13, 2016
A team of engineers from North Carolina State University and Intel has joined forces to address on-chip communications bottlenecks that hamper performance scaling… Read more…
August 23, 2016
In 1994, two NASA employees connected 16 commodity workstations together using a standard Ethernet LAN and installed open-source message passing software that… Read more…
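The cluster described above is the original Beowulf. As a hedged illustration of what message passing software does, here is a minimal MPI point-to-point exchange in C. MPI is a modern stand-in chosen for clarity; the 1994 system predates today's MPI implementations, so this is not the original software.

    /* Minimal MPI sketch: rank 0 sends one integer to rank 1.
       Illustrative only; build with mpicc and run with mpirun -np 2. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 && size > 1) {
            int msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", msg);
        }

        MPI_Finalize();
        return 0;
    }

Each workstation in such a cluster runs one or more processes like these, and explicit sends and receives over the LAN are what let a pile of commodity machines behave as a single parallel computer.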
June 1, 2016
Mellanox today introduced the BlueField family of programmable processors, the first product technology based on its $811 million acquisition of fellow Israeli high-tech company EZchip. The fact that the product announcement is taking place just three months after the completion of the purchase speaks to the strong synergies between EZchip and Mellanox, said Bob Doud, senior director of marketing at Mellanox. Read more…
February 4, 2016
Is the parallel-everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel's James Reinders discusses… Read more…
November 12, 2015
Intel x86 processors continue to dominate HPC servers while the number of cores per processor also keeps rising; perhaps no surprises there. Also somewhat anticipated… Read more…
November 11, 2015
This week, during the lead-up to SC15, the OpenACC standards group announced several new developments, including the release and ratification of version 2.5 of the specification… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. Each stage of the AI data pipeline poses unique challenges that can disrupt or misdirect the flow of data, ultimately undermining the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, the humanities, and the social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.