March 13, 2023
After getting bruised by AMD, Intel hopes to stop the bleeding in the server market with next year's chip offerings. The difference-making products will be Sierra Forest and Granite Rapids, both due out in 2024, said Dave Zinsner, chief financial officer at Intel, last week at the Morgan Stanley Technology, Media and Telecom conference. Read more…
January 17, 2023
Dell's enterprise computing playbook has diversified in the last year, with new additions like quantum computing and high-performance computing-as-a-service to… Read more…
February 6, 2020
Emerging AI workloads are propelling the booming Chinese server market, particularly servers hosting programmable co-processors and graphics chips used for parallel processing of machine learning tasks. The chief beneficiary has been China’s server leader, Inspur. According to datacenter… Read more…
June 20, 2019
In the neck-and-neck horse race for HPC server market share, HPE has hung on to a slim, shrinking lead over Dell EMC – but if server and storage market shares… Read more…
May 31, 2018
Nvidia’s updated server platform is intended as a “building block,” in the reference design sense, to support AI training and inference along with HPC workloads such as simulations. The GPU vendor introduced its latest server platform, dubbed HGX-2, on Wednesday during a company roadshow in Taipei, Taiwan. Read more…
March 11, 2016
Led by strong growth in China, the worldwide server market grew 5.2 percent to $15.3 billion in the fourth quarter of 2015, reported market watcher IDC this week… Read more…
November 18, 2015
Perhaps the most eye-popping numbers in IDC’s HPC market report presented yesterday at its annual SC15 breakfast were ROI figures IDC has been developing as p… Read more…
October 21, 2015
Per a newly inked contract with Penguin Computing, the Department of Energy’s National Nuclear Security Administration (NNSA) is set to receive its third join… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
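Since the recommendations themselves are behind the link, here is a minimal, hypothetical sketch of one common first step: instrumenting each stage of a data pipeline with wall-clock timings so the slowest stage (the candidate bottleneck) can be identified. The stage names and stand-in workloads below are illustrative, not drawn from the article.

```python
import time

def timed_stage(name, fn):
    """Wrap a pipeline stage so its wall-clock time is recorded."""
    def wrapper(data, timings):
        start = time.perf_counter()
        result = fn(data)
        timings[name] = time.perf_counter() - start
        return result
    return wrapper

# Hypothetical stages standing in for real ingest/clean/featurize work.
stages = [
    timed_stage("ingest", lambda d: list(d)),
    timed_stage("clean", lambda d: [x for x in d if x is not None]),
    timed_stage("featurize", lambda d: [x * 2 for x in d]),
]

def run_pipeline(data):
    timings = {}
    for stage in stages:
        data = stage(data, timings)
    # The slowest stage is the first candidate bottleneck to investigate.
    bottleneck = max(timings, key=timings.get)
    return data, timings, bottleneck

result, timings, bottleneck = run_pipeline([1, None, 2, 3])
print(result, bottleneck)
```

In practice the same idea extends to distributed pipelines, where per-stage metrics are exported to a monitoring system rather than collected in a local dict.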
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer, powered by Lenovo ThinkSystem servers featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.