September 22, 2021
The latest round of MLPerf inference benchmark (v1.1) results was released today, and Nvidia again dominated, sweeping the top spots in the closed (apples-to-apples)… Read more…
May 12, 2020
Inspur, China’s server leader, is expanding its AI offerings based on Open Compute Project specifications, including an OCP “cloud optimized” server geared… Read more…
February 6, 2020
Emerging AI workloads are propelling the booming Chinese server market, particularly those hosting programmable co-processors capable of supporting graphics chips used for parallel processing of machine learning tasks. The chief beneficiary has been China’s server leader, Inspur. According to datacenter… Read more…
October 17, 2018
Three Chinese infrastructure vendors are embracing FPGA technology as a way of accelerating datacenter workloads. FPGA specialist Xilinx Inc. announced during a developer forum in Beijing this week that Alibaba Cloud, Huawei and server vendor Inspur are rolling out datacenter platforms based on the chip maker’s FPGA-as-a-service model. Among the datacenter workloads being targeted is AI inference, the partners said Tuesday. Read more…
April 24, 2017
A record-breaking twenty student teams, plus scores of company representatives, media professionals, staff, and student volunteers, transformed a formerly empty hall inside the Wuxi Supercomputing Center into a bustling hub of HPC activity, kicking off day one of the 2017 Asia Student Supercomputer Challenge (ASC17). Read more…
January 17, 2017
Inspur, the fast-growth cloud computing and server vendor from China that has several systems on the current Top500 list, and DDN, a leader in high-end storage… Read more…
April 22, 2016
The ASC Student Supercomputer Challenge 2016 (ASC16) concluded in Wuhan on April 22. Co-host Huazhong University of Science and Technology won the championship… Read more…
April 20, 2016
On the first day of the HPL contest at the ASC Student Supercomputer Challenge (ASC16), April 20, the team from Zhejiang University achieved a floating-point comp… Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements across data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
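The bottleneck point above can be illustrated with a minimal sketch. The stage names (`ingest`, `transform`, `load`) and their toy workloads are hypothetical, not taken from the whitepaper; the idea is simply that timing each stage of a pipeline over one batch shows where data flow stalls:

```python
import time

def ingest(n):
    """Simulate reading n records from storage (hypothetical stage)."""
    return list(range(n))

def transform(records):
    """Simulate feature extraction, often the hot spot (hypothetical stage)."""
    return [r * 2 for r in records]

def load(records):
    """Simulate writing results to the training store (hypothetical stage)."""
    return len(records)

def profile_pipeline(n):
    """Run the pipeline once, returning records written and seconds per stage."""
    timings = {}

    t0 = time.perf_counter()
    records = ingest(n)
    timings["ingest"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    records = transform(records)
    timings["transform"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    written = load(records)
    timings["load"] = time.perf_counter() - t0

    return written, timings

written, timings = profile_pipeline(100_000)
# The stage with the largest share of wall time is the one to optimize first.
bottleneck = max(timings, key=timings.get)
```

In a real pipeline the same per-stage measurement would come from a profiler or pipeline framework metrics rather than hand-placed timers, but the diagnostic logic is the same: measure each stage, then attack the slowest one.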
Karlsruhe Institute of Technology (KIT) is an elite public research university in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.