September 22, 2021
The latest round of MLPerf inference benchmark (v1.1) results was released today and Nvidia again dominated, sweeping the top spots in the closed (apples-to-apples)… Read more…
May 12, 2020
Inspur, China’s server leader, is expanding its AI offerings based on Open Compute Project specifications, including an OCP “cloud optimized” server geared… Read more…
February 6, 2020
Emerging AI workloads are propelling the booming Chinese server market, particularly servers hosting programmable co-processors such as graphics chips used for the parallel processing of machine learning tasks. The chief beneficiary has been China’s server leader, Inspur. According to datacenter... Read more…
October 17, 2018
Three Chinese infrastructure vendors are embracing FPGA technology as a way of accelerating datacenter workloads. FPGA specialist Xilinx Inc. announced during a developer forum in Beijing this week that Alibaba Cloud, Huawei and server vendor Inspur are rolling out datacenter platforms based on the chip maker’s FPGA-as-a-service model. Among the datacenter workloads being targeted is AI inference, the partners said Tuesday. Read more…
April 24, 2017
A record-breaking twenty student teams plus scores of company representatives, media professionals, staff and student volunteers transformed a formerly empty hall inside the Wuxi Supercomputing Center into a bustling hub of HPC activity, kicking off day one of the 2017 Asia Student Supercomputer Challenge (ASC17). Read more…
January 17, 2017
Inspur, the fast-growing cloud computing and server vendor from China that has several systems on the current Top500 list, and DDN, a leader in high-end storage… Read more…
April 22, 2016
The ASC Student Supercomputer Challenge 2016 (ASC16) concluded in Wuhan on April 22. The co-host, Huazhong University of Science and Technology, won the championship… Read more…
April 20, 2016
On the first day of the HPL contest at the ASC Student Supercomputer Challenge (ASC16), April 20, the team from Zhejiang University achieved a floating point comp… Read more…
Data center infrastructure running AI and HPC workloads relies on powerful CPUs, GPUs, and accelerator chips to carry out compute-intensive tasks. AI and HPC processing generates substantial heat, which drives up data center power consumption and adds to data center costs.
Data centers have traditionally used air cooling solutions, such as heatsinks and fans, which often cannot reduce energy consumption while sustaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip liquid cooling solution that addresses these needs, delivering significant cooling power savings per rack over air-cooled solutions and reducing data centers’ carbon footprint through QCT QoolRack smart management.
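To make the cooling-efficiency argument concrete, here is a minimal Python sketch of how rack-level cooling power savings translate into Power Usage Effectiveness (PUE) and annual energy use. The rack load and cooling ratios below are hypothetical assumptions chosen only for illustration; they are not QCT QoolRack or Intel measurements.

    # Hypothetical illustration: how cooling power savings affect rack PUE
    # and annual energy use. All figures are assumptions for the example.

    def pue(it_power_kw: float, cooling_power_kw: float, other_overhead_kw: float) -> float:
        """Power Usage Effectiveness: total facility power divided by IT power."""
        total = it_power_kw + cooling_power_kw + other_overhead_kw
        return total / it_power_kw

    # Assumed rack profile: 40 kW of IT load, 2 kW of non-cooling overhead.
    IT_KW = 40.0
    OTHER_KW = 2.0

    # Assumed cooling power: air cooling at ~30% of IT load vs. direct-to-chip
    # liquid cooling at ~10% of IT load (illustrative ratios only).
    air_cooling_kw = 0.30 * IT_KW
    liquid_cooling_kw = 0.10 * IT_KW

    pue_air = pue(IT_KW, air_cooling_kw, OTHER_KW)
    pue_liquid = pue(IT_KW, liquid_cooling_kw, OTHER_KW)

    # Annual cooling energy saved per rack (kWh) under these assumptions.
    hours_per_year = 24 * 365
    saved_kwh = (air_cooling_kw - liquid_cooling_kw) * hours_per_year

    print(f"PUE (air cooled):    {pue_air:.2f}")
    print(f"PUE (liquid cooled): {pue_liquid:.2f}")
    print(f"Cooling energy saved per rack per year: {saved_kwh:,.0f} kWh")

Under these assumed figures, moving from air cooling at 30% of IT load to direct-to-chip liquid cooling at 10% would lower the rack-level PUE from 1.35 to 1.15 and avoid roughly 70,000 kWh of cooling energy per rack per year; actual savings depend on the facility, workload, and cooling design.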