
Intel’s Strategy to Free Server Capacity by Pushing AI Inference to PCs

January 18, 2024

AI is here to stay and is becoming a larger part of the workload processed on servers and PCs. That's why Nvidia is seeing success as a chipmaker, and there is excitement around large language models such as Meta's open-source Llama. An eager audience wants to get a handle on such models... Read more…

Finding Opportunity in the High-Growth ‘AI Market’

December 6, 2023

“What’s the size of the AI market?” It’s a totally normal question for anyone to ask me. After all, I’m an analyst, and my company, Intersect360 Research, specializes in scalable, high-performance datacenter segments, such as AI, HPC, and Hyperscale. Read more…

Nvidia Dominates MLPerf Inference, Qualcomm also Shines, Where’s Everybody Else?

April 6, 2022

MLCommons today released its latest MLPerf inferencing results, with another strong showing by Nvidia accelerators inside a diverse array of systems. Read more…

The Latest MLPerf Inference Results: Nvidia GPUs Hold Sway but Here Come CPUs and Intel

September 22, 2021

The latest round of MLPerf inference benchmark (v 1.1) results was released today and Nvidia again dominated, sweeping the top spots in the closed (apples-to-apples) division. Read more…

IBM’s Prototype Low-Power 7nm AI Chip Offers ‘Precision Scaling’

February 23, 2021

IBM has released details of a prototype AI chip geared toward low-precision training and inference across different AI model types while retaining model quality within AI applications. In a paper delivered during this year’s International Solid-State Circuits Virtual Conference, IBM... Read more…

Photonics Processor Aimed at AI Inference

August 18, 2020

Silicon photonics is exhibiting greater innovation as requirements grow to enable faster, lower-power chip interconnects for traditionally power-hungry applications. Read more…

Nvidia Nabs #7 Spot on Top500 with Selene, Launches A100 PCIe Cards

June 22, 2020

Nvidia unveiled its Selene AI supercomputer today in tandem with the updated listing of world’s fastest computers. Nvidia also introduced the PCIe form factor of the Ampere-based A100 GPU. Nvidia’s new internal AI supercomputer, Selene, joins the upper echelon of the 55th Top500’s ranks and breaks an energy-efficiency... Read more…

Click Here for More Headlines

Whitepaper

How Direct Liquid Cooling Improves Data Center Energy Efficiency

Data centers are experiencing increasing power consumption, space constraints, and cooling demands due to the unprecedented computing power required by today’s chips and servers. HVAC cooling systems consume approximately 40% of a data center’s electricity. These systems traditionally use air conditioning, air handling, and fans to cool the data center facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooled (DLC) systems to improve cooling efficiency, thereby lowering their PUE, operating expenses (OPEX), and carbon footprint.
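The PUE arithmetic behind that claim is straightforward: PUE is total facility power divided by IT equipment power, so shrinking the cooling share shrinks PUE. The sketch below is a rough illustration only, not material from the whitepaper; the power figures are assumptions chosen so that cooling accounts for roughly the 40% share cited above.

```python
# Illustrative only: how reducing cooling power lowers PUE.
# All kW figures below are hypothetical assumptions, not CoolIT data.

def pue(it_power_kw: float, cooling_power_kw: float, other_overhead_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    total = it_power_kw + cooling_power_kw + other_overhead_kw
    return total / it_power_kw

it_load = 1000.0   # hypothetical IT load (kW)
other = 100.0      # hypothetical power distribution losses, lighting, etc. (kW)

# Air-cooled case: cooling is ~40% of total facility power (700 / 1800).
air_cooled = pue(it_load, cooling_power_kw=700.0, other_overhead_kw=other)

# DLC case: assume cooling power drops sharply once most heat is removed by liquid.
liquid_cooled = pue(it_load, cooling_power_kw=150.0, other_overhead_kw=other)

print(f"Air-cooled PUE:    {air_cooled:.2f}")    # ~1.80
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}")  # ~1.25
```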

This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT’s DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world’s most demanding computing environments. CoolIT’s end-to-end solutions meet the rising demands for both cooling and energy efficiency.

Download Now

Sponsored by CoolIT

Whitepaper

Transforming Industrial and Automotive Manufacturing

Divergent Technologies developed a digital production system that can revolutionize automotive and industrial-scale manufacturing. Divergent uses new manufacturing solutions and its Divergent Adaptive Production System (DAPS™) software to make vehicle manufacturing more efficient and less costly, and to reduce manufacturing waste by replacing existing design and production processes.

Divergent initially used on-premises workstations to run HPC simulations but faced challenges because those workstations could not deliver fast enough simulation times. Divergent also needed to free staff from managing the HPC system, CAE integration, and IT update tasks.

Download Now

Sponsored by TotalCAE
