Google’s New AI-Focused ‘A3’ Supercomputer Has 26,000 GPUs

May 10, 2023

Cloud providers are building armies of GPUs to provide more AI firepower. Google is joining the gang with a new supercomputer that has almost 2.5 times as many GPUs as LUMI, the world's third-fastest supercomputer. Google announced the AI supercomputer, with 26,000 GPUs, at its developer conference on Wednesday. Read more…

Google AI Supercomputer Shows the Potential of Optical Interconnects

April 10, 2023

There are limits on how fast copper wires can move data between computers, and a transition to optical links will ultimately drive AI and high-performance computing forward. Every major chipmaker agrees that optical interconnects will be needed to reach zettascale computing in an energy-efficient way. That opinion was... Read more…

Google Claims Its TPU v4 Outperforms Nvidia A100

April 6, 2023

A new scientific paper from Google details the performance of its Cloud TPU v4 supercomputing platform, claiming it provides exascale performance for machine learning. Read more…

Google and Microsoft Set up AI Hardware Battle with Next-Generation Search

February 20, 2023

Microsoft and Google are driving a major computing shift by bringing AI to people via search engines, and one measure of success may come down to the hardware… Read more…

Google’s DeepMind Has a Long-term Goal of Artificial General Intelligence

September 14, 2022

When DeepMind, an Alphabet subsidiary, started out more than a decade ago, solving the most pressing research questions and problems with AI wasn't at the top of the company's mind. Instead, the company began its AI research with computer games. Every score and win was a measuring stick of success... Read more…


The Mainstreaming of MLPerf? Nvidia Dominates Training v2.0 but Challengers Are Rising

June 29, 2022

MLCommons’ latest MLPerf Training results (v2.0) issued today are broadly similar to v1.1 released last December. Nvidia still dominates, but less so… Read more…

Google Cloud’s New TPU v4 ML Hub Packs 9 Exaflops of AI

May 16, 2022

Almost exactly a year ago, Google launched its Tensor Processing Unit (TPU) v4 chips at Google I/O 2021, promising twice the performance of the TPU v3. At the time, Google CEO Sundar Pichai said that Google’s datacenters would “soon have dozens of TPU v4 Pods, many of which will be... Read more…

Google Launches TPU v4 AI Chips

May 20, 2021

Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at Google I/O. Read more…


Whitepaper

Powering Up Automotive Simulation: Why Migrating to the Cloud is a Game Changer

The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they run into limits when models grow too big or when many iterations must be completed in a short time frame, leaving engineers without enough available compute. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations. Complete Ansys simulation and CAE/CAD developments can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.

Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.

Download Now

Sponsored by ANSYS

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now

Sponsored by TotalCAE
