
The Mainstreaming of MLPerf? Nvidia Dominates Training v2.0 but Challengers Are Rising

June 29, 2022

MLCommons’ latest MLPerf Training results (v2.0) issued today are broadly similar to v1.1 released last December. Nvidia still dominates, but less so... Read more…

ACES ‘Composable’ Supercomputer Gets Ready for Phase One Use

April 4, 2022

Later this spring, ACES – the new ‘composable’ supercomputer being stood up at Texas A&M University – will begin granting Phase One access to early... Read more…

Graphcore Launches Wafer-on-Wafer ‘Bow’ IPU

March 3, 2022

Graphcore introduced its AI-focused, PCIe-based Intelligent Processing Units (IPUs) six years ago. Since then, the company has done anything but slow down, announcing a second generation of IPUs in 2020 and, over the years, larger and larger IPU-based “IPU-POD” systems — most recently the IPU-POD128 and the IPU-POD256, both announced just a few months... Read more…

Nvidia Dominates Latest MLPerf Results but Competitors Start Speaking Up

December 1, 2021

MLCommons today released its fifth round of MLPerf training benchmark results with Nvidia GPUs again dominating. That said, a few other AI accelerator companies... Read more…

Graphcore Introduces Larger-Than-Ever IPU-Based Pods

October 22, 2021

After launching its second-generation intelligence processing units (IPUs) in 2020, four years after emerging from stealth, Graphcore is now boosting its... Read more…

Three Universities Team for NSF-Funded ‘ACES’ Reconfigurable Supercomputer Prototype

September 23, 2021

As Moore’s law slows, HPC developers are increasingly looking for speed gains in specialized code and specialized hardware – but this specialization, in turn, can make testing and deploying code trickier than ever. Now, researchers from Texas A&M University, the University of Illinois at Urbana... Read more…

Latest MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence

June 30, 2021

While Nvidia (again) dominated the latest round of MLPerf training benchmark results, the range of participants expanded. Notably, Google’s forthcoming TPU v4... Read more…

AI Silicon Startup Graphcore Launches Channel Partner Program

September 23, 2020

AI compute platform vendor Graphcore has launched its first formal global channel partner program to promote and boost sales of its AI processors and blade computing products. The formalized, all-new Graphcore Elite Partner Program follows the company’s history of working with several... Read more…


Whitepaper

Powering Up Automotive Simulation: Why Migrating to the Cloud is a Game Changer

The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they hit limits when models are too large or when many iterations must be completed on a short timeline, leaving too little compute capacity available. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations. Complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.

Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.

Download Now

Sponsored by ANSYS

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now

Sponsored by TotalCAE
