Nvidia Launches Spectrum-X Networking Platform for Generative AI

May 29, 2023

Nvidia launched a new Ethernet-based networking platform – the Nvidia Spectrum-X – that targets generative AI workloads. Based on tight coupling of the Nvi Read more…

Nvidia to Offer a ‘1 Exaflops’ AI Supercomputer with 256 Grace Hopper Superchips

May 28, 2023

We in HPC sometimes roll our eyes at the term “AI supercomputer,” but a new system from Nvidia might live up to the moniker: the DGX GH200 AI supercomputer. Read more…

Intel’s Habana Labs Takes on Prominent Role as Generative AI Surges

May 9, 2023

Intel acquired AI chipmaker Habana Labs just four years ago; now, the division is serving – per Habana COO Eitan Medina – as “effectively the center of ex Read more…

Q&A with HPE’s Trish Damkroger, an HPCwire Person to Watch in 2023

April 29, 2023

HPCwire 2023 Person to Watch Trish Damkroger is a long-time HPC enthusiast and seasoned executive with a 17-year tenure at Lawrence Livermore National Laboratory, followed by five years leading HPC strategy at Intel, before making the move to HPE a year ago (where she is Chief Product Officer and Senior Vice President, HPC, AI & Labs). Read more…

Intel’s Server Chips Are ‘Lead Vehicles’ for Manufacturing Strategy

March 30, 2023

…But the chipmaker still does not have an integrated product strategy, which puts the company behind AMD and Nvidia. Intel finally has a full complement of server and PC chips it will release in the coming years, which will determine whether it has regained its leadership in chip manufacturing. The chipmaker this week... Read more…

Is Fortran the Best Programming Language? Asking ChatGPT

March 23, 2023

I recently wrote about my experience with interviewing ChatGPT here. As promised, in this follow-on and conclusion of my interview, I focus on Fortran and other languages. All in good fun. I hope you enjoy the conclusion of my interview. After my programming language questions, I conclude with a few notes... Read more…

Five Trends Shaping HPC in 2023

March 6, 2023

Today’s HPC landscape is one of rapid growth, change, and evolution. The overall market has skyrocketed to $34.8 billion with expected developments fueling continued expansion. From pandemic aftereffects and growing cross-disciplinary work to increasing technical advancements, we have entered into a... Read more…

An Interview with ChatGPT and Eliza on AI and HPC Topics

February 13, 2023

Conversing with computers is inevitable, and our fascination with it and with the famous Turing Test is nearly infinite, spanning people from all walks of life. I could not resist, so I took a few minutes away from my time as an Intel oneAPI evangelist and did an interview with the most talked-about chatbot of our day: ChatGPT (using their free plan). I share the... Read more…

Click Here for More Headlines

Whitepaper

Powering Up Automotive Simulation: Why Migrating to the Cloud is a Game Changer

The increasing complexity of electric vehicles results in large and complex computational models for simulations that demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they hit limits when models are too big or when many iterations must be completed in a very short time frame, leading to a shortage of available compute resources. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to utilize the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations. Complete Ansys simulation and CAE/CAD developments can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.

Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
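As a rough, purely illustrative take on the run-time-versus-cost balance those studies examine, the short Python sketch below compares a hypothetical on-premises run (queue wait plus solve time) against on-demand cloud runs that scale across more nodes at an hourly rate. Every figure, function name, and the fixed-efficiency scaling assumption is made up for illustration and is not drawn from the Ansys Gateway or AWS materials.

    # Illustrative cost/runtime comparison for a CAE batch job.
    # All numbers and the fixed-efficiency scaling model are hypothetical,
    # not figures from the Ansys Gateway / AWS studies.

    def on_prem_turnaround(solve_hours: float, queue_wait_hours: float) -> float:
        """Total wall-clock time when waiting for a busy shared on-prem cluster."""
        return queue_wait_hours + solve_hours

    def cloud_run(solve_hours_single_node: float, nodes: int,
                  hourly_rate_per_node: float, scaling_efficiency: float = 0.85):
        """Return (turnaround_hours, cost_usd) for an on-demand cloud run,
        assuming runtime shrinks with node count at a fixed parallel efficiency."""
        turnaround = solve_hours_single_node / (nodes * scaling_efficiency)
        cost = turnaround * nodes * hourly_rate_per_node
        return turnaround, cost

    if __name__ == "__main__":
        solve = 48.0   # hours to solve the model on one node (hypothetical)
        queue = 24.0   # hours spent waiting for on-prem resources (hypothetical)
        print(f"On-prem turnaround: {on_prem_turnaround(solve, queue):.1f} h")
        for n in (4, 8, 16):
            t, c = cloud_run(solve, n, hourly_rate_per_node=3.50)
            print(f"Cloud, {n:2d} nodes: {t:5.1f} h turnaround, ~${c:,.0f}")

Under these made-up numbers, adding nodes shortens turnaround while the on-demand cost stays roughly flat, which is the kind of trade-off the cited studies quantify with real benchmark data.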

Download Now

Sponsored by ANSYS

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Sponsored by TotalCAE

Whitepaper

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now
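The whitepaper's five recommendations are not reproduced here, but as a generic illustration of why stage-level visibility matters, the Python sketch below runs a toy ingest, preprocess, and batch pipeline and reports which stage dominates the runtime. The stage names, sleep-based delays, and record format are hypothetical and are not taken from the whitepaper.

    # Minimal sketch of a staged AI data pipeline with per-stage timing.
    # Stage names, delays, and record contents are hypothetical stand-ins.
    import time

    def ingest(n_records: int) -> list[dict]:
        """Simulate pulling raw records from storage (I/O-bound stand-in)."""
        records = []
        for i in range(n_records):
            time.sleep(0.001)  # placeholder for storage/network latency
            records.append({"id": i, "raw": f"record-{i}"})
        return records

    def preprocess(records: list[dict]) -> list[dict]:
        """Simulate cleaning and transforming records (CPU-bound stand-in)."""
        out = []
        for rec in records:
            time.sleep(0.002)  # placeholder for transformation cost
            out.append({**rec, "clean": rec["raw"].upper()})
        return out

    def batch(records: list[dict], size: int = 32) -> list[list[dict]]:
        """Group cleaned records into fixed-size training batches."""
        return [records[i:i + size] for i in range(0, len(records), size)]

    def run_pipeline(n_records: int = 200) -> None:
        """Run the stages in order and report which one dominates runtime."""
        data, timings = n_records, {}
        for name, stage in (("ingest", ingest), ("preprocess", preprocess), ("batch", batch)):
            start = time.perf_counter()
            data = stage(data)
            timings[name] = time.perf_counter() - start
        for name, secs in timings.items():
            print(f"{name:10s} {secs:6.2f} s")
        print("bottleneck:", max(timings, key=timings.get))

    if __name__ == "__main__":
        run_pipeline()

In a real deployment the same idea applies with actual measurements of storage throughput, preprocessing workers, and accelerator utilization in place of the sleep-based stand-ins.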
