Quantum Monte Carlo at Exascale Could Be Key to Finding New Semiconductor Materials

September 27, 2021

Researchers are urgently trying to identify possible materials to replace silicon-based semiconductors. The processing power in modern computers continues to increase... Read more…

ISC 2021 Keynote: Thomas Sterling on Urgent Computing, Big Machines, China Speculation

July 1, 2021

In a somewhat shortened version of his annual ISC keynote surveying the HPC landscape, Thomas Sterling lauded the community’s effort in bringing HPC to bear in the fight against the pandemic, welcomed the start of the exascale – if not yet exaflops – era with a quick tour of some big machines, speculated a little on what China may be planning, and paid tribute to new and ongoing efforts to bring fresh talent into HPC. Sterling is a longtime HPC leader... Read more…

AI Systems Summit Keynote: Brace for System Level Heterogeneity Says de Supinski

April 1, 2021

Heterogeneous computing has quickly come to mean packing a couple of CPUs and one or many accelerators, mostly GPUs, onto the same node. Today, such a node has become the standard AI server offered by dozens of vendors. This is not to diminish the many advances... Read more…

ORNL’s Jeffrey Vetter on How IRIS Runtime will Help Deal with Extreme Heterogeneity

March 3, 2021

Jeffrey Vetter is a familiar figure in HPC. Last year he became one of the new section heads in a reorganization at Oak Ridge National Laboratory. He had been f Read more…

Let’s Talk Exascale: ECP Leadership Discusses Project Highlights, Challenges, and the Expected Impact of Exascale Computing

August 19, 2020

In this special episode of the Let’s Talk Exascale podcast from the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP), the members of the Read more…

AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

August 13, 2019

Twelve years ago the Department of Energy (DOE) was just beginning to explore what an exascale computing program might look like and what it might accomplish. Today, DOE is repeating that process for AI, once again starting with science community town halls to gather input and stimulate conversation. The town hall program... Read more…

DEEP-EST Stands up Cluster Module at Jülich Supercomputing Centre

May 6, 2019

European exascale efforts continued to advance with the recent standing up of the “Cluster Module” (CM) at the Jülich Supercomputing Centre (JSC). CM is on Read more…

Congress Passes DOE Research and Innovation Act

October 9, 2018

Congress recently passed the Department of Energy Research and Innovation Act (H.R. 589), which essentially authorizes many existing Department of Energy activities. It also emphasizes efforts to ease and accelerate technology transfer to the private sector. Read more…

  • Click Here for More Headlines

Whitepaper

Powering Up Automotive Simulation: Why Migrating to the Cloud is a Game Changer

The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they run into limits when models are too big or when many iterations must be completed on a short timeline, leaving engineers short of available compute resources. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations: complete Ansys simulation and CAE/CAD developments can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.

Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.

Download Now

Sponsored by ANSYS

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Sponsored by TotalCAE

Whitepaper

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now
