Intel’s New Programmable Chips Next Year to Replace Aging Products

September 27, 2022

Intel shared its latest roadmap for programmable chips and said it doesn't want to dig itself into a hole by following AMD's strategy in the area. "We're thankfully not matching their strategy," said Shannon Poulin, corporate vice president for the datacenter and AI group at Intel, in response to a question posed by HPCwire during a press briefing. The updated roadmap pieces together Intel's strategy for FPGAs... Read more…

AMD/Xilinx Takes Aim at Nvidia with Improved VCK5000 Inferencing Card

March 8, 2022

AMD/Xilinx has released an improved version of its VCK5000 AI inferencing card along with a series of competitive benchmarks aimed directly at Nvidia's GPU line. AMD says the new VCK5000 delivers 3x better performance than earlier versions and 2x better total cost of ownership (TCO) than Nvidia's T4. AMD also showed favorable benchmarks against several Nvidia GPUs, claiming its VCK5000 achieved... Read more…

Xilinx Targets HPC and Datacenter with New Alveo U55C FPGA Card

November 15, 2021

At SC21 today, Xilinx launched its most powerful FPGA-based accelerator card – the Alveo U55C – specifically targeting HPC workloads and the datacenter. FPGAs (field programmable gate arrays) have a long productive history as customized accelerator chips used in many embedded applications. It’s only in the last few years that FPGA suppliers have begun... Read more…

Xilinx Expands Versal Chip Family With 7 New Versal AI Edge Chips

June 10, 2021

FPGA chip vendor Xilinx has been busy over the last several years cranking out its Versal AI Core, Versal Premium and Versal Prime chip families to fill customer compute needs in the cloud, datacenters, networks and more. Now Xilinx is expanding its reach to the booming edge... Read more…

Intel Launches Stratix 10 NX FPGAs Targeting AI Workloads

June 18, 2020

Intel today introduced its first AI-optimized FPGA – the Stratix 10 NX – which features expanded AI Tensor blocks (30 multipliers and 30 accumulators)... Read more…

Micron Accelerator Bumps Up Memory Bandwidth

February 26, 2020

Deep learning accelerators based on chip architectures coupled with high-bandwidth memory are emerging to enable near real-time processing of machine learning... Read more…

What’s New in HPC Research: Cosmic Magnetism, Cryptanalysis, Car Navigation & More

November 8, 2019

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming... Read more…

What’s New in HPC Research: Transfer Learning, Radio Astronomy, FPGAs and More

September 12, 2019

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming... Read more…

Click Here for More Headlines

Whitepaper

Powering Up Automotive Simulation: Why Migrating to the Cloud is a Game Changer

The increasing complexity of electric vehicles results in large and complex computational models for simulation that demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they run into limits when models are too large or when multiple iterations must be completed in a short timeframe, leaving too little compute capacity available. In a hybrid approach, cloud computing offers a flexible and cost-effective complement, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations. Complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS's latest hardware instances, providing significant runtime acceleration.

Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.

Download Now

Sponsored by ANSYS

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Download Now

Sponsored by TotalCAE

Whitepaper

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now

