Spun out from Google last March, SandboxAQ is a fascinating, well-funded start-up targeting the intersection of AI and quantum technology. “As the world enters the third quantum revolution, AI …
It is perhaps not surprising that the big cloud providers – a poor term really – have jumped into quantum computing. Amazon, Microsoft Azure, Google, and their like have steadily transformed …
January 26, 2022
Lenovo today announced TruScale High Performance Computing as a Service (HPCaaS), which it says will offer a “cloud-like experience” to HPC organizations of all sizes. The new HPC-as-a-Service is part of the TruScale portfolio that Lenovo launched in February 2019 and expanded last September. The aim, said Lenovo, is to enable end users... Read more…
November 26, 2021
Larry Smarr, founding director of Calit2 (now Distinguished Professor Emeritus at the University of California San Diego) and the first director of NCSA, is one of the seminal figures in the U.S. supercomputing community. What began as a personal drive, shared by others, to spur the creation of supercomputers in the U.S. for scientific use, later expanded into a... Read more…
November 10, 2021
Nvidia yesterday introduced Quantum-2, its new networking platform that features NDR InfiniBand (400 Gbps) and BlueField-3 DPU (data processing unit) capabilities. The name is perhaps confusing – it’s not a quantum computing device, even though Nvidia is entering the true quantum computing market with its cuQuantum simulator. The name stems from the legacy line of Nvidia/Mellanox Quantum switches. That said, the new Quantum-2 platform specs are impressive. Jensen Huang, Nvidia CEO, introduced... Read more…
August 25, 2021
The emergence of data processing units (DPU) and infrastructure processing units (IPU) as potentially important pieces in cloud and datacenter architectures was Read more…
August 2, 2021
Behind Atos’s deal announced last week to acquire HPC-cloud specialist Nimbix are ramped-up plans to penetrate the U.S. HPC market and global expansion of its Read more…
June 28, 2021
Dell Technologies today announced three expanded offerings in conjunction with the start of the ISC21 digital conference. The centerpiece is Omnia, new software Read more…
April 27, 2021
IBM plans to launch a new container-native software defined storage (SDS) solution, IBM Spectrum Fusion, in the second half of 2021, the company said today. It Read more…
April 12, 2021
Nvidia today announced its next generation data processing unit (DPU) – BlueField-3 – adding more substance to its evolving concept of the DPU as a full-fledged partner to CPUs and GPUs in delivering advanced computing. Nvidia is pitching the DPU as an active engine... Read more…
The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but limitations arise when models are too big or when multiple iterations must be completed in a short timeframe, leading to a shortage of available compute resources. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to utilize the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations. Complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.
Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication