August 13, 2014
Technology, like other facets of life, commonly experiences cycles of rapid change followed by periods of relative stability. Computing has entered a stage of i Read more…
November 26, 2013
Researchers at Georgia Institute of Technology and University of Southern California will receive nearly $2 million in federal funding for the creation of tools Read more…
November 11, 2013
The global distributed computing system known as the Worldwide LHC Computing Grid (WLCG) brings together resources from more than 150 computing centers in near Read more…
June 17, 2013
Not content to let the Tianhe-2 announcement ride alone, Intel rolled out a series of announcements around its Knights Corner and Xeon Phi products, all of which are aimed at adding options and variety for a wider base of potential users across the HPC spectrum. Today at the International Supercomputing Conference, the company's Raj.... Read more…
June 2, 2013
With help from a draft report from Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory, who also spearheads the process of verifying the top-ranked system, we are able to share full details on the processors, Xeon Phi coprocessors, custom interconnect, storage and memory, as well as the power and cooling information. The supercomputer out of China will be... Read more…
May 1, 2013
This week we're at the IDC User Forum in Tucson, staying cool amidst some heated talks about which processor, coprocessor and accelerator approaches are going to push into the lead in the next few years. To take this pulse, we sat down with IDC's Steve Conway to talk about some general trends that are a tall drink of water for a few key vendors, including Intel, NVIDIA..... Read more…
March 21, 2013
The top research stories of the week include an evaluation of sparse matrix multiplication performance on Xeon Phi versus four other architectures; a survey of HPC energy efficiency; performance modeling of OpenMP, MPI and hybrid scientific applications using weak scaling; an exploration of anywhere, anytime cluster monitoring; and a framework for data-intensive cloud storage. Read more…
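As background on the weak-scaling study mentioned above: weak scaling holds the problem size per process fixed while the process count grows, so ideal behavior is constant runtime. A standard way to express the resulting efficiency (a textbook definition, not a formula taken from the paper itself) is

$$E_{\text{weak}}(p) = \frac{T(1)}{T(p)},$$

where $T(p)$ is the wall-clock time on $p$ processes with total problem size scaled in proportion to $p$; values near 1 indicate good weak scaling.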
March 21, 2013
Penguin Computing continues to see growing demand for servers that go heavy on GPUs (or other coprocessors). Based on feedback from one such customer, it has designed the Relion 2808GT server, which it says now has the highest compute density of any server on the market. Read more…
The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they run into limits when models grow too large or when many iterations must be completed on a short timeline, leaving too little compute capacity available. In a hybrid approach, cloud computing offers a flexible and cost-effective complement, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations: complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS’s latest hardware instances, providing significant runtime acceleration.
Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.