November 23, 2010
Addison and Michael revisit some news items from last week's Supercomputing Conference. Read more…
November 19, 2010
Addison and Michael consider the results of the TOP500 and Green500, pick the winners and losers of SC10, and discuss the biggest news of the week. Read more…
November 19, 2010
If there was a dominating theme at the Supercomputing Conference this year, it had to be GPU computing. Read more…
November 17, 2010
Lost in the hoopla about the ascendancy of China and GPGPUs in the TOP500 is the continuing saga of the InfiniBand-Ethernet interconnect rivalry. Read more…
November 16, 2010
Although the parallel programming landscape is relatively young, it's already easy to get lost in. Besides legacy frameworks like MPI and OpenMP, we now have NVIDIA's CUDA, OpenCL, Cilk, Intel Threading Building Blocks, Microsoft's parallel programming extensions for .NET, and a whole gamut of PGAS languages. And according to Intel's Tim Mattson, that's not necessarily a good thing. Read more…
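To make the fragmentation concrete, here is a minimal sketch (ours, not the article's) of the same SAXPY loop expressed in two of the models named above: an OpenMP-annotated host loop and a CUDA kernel. It assumes a modern CUDA toolchain with host OpenMP enabled, e.g., nvcc -Xcompiler -fopenmp saxpy.cu.

#include <cstdio>
#include <cuda_runtime.h>

// CUDA: the loop body becomes a kernel; one GPU thread per element.
__global__ void saxpyCuda(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// OpenMP: the serial loop stays intact; a pragma parallelizes it on the CPU.
void saxpyOmp(int n, float a, const float *x, float *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1024;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpyOmp(n, 2.0f, x, y);                    // y = 4.0 on the CPU
    saxpyCuda<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                    // y = 6.0 after the GPU pass
    printf("y[0] = %f\n", y[0]);
    return 0;
}

Even for a one-line loop, each model brings its own launch syntax, memory management, and tuning idioms, which is the proliferation Mattson warns about.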
November 16, 2010
NVIDIA's CUDA is easily the most popular programming language for general-purpose GPU computing. But one of the more interesting developments in the CUDA-verse doesn't really involve GPUs at all. In September, HPC compiler vendor PGI (The Portland Group Inc.) announced its intent to build a CUDA compiler for x86 platforms. The technology will be demonstrated for the first time in public at SC10 this week in New Orleans. Read more…
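PGI has not yet detailed its code generation, but conceptually a CUDA-for-x86 compiler must lower the kernel's launch grid onto ordinary CPU loops (and, in a real implementation, onto cores and vector units). The hand-written sketch below illustrates only that iteration-space mapping; it is hypothetical and not PGI's actual output.

#include <cstdio>

// A trivial CUDA kernel: each thread scales one array element.
__global__ void scale(float *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] *= a;
}

// Hypothetical x86 lowering: emulate the launch grid with nested loops,
// one iteration per block and per thread. A real compiler would also
// parallelize across cores and vectorize the inner loop.
void scale_x86(float *y, float a, int n, int gridDimX, int blockDimX) {
    for (int bx = 0; bx < gridDimX; ++bx)
        for (int tx = 0; tx < blockDimX; ++tx) {
            int i = bx * blockDimX + tx;
            if (i < n) y[i] *= a;
        }
}

int main() {
    float y[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    scale_x86(y, 3.0f, 8, 2, 4);   // same iteration space as scale<<<2, 4>>>
    printf("y[0] = %f\n", y[0]);   // expect 3.0
    return 0;
}

Calling scale_x86(y, 3.0f, 8, 2, 4) walks the same iteration space as launching scale<<<2, 4>>>(y, 3.0f, 8) on a GPU, which is the essential correspondence any such compiler must preserve.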
November 15, 2010
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. For this class of applications, a new kind of supercomputer, and a different way to assess it, will be required. That is the impetus behind the Graph 500, a set of benchmarks that aim to measure the suitability of systems for data-intensive analytics applications. Read more…
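For context, the Graph 500's reference kernel is a breadth-first search over a very large graph, scored in traversed edges per second (TEPS). The sketch below, ours rather than the benchmark's reference code, shows a minimal level-synchronous BFS in CUDA over a toy CSR graph; the scattered reads and writes into level[] are the kind of irregular, memory-bound access pattern this benchmark stresses, in contrast to the dense floating-point kernels behind LINPACK.

#include <cstdio>
#include <cuda_runtime.h>

// One level-synchronous BFS step over a CSR graph. Each thread owns one
// vertex; vertices on the current frontier relax their unvisited
// neighbors. Several threads may write the same value to level[u] at
// once; that race is benign and a standard simplification in GPU BFS.
__global__ void bfsLevel(const int *rowPtr, const int *colIdx, int *level,
                         int cur, int n, int *changed) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= n || level[v] != cur) return;
    for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
        int u = colIdx[e];
        if (level[u] < 0) { level[u] = cur + 1; *changed = 1; }
    }
}

int main() {
    // Toy undirected path graph 0-1-2-3 in CSR form; BFS from vertex 0.
    int hRow[] = {0, 1, 3, 5, 6};
    int hCol[] = {1, 0, 2, 1, 3, 2};
    int hLvl[] = {0, -1, -1, -1};
    int n = 4, *dRow, *dCol, *dLvl, *dChg;
    cudaMalloc(&dRow, sizeof hRow);  cudaMalloc(&dCol, sizeof hCol);
    cudaMalloc(&dLvl, sizeof hLvl);  cudaMalloc(&dChg, sizeof(int));
    cudaMemcpy(dRow, hRow, sizeof hRow, cudaMemcpyHostToDevice);
    cudaMemcpy(dCol, hCol, sizeof hCol, cudaMemcpyHostToDevice);
    cudaMemcpy(dLvl, hLvl, sizeof hLvl, cudaMemcpyHostToDevice);

    for (int cur = 0, chg = 1; chg; ++cur) {
        chg = 0;
        cudaMemcpy(dChg, &chg, sizeof chg, cudaMemcpyHostToDevice);
        bfsLevel<<<1, 32>>>(dRow, dCol, dLvl, cur, n, dChg);
        cudaMemcpy(&chg, dChg, sizeof chg, cudaMemcpyDeviceToHost);
    }
    cudaMemcpy(hLvl, dLvl, sizeof hLvl, cudaMemcpyDeviceToHost);
    for (int v = 0; v < n; ++v) printf("level[%d] = %d\n", v, hLvl[v]);
    return 0;
}

A real Graph 500 run replaces this toy graph with one of billions of edges, which is the regime the benchmark is designed to probe.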
November 15, 2010
SGI has made good on its promise to create a petaflop-in-a-cabinet supercomputer that can scale up to tens and even hundreds of cabinets. Developed under the code name "Project Mojo," the company has dubbed the new product Prism XL. SGI will be showcasing the system this week in its exhibit booth at the Supercomputing Conference in New Orleans. Read more…
November 15, 2010
Top seven supercomputers make it into the petaflop club. Read more…
November 14, 2010
Like every technology-based sector, high performance computing takes its biggest leaps by the force of disruptive innovation, a term coined by the man who will keynote this year's Supercomputing Conference (SC10) in New Orleans. Clayton M. Christensen doesn't know a whole lot about supercomputing, but he knows a great deal about the forces that drive it. Read more…
November 11, 2010
A short list of "can't miss" sessions at this year's Supercomputing conference. Read more…
The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they hit limits when models grow too big or when many iterations must be completed on a tight schedule, leaving engineers short of available compute. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, letting engineers use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations: complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS's latest hardware instances, providing significant runtime acceleration.
Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately undermining the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.