March 21, 2023
If you are a die-hard Nvidia loyalist, be ready to pay a fortune to use its AI factories in the cloud. Renting the GPU company's DGX Cloud, an all-inclusive AI supercomputer in the cloud, starts at $36,999 per instance per month. The rental includes access to a cloud computer with eight Nvidia H100 or A100 GPUs and 640GB... Read more…
May 16, 2022
Almost exactly a year ago, Google launched its Tensor Processing Unit (TPU) v4 chips at Google I/O 2021, promising twice the performance compared to the TPU v3. At the time, Google CEO Sundar Pichai said that Google’s datacenters would “soon have dozens of TPU v4 Pods, many of which will be... Read more…
March 10, 2022
Add Amazon Web Services to the growing list of companies (tech and otherwise) that are curtailing business with Russia in opposition to President Putin’s invasion of Ukraine. As reported in the New York Times and then by Amazon itself, Amazon Web Services is blocking new sign-ups from Russia and Belarus. Existing customers are not impacted. “We’ve suspended shipment of retail... Read more…
June 11, 2021
Last fall, Bill Magro joined Google as CTO of HPC, a newly created position, after two decades at Intel, where he was responsible for the company's HPC strategy. Read more…
March 19, 2021
Eight months after making its A2 VM cloud instances with Nvidia A100 GPUs available on Google Cloud as a beta service for customers, Google Cloud has announced that the service is now generally available. Read more…
March 18, 2021
Now that AMD has unveiled its latest third-generation Epyc CPU product line for HPC, enterprise, and cloud workloads on March 15, the company’s server and services partners continue to announce their own plans for bringing Epyc-equipped products to market. Read more…
March 16, 2021
Microsoft Azure and Oracle Cloud Infrastructure (OCI) yesterday announced general availability (GA) of instances using AMD’s new third-generation Epyc (Milan) processors. Read more…
February 16, 2021
With the one-year mark of the pandemic in the U.S. rapidly approaching and vaccinations ramping up, decision-makers and stakeholders are beginning to look back... Read more…
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.