March 16, 2023
Sometime later this year, perhaps around July, the Department of Defense is expected to announce the sites and focus of up to nine hubs associated with the Micr…
December 8, 2022
The U.S. Department of Defense brought its JEDI cloud procurement saga to a diplomatic end, resolving the feud between Amazon and Microsoft over the multi-billion dollar contract. The DoD split a $9 billion contract among the top four cloud providers – Google, Amazon, Microsoft, and Oracle – for the Joint Warfighting Cloud Capability initiative, which will bring the defense branches – Air Force, Army...
August 31, 2020
The Pentagon’s top research agency is extending its technology chops with the appointment of a new director with extensive industry experience in strategic se…
September 26, 2019
TX-GAIA (Green AI Accelerator), the new 4.7-petaflops system built by HPE and installed at MIT Lincoln Laboratory's Supercomputing Center (LLSC) in Holyoke, M…
June 28, 2013
Indiana University won $910,000 from the United States Department of Defense to study problems surrounding software-defined networking, including the security of such networking systems.
June 12, 2013
A Cray supercomputer will help with the design of Australia's next-generation submarine platform. The government's Department of Defence is expected to take delivery of the new system in July.
July 17, 2012
The Department of Defense has announced a cloud computing strategy that aligns the agency with Federal efficiency standards. It details the transition from traditional IT services, including methods to promote adoption, establish an enterprise cloud infrastructure and consolidate datacenter resources. Beyond technical details, the program also aims to overcome any cultural challenges associated with migration to cloud technology.
February 25, 2010
Supercomputer maker off to a running start in 2010.
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even behave dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
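As a rough illustration of that staged flow – a minimal sketch assuming a generic clean-then-normalize pipeline, not the specific recommendations in the whitepaper – each stage below transforms a batch of records and validates its output before handing data downstream, so a broken stage fails fast instead of silently degrading the model:

    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Stage:
        # Hypothetical stage: a transform plus a validation check on its output.
        name: str
        transform: Callable[[list], list]
        validate: Callable[[list], bool]

    def run_pipeline(records: list, stages: Iterable[Stage]) -> list:
        # Push the batch through each stage, failing fast on bad output.
        for stage in stages:
            records = stage.transform(records)
            if not stage.validate(records):
                raise ValueError(f"validation failed after stage '{stage.name}'")
        return records

    stages = [
        Stage(name="clean",
              transform=lambda rs: [r for r in rs if r.get("text")],  # drop empty records
              validate=lambda rs: all(r["text"].strip() for r in rs)),
        Stage(name="normalize",
              transform=lambda rs: [{**r, "text": r["text"].lower()} for r in rs],
              validate=lambda rs: all(r["text"] == r["text"].lower() for r in rs)),
    ]

    raw = [{"text": "Hello HPC"}, {"text": ""}, {"text": "AI data"}]
    print(run_pipeline(raw, stages))  # [{'text': 'hello hpc'}, {'text': 'ai data'}]

Running the sketch drops the empty record in the clean stage and lowercases the rest; in a real pipeline, per-stage validation hooks like these are where bottlenecks and bad data get caught before they reach training.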
Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, humanities, and social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.
KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.
Read this case study to learn how KIT implemented its supercomputer, powered by Lenovo ThinkSystem servers featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.