October 13, 2015
In yet another example of HPC, cloud and big data convergence, Adaptive Computing announced today that its cluster and cloud management software Moab is part of Read more…
May 12, 2014
Guest Editorial I think a lot about big data and the challenges it poses. I guess I started thinking about big data a long time before I ever heard the term. Read more…
February 27, 2014
In high performance computing, the time-honored concept of creating tailored workflows to address complex requirements is nothing new. However, with the advent Read more…
April 16, 2013
Despite the important advances that middleware enables in both the HPC and enterprise spheres, it generally fails to elicit the same excitement as, say, brand-new leadership class hardware. But middleware, such as Adaptive Computing's intelligent management engine, Moab, is cool and you don't have to take Adaptive's word for it. During the company's annual user event last week, Gartner gave Adaptive its "Cool Vendor" stamp of approval. Read more…
November 27, 2012
At SC12, Adaptive announced its Moab HPC Suite 7.2 release, which includes several productivity enhancements and introduces support for Intel Xeon Phi coprocessors. The workload management vendor also launched two new products as part of its Moab HPC Suite: Application Portal Edition and Remote Visualization Edition. Read more…
October 12, 2012
In this era of heterogeneous architectures and hybrid infrastructures, workload managers are necessarily becoming more and more sophisticated. Looking toward the future of workload management, there are three major trends: application insight, big data awareness, and HPC clouds. While inter-related, each has something important to contribute to the advancement of HPC. Read more…
April 23, 2012
During Moabcon, which took place earlier this month, Adaptive Computing highlighted its recent product refresh, laid out future plans and provided a forum for customer feedback. The well-attended event stirred lots of discussion around the relevancy of cloud computing to HPC, and contributor Steve Campbell was there to capture the proceedings. Read more…
March 20, 2012
Adaptive Computing recently released Moab 7.0, covering both the HPC Suite (basic and enterprise editions) and the Cloud Suite. While the workload management vendor has made important enhancements to its portfolio, what's even more interesting is how these offerings fit into an increasingly cloud-based IT environment. This in-depth interview with Adaptive Computing CEO Robert Clyde and Chad Harrington, Adaptive's vice president of marketing, shows how the company has leveraged its HPC roots to strengthen its cloud offerings. Read more…
The increasing complexity of electric vehicles results in large, complex computational models whose simulations demand enormous compute resources. On-premises high-performance computing (HPC) clusters and computer-aided engineering (CAE) tools are commonly used, but they run into limits when models grow too large or when many iterations must be completed in a short timeframe, leaving too few compute resources available. In a hybrid approach, cloud computing offers a flexible and cost-effective alternative, allowing engineers to use the latest hardware and software on demand. Ansys Gateway powered by AWS, a cloud-based simulation software platform, drives efficiencies in automotive engineering simulations: complete Ansys simulation and CAE/CAD workflows can be managed in the cloud with access to AWS's latest hardware instances, providing significant runtime acceleration.
Two recent studies show how Ansys Gateway powered by AWS can balance run times and costs, making it a compelling solution for automotive development.
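To make the burst-to-cloud pattern concrete, here is a minimal, hedged sketch of dispatching a CAE solver run to an on-demand AWS instance when the on-premises queue is full. It is not the Ansys Gateway interface; the AMI ID, bucket names, and solver command are hypothetical placeholders used only to illustrate the general idea of renting HPC-class hardware per job.

```python
# Minimal sketch (not the Ansys Gateway API): burst one CAE solver run to an
# on-demand AWS EC2 instance. AMI, bucket, and solver command are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
# Hypothetical solver invocation; a real deployment would stage the model
# from S3 and launch the licensed solver preinstalled on the image.
aws s3 cp s3://example-bucket/models/ev_battery.cas /scratch/
run_solver --input /scratch/ev_battery.cas --cores 96
aws s3 cp /scratch/results/ s3://example-bucket/results/ --recursive
shutdown -h now
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder image with solver preinstalled
    InstanceType="hpc6a.48xlarge",       # HPC-optimized instance family
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    InstanceInitiatedShutdownBehavior="terminate",  # stop paying when the job ends
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

The design point is simply that capacity is requested per iteration and released when the solve finishes, which is what lets a hybrid setup balance run time against cost.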
Five Recommendations to Optimize Data Pipelines
When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.
With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.
To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.
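The five recommendations themselves are in the full piece; as a generic illustration of why per-stage visibility matters, the sketch below (assumed stage names, not taken from the article) times each stage of a toy pipeline and flags the slowest one as a candidate bottleneck.

```python
# Illustrative sketch only: times each stage of a toy data pipeline and
# reports the slowest stage as a candidate bottleneck.
import time
from typing import Callable, Iterable

def timed_stage(name: str, fn: Callable, records: Iterable, report: dict):
    """Apply a stage function to each record and record its wall-clock time."""
    start = time.perf_counter()
    out = [fn(r) for r in records]
    report[name] = time.perf_counter() - start
    return out

def ingest(r):    return r.strip()
def clean(r):     return r.lower()
def featurize(r): return {"text": r, "length": len(r)}

if __name__ == "__main__":
    raw = ["  Sample Record One ", "  Sample Record Two "]
    timings = {}
    data = timed_stage("ingest", ingest, raw, timings)
    data = timed_stage("clean", clean, data, timings)
    data = timed_stage("featurize", featurize, data, timings)

    slowest = max(timings, key=timings.get)
    print(f"Per-stage seconds: {timings}; slowest stage: {slowest}")
```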