Oct. 1, 2024 — IBM has announced that NVIDIA H100 Tensor Core GPU instances are now globally available on IBM Cloud. IBM is extending its high-performance computing (HPC) offerings, giving enterprises more power and versatility to carry out research, innovation and business transformation.
With the general availability of NVIDIA H100 Tensor Core GPU instances on IBM Cloud, businesses will have access to a powerful platform for AI applications, including large language model (LLM) training. This new GPU joins IBM Cloud’s existing lineup of accelerated computing offerings to leverage during every stage of an enterprise’s AI implementation.
Clients looking to transform with AI can apply IBM’s watsonx AI studio, data lakehouse, and governance toolkit to even the most demanding, compute-intensive applications, raising the ceiling for innovation—even in the most highly regulated industries.
Cutting-edge Power
NVIDIA H100 on IBM Cloud builds on IBM’s work to support generative AI model training and inferencing. Last year, IBM began making NVIDIA A100 Tensor Core GPUs available to clients through IBM Cloud, giving them immense processing headroom to innovate with AI via IBM’s watsonx platform, or as GPUaaS for custom needs.
The new NVIDIA H100 Tensor Core GPU takes this progression a step further: NVIDIA reports it can deliver up to 30X faster inference performance than the A100. It has the potential to give IBM Cloud customers a range of processing capabilities while also addressing the cost of enterprise-wide AI tuning and inferencing. Businesses can start small, training small-scale models, fine-tuning models, or deploying applications such as chatbots, natural language search, and forecasting tools on NVIDIA L40S and L4 Tensor Core GPUs. As their needs grow, IBM Cloud customers can adjust their spend accordingly, eventually harnessing the H100 for the most demanding AI and HPC use cases.
By offering direct access to these NVIDIA GPUs on IBM Cloud, in VPC and managed Red Hat OpenShift environments, IBM is making it easier for enterprises to transform and gain a competitive advantage with generative AI. Combined with watsonx for building AI models and managing data complexity and governance, as well as IBM’s tools for security, enterprises of all sizes have the potential to tackle the challenges of scaling AI.
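As a rough illustration of what direct VPC access looks like in practice, a GPU-backed virtual server can be provisioned with the IBM Cloud CLI. This is a minimal sketch: the region, resource names, image, and especially the instance profile below are placeholders, not confirmed identifiers — the actual H100, L40S, and L4 profile names should be taken from the IBM Cloud catalog.

```shell
# Log in and target a multi-zone region (region name is illustrative).
ibmcloud login
ibmcloud target -r us-south

# Create a GPU virtual server instance in an existing VPC.
# Positional arguments: instance name, VPC, zone, profile, subnet.
# "gx3-48x240x2h100" is a placeholder profile -- look up the real
# GPU profile identifiers in the IBM Cloud catalog before running.
ibmcloud is instance-create my-gpu-instance my-vpc us-south-1 \
  gx3-48x240x2h100 my-subnet \
  --image my-gpu-image --keys my-ssh-key
```

The same provisioning step can also be expressed declaratively (for example with the IBM Cloud Terraform provider) when instances need to be created repeatably as part of a larger environment.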
Why IBM Cloud for AI and HPC?
IBM Cloud offers a comprehensive platform for enterprises to build custom AI applications, manage their data, and support security and compliance initiatives.
- Data privacy, security and compliance support: IBM Cloud applies multi-level security protocols designed to protect AI and HPC processes, guard against data leakage and address data privacy concerns. It also includes built-in controls to establish infrastructure and data guardrails for AI workloads.
- AI model governance: Operationalizing AI efficiently requires end-to-end AI lifecycle tracking, with automated processes for clarity, monitoring and cataloging. Built on the IBM watsonx AI and data platform, watsonx.governance helps direct and manage organizations’ AI activities and monitors them for quality and drift. Regulators and auditors can get access to documentation that explains a model’s behavior and predictions.
- Deployment automation: IBM Cloud automates the deployment of AI-powered applications, reducing the time and errors associated with manual configuration. It also provides essential services such as AI lifecycle management solutions, a serverless platform, storage, security and solutions to help clients monitor their compliance.
The NVIDIA H100 Tensor Core GPU instances on IBM Cloud are now available in IBM’s multi-zone regions (MZRs) in North America, Latin America, Europe, Japan and Australia. For more information, visit the IBM Cloud website.
Source: Rohit Badlaney, IBM