Supermicro Previews New Max Performance Intel-based X14 Servers for AI and HPC Workloads

August 29, 2024

SAN JOSE, Calif., Aug. 29, 2024 — Supermicro, Inc. is previewing new, completely redesigned X14 server platforms that leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro’s efficiency-optimized X14 servers launched in June 2024, the new systems feature significant upgrades across the board: a never-before-seen 256 performance cores (P-cores) in a single node, support for MRDIMMs at up to 8800 MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs.
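For context, the 256-core figure is consistent with a dual-socket node built around top-bin Intel Xeon 6900 series processors, which offer up to 128 P-cores per socket (the specific SKU is an assumption, not stated in the announcement):

\[
2\ \text{sockets} \times 128\ \text{P-cores per socket} = 256\ \text{P-cores per node.}
\]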

This combination can drastically accelerate AI and compute workloads and significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro’s Early Ship Program or test them remotely with Supermicro JumpStart.

“We continue to add to our already comprehensive Data Center Building Block Solutions with these new platforms, which will offer unprecedented performance and new advanced features,” said Charles Liang, president and CEO of Supermicro. “Supermicro is ready to deliver these high-performance solutions at rack scale with the industry’s most comprehensive direct-to-chip liquid-cooled total rack integration services and a global manufacturing capacity of up to 5,000 racks per month, including 1,350 liquid-cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions that accelerate our time-to-delivery like never before, while also reducing TCO.”


These new X14 systems feature completely redesigned architectures, including new 10U and multi-node form factors that support next-generation GPUs and higher CPU densities, and updated memory configurations with 12 memory channels per CPU and new MRDIMMs, which provide up to 37% better memory performance than DDR5-6400 DIMMs. In addition, upgraded storage interfaces support higher drive densities, and more systems now integrate liquid cooling directly into the server architecture.
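The 37% claim tracks with the raw data-rate uplift, assuming peak per-channel bandwidth scales linearly with transfer rate (a simplification; the release does not specify the benchmark behind the figure):

\[
\frac{8800\ \text{MT/s}}{6400\ \text{MT/s}} = 1.375 \approx 37.5\%\ \text{higher peak transfer rate per channel.}
\]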

The new additions to the Supermicro X14 family comprise more than ten new systems, several of which are completely new architectures in three distinct, workload-specific categories:

  • GPU-optimized platforms designed for pure performance and enhanced thermal capacity to support the highest-wattage GPUs. System architectures have been built from the ground up for large-scale AI training, LLMs, generative AI, 3D media, and virtualization applications.
  • High compute-density multi-nodes including SuperBlade and the all-new FlexTwin, which leverage direct-to-chip liquid cooling to significantly increase the number of performance cores in a standard rack compared to previous generations of systems.
  • Market-proven Hyper rackmounts, which combine single- or dual-socket architectures with flexible I/O and storage configurations in traditional form factors to help enterprises and data centers scale up and out as their workloads evolve.

Supermicro X14 performance-optimized systems will support the soon-to-be-released Intel Xeon 6900 series processors with P-cores, and will also offer socket compatibility with Intel Xeon 6900 series processors with E-cores in Q1’25. This designed-in flexibility allows systems to be optimized for either performance per core or performance per watt.

“The new Intel Xeon 6900 series processors with P-cores are our most powerful ever, with more cores and exceptional memory bandwidth and I/O to achieve new degrees of performance for AI and compute-intensive workloads,” said Ryan Tabrah, VP and GM of Xeon 6 at Intel. “Our continued partnership with Supermicro will result in some of the industry’s most powerful systems that are ready to meet the ever-heightening demands of modern AI and high-performance computing.”

When configured with Intel Xeon 6900 series processors with P-cores, Supermicro systems support the new FP16 instructions in the built-in Intel AMX accelerator to further enhance AI workload performance. These systems provide 12 memory channels per CPU with support for both DDR5-6400 DIMMs and MRDIMMs at up to 8800 MT/s, add CXL 2.0 support, and offer more extensive support for high-density, industry-standard EDSFF E1.S and E3.S NVMe drives.

Supermicro Liquid Cooling Solutions

Complementing this expanded X14 product portfolio are Supermicro’s rack-scale integration and liquid cooling capabilities. With an industry-leading global manufacturing capacity, extensive rack-scale integration and testing facilities, and a comprehensive suite of management software solutions, Supermicro designs, builds, tests, validates, and delivers complete solutions at any scale in a matter of weeks.

In addition, Supermicro offers a complete in-house developed liquid cooling solution including cold plates for CPUs, GPUs and memory, Cooling Distribution Units, Cooling Distribution Manifolds, hoses, connectors, and cooling towers. Liquid cooling can be easily included in rack-level integrations to further increase system efficiency, reduce instances of thermal throttling, and lower both the TCO and Total Cost to Environment (TCE) of data center deployments.

Upcoming Supermicro X14 performance-optimized systems include:

  • GPU-optimized – The highest performance Supermicro X14 systems designed for large-scale AI training, large language models (LLMs), generative AI and HPC, and supporting eight of the latest-generation SXM5 and SXM6 GPUs. These systems are available in air-cooled or liquid-cooled configurations.
  • PCIe GPU – Designed for maximum GPU flexibility, supporting up to 10 double-width PCIe 5.0 accelerator cards in a thermally-optimized 5U chassis. These servers are ideal for media, collaborative design, simulation, cloud gaming, and virtualization workloads.
  • Intel Gaudi 3 AI Accelerators – Supermicro also plans to deliver the industry’s first AI server based on the Intel Gaudi 3 accelerator hosted by Intel Xeon 6 processors. The system is expected to increase efficiency and lower the cost of large-scale AI model training and AI inferencing. The system features eight Intel Gaudi 3 accelerators on an OAM universal baseboard, six integrated OSFP ports for cost-effective scale-out networking, and an open platform designed to use a community-based, open-source software stack, requiring no software licensing costs.
  • SuperBlade – Supermicro’s X14 6U high-performance, density-optimized, and energy-efficient SuperBlade maximizes rack density, with up to 100 servers and 200 GPUs per rack. Optimized for AI, HPC, and other compute-intensive workloads, each node features air cooling or direct-to-chip liquid cooling to maximize efficiency and achieve the lowest PUE with the best TCO, as well as connectivity to up to four integrated Ethernet switches with 100G uplinks and front I/O supporting flexible networking options up to 400G InfiniBand or 400G Ethernet per node.
  • FlexTwin – The new Supermicro X14 FlexTwin architecture is designed to provide maximum compute power and density in a multi-node configuration, with up to 24,576 performance cores in a 48U rack (see the note following this list). Optimized for HPC and other compute-intensive workloads, each node is exclusively direct-to-chip liquid cooled to maximize efficiency and reduce instances of CPU thermal throttling, and offers low-latency front and rear I/O supporting flexible networking options up to 400G per node.
  • Hyper – X14 Hyper is Supermicro’s flagship rackmount platform, designed to deliver the highest performance for demanding AI, HPC, and enterprise applications, with single- or dual-socket configurations supporting double-width PCIe GPUs for maximum workload acceleration. Both air-cooled and direct-to-chip liquid-cooled models are available, supporting top-bin CPUs without thermal limitations while reducing data center cooling costs and increasing efficiency.
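As a back-of-the-envelope check on the FlexTwin density figure, and assuming dual-socket nodes populated with 128-core Intel Xeon 6900 series processors (the release does not name specific SKUs):

\[
\frac{24{,}576\ \text{P-cores}}{2 \times 128\ \text{P-cores per node}} = 96\ \text{nodes per 48U rack,}
\]

which would correspond to 24 chassis of four nodes each if FlexTwin follows a 2U, four-node layout.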

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro’s motherboard, power, and chassis design expertise further enables our development and production, supporting next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air, or liquid cooling).


Source: Supermicro
