Supermicro Unveils Portfolio of Air and Liquid Cooled Systems Incorporating 4th Gen Intel Xeon Scalable Processors

November 16, 2022

SAN JOSE, Calif., and DALLAS, Nov. 16, 2022 — Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, has unveiled the most extensive portfolio of servers and storage systems in the industry based on the upcoming 4th Gen Intel Xeon Scalable processor, formerly codenamed Sapphire Rapids.

Supermicro continues to use its Building Block Solutions approach to deliver state-of-the-art, secure systems for the most demanding AI, Cloud, and 5G Edge requirements. The systems support high-performance CPUs, DDR5 memory with up to 2X the performance of the previous generation and DIMM capacities of up to 512GB, and PCIe 5.0, which doubles I/O bandwidth. The Intel Xeon CPU Max Series (formerly codenamed Sapphire Rapids HBM), with integrated High Bandwidth Memory (HBM), is also available on a range of Supermicro X13 systems. In addition, the systems support high ambient temperature environments of up to 40° C (104° F), are designed for air or liquid cooling for optimal efficiency, and are rack-scale optimized with open industry-standard designs and improved security and manageability.

“Supermicro is once again at the forefront of delivering the broadest portfolio of systems based on the latest technology from Intel,” stated Charles Liang, president and CEO of Supermicro. “Our Total IT Solutions strategy enables us to deliver a complete solution to our customers, which includes hardware, software, rack-scale testing, and liquid cooling. Our innovative platform design and architecture bring out the best of the 4th Gen Intel Xeon Scalable processors, delivering maximum performance, configurability, and power savings to tackle the growing demand for performance and energy efficiency. The systems are rack-scale optimized, backed by Supermicro’s significant expansion of rack-scale manufacturing to up to 3X rack capacity.”

In addition, the workload-optimized system portfolio is the ideal match for the new Intel Xeon processors’ built-in, application-optimized accelerators. The X13 systems portfolio, including the SuperBlade servers, can make optimal use of Intel Advanced Matrix Extensions (Intel AMX) for improved deep learning performance. Cloud and web service workloads on BigTwin and GrandTwin systems can leverage Intel Data Streaming Accelerator (Intel DSA) to optimize streaming data movement and transformation operations, and Intel QuickAssist Technology (Intel QAT) to accelerate cryptographic algorithms. With Intel vRAN Boost, systems such as the Hyper series accelerate 5G and Edge performance while reducing power consumption.
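
On a running system, these accelerators can be checked for visibility before tuning software to use them. The following is a minimal sketch, assuming a Linux host: AMX typically appears as CPU feature flags in /proc/cpuinfo, and DSA devices enumerated by the idxd driver typically appear under /sys/bus/dsa/devices. The exact flag names and sysfs paths are assumptions based on common kernel conventions, not anything stated in this announcement.

#!/usr/bin/env python3
"""Check whether Intel AMX flags and DSA devices are visible (Linux sketch)."""
from pathlib import Path

# Flag names commonly reported for AMX on 4th Gen Xeon (assumed, not guaranteed).
AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def cpu_flags() -> set:
    """Return the CPU feature flags reported for the first core in /proc/cpuinfo."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def dsa_devices() -> list:
    """List DSA devices exposed by the idxd driver, if the sysfs path exists."""
    sysfs = Path("/sys/bus/dsa/devices")  # assumed conventional idxd location
    return sorted(p.name for p in sysfs.iterdir()) if sysfs.exists() else []

if __name__ == "__main__":
    flags = cpu_flags()
    print("AMX flags present:", sorted(AMX_FLAGS & flags) or "none")
    print("DSA devices visible:", dsa_devices() or "none")

Frameworks with AMX-aware builds generally use the instructions automatically once the flags are present, while DSA and QAT are exposed as devices and configured through their own drivers and libraries.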

The Supermicro X13 portfolio is performance optimized, energy efficient, and rack-scale optimized, with improved manageability and security built on open industry standards.

Performance Optimized

  • Support for the most performant CPUs and GPUs up to 700W.
  • DDR5 memory at up to 4800 MT/s, which speeds up data movement to and from the CPUs, improving execution times.
  • Support for PCIe 5.0, which doubles the bandwidth to peripherals relative to PCIe 4.0, reducing communication time to storage and hardware accelerators (approximate peak-bandwidth arithmetic for both DDR5 and PCIe 5.0 follows this list).
  • Support for Compute Express Link (CXL) 1.1, which allows applications to share resources and work with much larger data sets than ever before.
  • AI and Metaverse ready with a wide range of GPUs, including NVIDIA, AMD, and Intel accelerators.
  • Support for multiple 400G InfiniBand adapters and Data Processing Units (DPUs) enables real-time collaboration with extremely low latencies.
  • Supports the Intel Xeon CPU Max Series (formerly codenamed Sapphire Rapids HBM), providing a 4X increase in memory bandwidth, and the Intel Data Center GPU Max Series (formerly codenamed Ponte Vecchio).
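
As a rough illustration of the memory and I/O figures above, the theoretical peaks work out approximately as follows. These are back-of-the-envelope numbers based on the published DDR5 and PCIe signaling rates; sustained throughput in practice depends on the platform configuration and workload.

\[
\text{DDR5-4800, one 64-bit channel:}\quad 4800~\text{MT/s} \times 8~\text{B} = 38.4~\text{GB/s}
\]
\[
\text{PCIe 5.0 x16, one direction:}\quad 16 \times 32~\text{GT/s} \times \frac{128}{130} \div 8 \approx 63~\text{GB/s}
\quad \text{(vs. roughly } 31.5~\text{GB/s for PCIe 4.0 x16)}
\]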

Energy Efficient – Reduces Datacenter OPEX

  • The systems can run in high-temperature data center environments up to 40° C (104° F), reducing cooling costs.
  • Supports free-air cooling or rack-scale liquid cooling technologies.
  • Support for multiple airflow cooling zones for maximum CPU and GPU performance.
  • In-house design of Titanium level power supplies ensures improved operational efficiency.

Improved Security and Manageability

  • NIST 800-193 compliant hardware platform Root of Trust (RoT) on every server node provides secure boot, secure firmware updates, and automatic recovery.
  • A second-generation Silicon RoT, designed around industry standards, opens up tremendous opportunities for collaboration and innovation.
  • Open, industry-standards-based attestation and supply chain assurance from motherboard manufacturing through server production to the customer: Supermicro cryptographically attests the integrity of each component and firmware image using signed certificates and secure device identity.
  • Run-time BMC protections continuously monitor threats and provide notification services.
  • Hardware TPMs provide additional capabilities and measurements needed to run systems in secure environments.
  • Remote management built on industry-standard, secure Redfish APIs enables seamless integration of Supermicro products into existing infrastructure (a minimal Redfish query sketch follows this list).
  • Comprehensive software suite that enables rack management at scale for IT infrastructure solutions deployed across the core to the edge.
  • Integrated and verified solutions with 3rd party standard hardware and firmware enable the best out-of-the-box experience for IT administrators.
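
To make the Redfish point above concrete, here is a minimal query sketch against the standard DMTF Redfish service root (/redfish/v1). The BMC address and credentials are placeholders, and verify=False reflects the self-signed certificates many BMCs ship with by default; this illustrates the standard API rather than any Supermicro-specific tooling.

#!/usr/bin/env python3
"""List systems reported by a Redfish-capable BMC (illustrative sketch)."""
import requests

BMC = "https://192.0.2.10"           # placeholder BMC address (documentation range)
AUTH = ("ADMIN", "your-password")    # placeholder credentials

# The Systems collection is defined by the Redfish standard at this path.
resp = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()

for member in resp.json().get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH,
                          verify=False, timeout=10).json()
    print(system.get("Model"), system.get("PowerState"),
          system.get("Status", {}).get("Health"))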

Support for Open Industry Standards

  • E1.S provides a future-proof platform with a common connector for all form factors, a wide range of power profiles, and improved thermal profiles.
  • OCP 3.0 compliant Advanced IO Module (AIOM) cards, providing up to 400 Gbps of bandwidth based on PCIe 5.0.
  • OCP Open Accelerator Module Universal Base Board Design for the GPU complex.
  • Support for the Open Rack V2 (ORV2) compliant DC-powered rack bus bar.
  • OpenBMC and Open BIOS (OCP Open System Firmware, OSF) support on select products.

The Supermicro X13 portfolio includes the following:

SuperBlade – Supermicro’s high-performance, density-optimized, and energy-efficient SuperBlade can significantly reduce initial capital and operational expenses for many organizations. SuperBlade utilizes shared, redundant components, including cooling, networking, and power, to deliver the compute performance of a full server rack in a much smaller physical footprint. These systems are optimized for AI, Data Analytics, HPC, Cloud, and Enterprise workloads.

GPU Servers with PCIe GPUs – Optimized for AI, Deep Learning, HPC, and high-end graphics professionals, providing maximum acceleration, flexibility, high performance, and balanced solutions. Supermicro GPU-optimized systems support advanced accelerators and deliver both dramatic performance gains and cost savings. These systems are designed for HPC, AI/ML, rendering, and VDI workloads.

Universal GPU Servers – The X13 Universal GPU systems are open, modular, standards-based servers that provide superior performance and serviceability with dual 4th Gen Intel Xeon Scalable processors and a hot-swappable, toolless design. GPU options include the latest PCIe, OAM, and NVIDIA SXM technology. These GPU servers are ideal for workloads that include the most demanding AI training performance, HPC, and Big Data Analytics.

Hyper – The X13 Hyper series brings next-generation performance to Supermicro’s range of rackmount servers, built to take on the most demanding workloads along with the storage and I/O flexibility that provides a custom fit for a wide range of application needs.

BigTwin – The X13 BigTwin systems provide superior density, performance, and serviceability with dual 4th Gen Intel Xeon Scalable processors per node and hot-swappable tool-less design. These systems are ideal for cloud, storage, and media workloads.

GrandTwin – The X13 GrandTwin is an all-new architecture purpose-built for single-processor performance, maximizing compute, memory, and efficiency to deliver the highest density. Powered by a single 4th Gen Intel Xeon Scalable processor, GrandTwin’s flexible, modular design can be easily adapted for a wide range of applications, with the ability to add or remove components as required, reducing cost. In addition, the Supermicro GrandTwin features front (cold aisle) hot-swappable nodes, which can be configured with either front or rear I/O for easier serviceability. The X13 GrandTwin is ideal for workloads such as CDN, Multi-Access Edge Computing, Cloud Gaming, and High-Availability Cache Clusters.

FatTwin – The X13 FatTwin high-density systems offer an advanced multi-node 4U twin architecture with 8 or 4 nodes (single processor per node). A front-accessible service design allows cold-aisle serviceability, with highly configurable systems optimized for data center compute or storage density. In addition, the FatTwin supports hot-swappable hybrid NVMe/SAS/SATA drive bays with up to 6 drives per node (8-node configuration) and up to 8 drives per node (4-node configuration).

Edge Servers – Optimized for telco Edge workloads, Supermicro X13 Edge systems offer high-density processing power in compact form factors. Flexible power with both AC and DC configurations available and enhanced operating temperatures up to 55° C (131° F) make these systems ideal for Multi-Access Edge Computing, Open RAN, and outdoor Edge deployments. Supermicro SuperEdge brings high-density compute and flexibility to the intelligent Edge, with three hot-swappable single-processor nodes and front I/O in a short-depth 2U form factor.

CloudDC – Ultimate flexibility in I/O and storage with 2 or 6 PCIe 5.0 slots and dual AIOM slots (PCIe 5.0; OCP 3.0 compliant) for maximum data throughput. Supermicro X13 CloudDC systems are designed for convenient serviceability with tool-less brackets, hot-swap drive trays, and redundant power supplies that ensure rapid deployment and more efficient maintenance in data centers.

WIO – Supermicro WIO systems offer a wide range of I/O options to deliver truly optimized systems for specific requirements. Users can optimize the storage and networking alternatives to accelerate performance, increase efficiency and find the perfect fit for their applications.

Petascale Storage – The X13 All-Flash NVMe systems offer industry-leading storage density and performance with EDSFF drives, allowing unprecedented capacity and performance in a single 1U chassis. The first in a coming lineup of X13 storage systems, this latest E1.S server supports both 9.5mm and 15mm EDSFF media, now shipping from all the industry-leading flash vendors.

MP Servers – The X13 MP multi-processor servers bring maximum configurability and scalability in a 2U design, delivering new levels of compute performance and flexibility with support for 4th Gen Intel Xeon Scalable processors for mission-critical enterprise workloads.

For more information about Supermicro servers with 4th Gen Intel Xeon Scalable processors, please visit: www.supermicro.com/x13.

The Supermicro X13 JumpStart program gives qualified customers early remote access to 4th Gen Intel Xeon Scalable processor-based Supermicro X13 systems for workload testing. Visit www.supermicro.com/jumpstart/x13 to learn more.

X13 Pre-Release Webinar

Learn more about the Supermicro X13 product line by registering for or viewing the X13 Pre-Release webinar, where you will get an informational preview of the broadest range of next-generation systems optimized for tomorrow’s data center workloads. Click here to tune into the webinar, starting November 17, 2022, at 10:00 AM PST.


Source: Supermicro
