Supermicro Debuts SuperServer Optimized for NVIDIA Tesla K40 GPU

November 18, 2013

SAN JOSE, Calif., Nov. 18 — Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server and storage technology and green computing, is exhibiting its latest high-performance computing (HPC) solutions at the Supercomputing 2013 (SC13) conference this week in Denver, Colorado. In sync with the launch of the NVIDIA Tesla K40 GPU accelerator, Supermicro debuts a new 4U 8x GPU SuperServer that supports new and existing active or passive GPUs (up to 300W) with an advanced cooling architecture that separates the CPU (up to 150W x2) and GPU (up to 300W x8) cooling zones onto different levels for maximum performance and reliability. In addition, Supermicro has 1U, 2U and 3U SuperServers, FatTwin, SuperWorkstation and SuperBlade platforms ready to support the new K40 GPU accelerator. These high-performance, high-density servers support up to twenty GPU accelerators per system and, in scaled-out Super Clusters, provide massive parallel processing power to accelerate the most demanding compute-intensive applications. Supermicro’s new platforms extend the industry’s most comprehensive line of server, storage, networking and server management solutions optimized for Engineering and Scientific Research, Modeling, Simulation and HPC supercomputing applications.

“Supermicro’s HPC servers and solutions deliver the performance, scalability and reliability needed to answer the most complex challenges of our time,” said Charles Liang, President and CEO of Supermicro. “Our extensive supercomputing solutions range from 5x GPU workstations to 6x GPUs in 1U/2U, 8x GPUs in 4U and 30x GPUs in 7U blade servers. Our GPU platforms are unrivaled in the industry and provide configurations optimized for virtually any scientific, engineering or big data analytics application. With the addition of a new 4U single-node, 8x GPU server and a new 2U TwinPro platform, the HPC community can build even higher-density compute clusters that deliver maximum parallel computing performance per watt, per dollar and per square foot.”

“The Tesla K40 GPU accelerator provides double the memory and 10 times higher performance than today’s fastest CPUs, enabling enterprise data center and HPC customers to solve their most complex engineering and big data analytics computing challenges,” said Sumit Gupta, general manager of Tesla Accelerated Computing Products at NVIDIA. “When combined with Supermicro’s high-density, scalable systems, the new Kepler-based accelerators deliver high performance computational horsepower with maximum energy efficiency.”

Supermicro’s new GPU accelerator-optimized server solutions on exhibit this week at SC13 include:

· NEW 4U 8x GPU SuperServer (SYS-4027GR-TR) – Ultra-high GPU density with massive parallel processing power in a 4U form factor. The system supports 8x NVIDIA Tesla K40, K20, K20X or K10 active or passive GPU accelerators (up to 300W), plus two additional full-height, full-length PCI-E 3.0 x8 slots and one PCI-E 2.0 x4 slot, dual Intel Xeon E5-2600 v2 “Ivy Bridge” processors (up to 150W), 24x registered ECC DDR3-1600 DIMM slots (up to 768GB), 2x 10GBase-T or GbE ports with 1x dedicated IPMI 2.0 port, and 24x 2.5” hot-swap SAS/SATA/SSD bays. The 30”-deep chassis features redundant Platinum Level high-efficiency 1600W power supplies (up to four) and an advanced thermal cooling architecture with two rows of mid-chassis fans and separate CPU/GPU cooling zones.

· NEW 2U TwinPro (SYS-2027PR-DTR) / TwinPro² (SYS-2027PR-HTR) – Supermicro takes its 2U Twin architecture to the next level of performance, flexibility and expandability with the high-efficiency 2-node TwinPro and high-density 4-node TwinPro². Each node supports dual Intel Xeon E5-2600 v2 processors, and the 2-node 2U TwinPro accommodates an NVIDIA Tesla GPU accelerator, with support for two additional add-on cards per node. The systems feature greater memory capacity with up to 16x DIMMs, 12Gb/s SAS 3.0 support, an NVMe-optimized PCI-E SSD interface, additional PCI-E expansion slots, 10GbE, and onboard QDR/FDR InfiniBand for maximized I/O.

· 1U SuperServer (SYS-1027GR-TRT2) – Supports 3x GPUs, dual Intel Xeon E5-2600 series processors (up to 130W TDP), up to 512GB memory in 16x DIMM slots and 4x hot-swap 2.5” SATA3 HDD bays. Features 1600W redundant Platinum Level high-efficiency (94%+) power supplies and smart server management tools.

· 1U SuperServer (SYS-1027GR-TQFT) – Supports 4x GPUs, dual Intel Xeon E5-2600 series processors (up to 115W TDP), up to 256GB memory and 4x hot-swap 2.5” SATA3 HDD bays. Features 1800W Platinum Level high-efficiency (94%+) power supplies and smart server management tools.

· 2U SuperServer (SYS-2027GR-TRFH) – Supports 6x GPUs, dual Intel Xeon E5-2600 series processors (up to 130W TDP), up to 256GB memory and 10x hot-swap 2.5” SATA HDD bays. Features redundant 1800W Platinum Level high-efficiency (94%+) power supplies and smart server management tools.

· 3U SuperServer® (SYS-6037R-72RFT+) – Supports 2x GPUs (passive cooling with optional GPU fan kit installed), dual Intel Xeon E5-2600 v2 series processors (up to 135W TDP), up to 1.5TB memory and 8x hot-swap 3.5” SAS2 HDD bays. Features redundant 1280W Platinum Level high-efficiency (94%) digital switching power supplies.

· 4U FatTwin™ (SYS-F627G3-FT+ / G2-FT+) – 4x hot-plug nodes supporting 12x GPUs (3x per node) and dual Intel Xeon E5-2600 series processors (up to 130W TDP) per node. Available with front I/O and 2x 3.5” or 6x 2.5” hot-swap HDD bays. Features redundant 1620W Platinum Level high-efficiency (94%+) power supplies.

· 4U/Tower SuperWorkstation (SYS-7047GR-TRF / -TPRF) – Ultimate performance (NVIDIA Maximus Technology Certified) and expandability with support for up to 5x GPUs, dual Intel Xeon E5-2600 series processors, up to 512GB memory and 8x hot-swap 3.5” HDD bays. Tower/4U rackable chassis features redundant 1620W Platinum Level high-efficiency (94%) power supplies.

· SuperBlade Solutions – The all-in-one 7U SuperBlade features redundant Platinum Level high-efficiency (94%+) power supplies, high-speed connectivity through network switch modules, including 56Gb/s FDR InfiniBand (SBM-IBS-F3616M), FC/FCoE (SBM-XEM-F8X4SM), 10GbE (SBM-XEM-X10SM) and 1/10GbE (SBM-GEM-X3S+), and centralized remote management software.

· 3x GPU SuperBlade (SBI-7127RG3) – Supports 3x NVIDIA Tesla K20X GPUs in the SXM form factor, dual Intel Xeon E5-2600 series processors, up to 256GB memory and an onboard BMC for IPMI 2.0 support. Ten blades in a 7U SuperBlade enclosure scale to industry-leading density (180x GPUs and 120x CPUs) and performance (256 TFLOPS theoretical) per 42U rack; a worked estimate of these rack-level figures follows this list.

· 2x GPU SuperBlade (SBI-7127RG-E) – Supports 2x GPUs, dual Intel Xeon E5-2600 series processors, up to 256GB memory, 1x SSD or 1x SATA-DOM, and an onboard BMC for IPMI 2.0 support. Ten blades in a 7U SuperBlade enclosure offer high density (120x GPUs and 120x CPUs) and performance (178 TFLOPS theoretical) per 42U rack.
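
The rack-level figures quoted for the two SuperBlade configurations follow from straightforward arithmetic: six 7U enclosures fit in a 42U rack, each holding ten blades. The Python sketch below reproduces that math; the per-device peak values (roughly 1.31 TFLOPS of double-precision throughput for a Tesla K20X and about 0.17 TFLOPS for an 8-core Xeon E5-2600-class CPU) are assumptions based on published peak specifications, not numbers taken from this release.

```python
# Rough sanity check of the per-rack density and theoretical-performance
# figures quoted for the SuperBlade configurations. The per-device peak
# numbers below are assumptions (approximate published double-precision
# peaks), not figures from the press release itself.

GPU_PEAK_TFLOPS = 1.31   # assumed Tesla K20X double-precision peak
CPU_PEAK_TFLOPS = 0.17   # assumed 8-core Xeon E5-2600-class DP peak

def rack_totals(gpus_per_blade, cpus_per_blade=2,
                blades_per_enclosure=10, enclosure_u=7, rack_u=42):
    """Return (GPUs, CPUs, theoretical TFLOPS) for a fully populated rack."""
    enclosures = rack_u // enclosure_u            # 6 enclosures per 42U rack
    blades = enclosures * blades_per_enclosure    # 60 blades per rack
    gpus = blades * gpus_per_blade
    cpus = blades * cpus_per_blade
    tflops = gpus * GPU_PEAK_TFLOPS + cpus * CPU_PEAK_TFLOPS
    return gpus, cpus, tflops

# 3x GPU blade (SBI-7127RG3): 180 GPUs, 120 CPUs, ~256 TFLOPS per rack
print(rack_totals(gpus_per_blade=3))
# 2x GPU blade (SBI-7127RG-E): 120 GPUs, 120 CPUs, ~178 TFLOPS per rack
print(rack_totals(gpus_per_blade=2))
```

With those assumed peaks, the totals land within about one TFLOPS of the 256 and 178 TFLOPS figures quoted above, which suggests the quoted numbers are simple sums of per-device theoretical peaks across a full rack.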

Experience Supermicro’s latest high-performance computing solutions this week at Supercomputing 2013 in Denver, Colorado, in booth #3132 at the Colorado Convention Center, November 18–22. Supermicro server solutions optimized for NVIDIA Tesla K40 GPU accelerators are also on exhibit at NVIDIA’s SC13 booth #613. For information on Supermicro’s complete line of GPU-enabled supercomputing platforms, visit www.supermicro.com/GPU.

For complete information on SuperServer solutions from Supermicro visit www.supermicro.com.

About Super Micro Computer Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions® for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides customers with the most energy-efficient, environmentally friendly solutions available on the market.

—–

Source: Super Micro Computer Inc.
