Super Micro Exhibits Latest HPC Solutions at SC13

November 18, 2013

SAN JOSE, Calif., Nov. 18 — Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server and storage technology and green computing, is exhibiting its latest high-performance computing (HPC) solutions at the Supercomputing 2013 (SC13) conference this week in Denver, Colorado. Highlighted at the show are Supermicro’s innovative high-density, energy-efficient Twin architecture and the launch of a new 4U FatTwin platform featuring two ultra-high-performance compute nodes, each supporting dual Intel Xeon E5-2600 v2 “Ivy Bridge” processors (up to 130W TDP) and up to six Intel Many Integrated Core (MIC) based Intel Xeon Phi coprocessors. Also making their debut are the new 2U TwinPro and TwinPro² SuperServers, Supermicro’s second-generation Twin architecture, featuring greater memory capacity with up to 16x DIMMs, 12Gb/s SAS 3.0 support, an NVMe-optimized PCI-E SSD interface, additional PCI-E expansion slots, 10GbE and onboard QDR/FDR InfiniBand for maximized I/O, plus support in the 2U TwinPro for a full-length, double-width Xeon Phi coprocessor per node. Supermicro will also highlight the 4U 4-node FatTwin SuperServer, which supports up to 3x Intel Xeon Phi 5110P coprocessors paired with dual Intel Xeon E5-2600 v2 processors.

This platform, configured and deployed by Atipa Technologies, powers the US Department of Energy’s (DOE) Environmental Molecular Sciences Laboratory (EMSL) supercomputer. The EMSL HPCS-4A, located on the DOE’s Pacific Northwest National Laboratory (PNNL) campus, comprises a cluster of 42x 42U racks with 1,440 compute nodes and 2,880 Intel Xeon Phi coprocessors, providing 3.38 petaflops of theoretical peak performance and 2.7 petabytes of usable storage. The HPCS-4A is expected to rank among the world’s top 20 fastest supercomputers.
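As a rough sanity check on the quoted 3.38-petaflop figure, the sketch below works backward from published per-device peaks. It assumes (the release does not state this) a Xeon Phi 5110P double-precision peak of roughly 60 cores × 1.053 GHz × 16 DP FLOPs/cycle ≈ 1.01 TFLOPS, and treats the host-CPU contribution as the remainder.

```python
# Back-of-the-envelope check of the EMSL HPCS-4A theoretical peak quoted above.
# Assumptions (not stated in the release): Xeon Phi 5110P peak of
# 60 cores * 1.053 GHz * 16 DP FLOPs/cycle ~= 1.01 TFLOPS each;
# the host-CPU share is inferred as whatever remains of the quoted total.

nodes = 1440
phis = 2880                                  # 2 coprocessors per node
phi_peak_tflops = 60 * 1.053 * 16 / 1000     # ~1.01 TFLOPS per 5110P

phi_total_pf = phis * phi_peak_tflops / 1000
quoted_peak_pf = 3.38

cpus = nodes * 2                             # dual-socket nodes
cpu_total_pf = quoted_peak_pf - phi_total_pf

print(f"Xeon Phi contribution: {phi_total_pf:.2f} PFLOPS")
print(f"Implied CPU contribution: {cpu_total_pf:.2f} PFLOPS "
      f"(~{cpu_total_pf * 1e6 / cpus:.0f} GFLOPS per CPU)")
```

The coprocessors account for roughly 2.9 of the 3.38 petaflops; the implied ~160 GFLOPS per CPU is consistent with a mid-range Ivy Bridge Xeon E5-2600 v2 part, though the release does not specify the exact SKU.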

Additional MIC-based 1U, 2U, 3U and 4U SuperServer, FatTwin, SuperBlade, MicroBlade, MicroCloud, Hyper-Speed and 4-Way systems, along with single/dual/multi-processor (UP/DP/MP) motherboards that form the foundation of Supermicro’s server Building Block Solutions, will also be on exhibit. High-bandwidth 12Gb/s storage servers featuring LSI 3008 SAS3 controllers, and 4U 72x hot-swap HDD bay Double-Sided Storage servers, maximize I/O performance with Intel Cache Acceleration Software (CAS), offering dramatically enhanced performance for data-intensive HPC applications running on dedicated servers or virtual machines (VMs). Complete rack, network and server management software solutions round out the end-to-end server, network and storage solutions that can be configured and optimized for supercomputing deployments of any scale.

“Supermicro’s Twin architecture delivers maximum performance per watt, per dollar, per square foot for many supercomputing deployments with its unique combination of high performance compute density coupled with energy saving technology and highest reliability,” said Charles Liang, President and CEO of Supermicro. “Indeed, we have invested a great amount of engineering effort to perfect our Twin server technology and now offer an unrivaled range of server solutions optimized for practically any scale application. With our new 4U 2-node FatTwin featuring dual Xeon CPUs and six Xeon Phi coprocessors per node, science, research and engineering programs can increase and accelerate project deliverables with maximized utilization of budget, resources and space.”

“The industry is moving from experimentation with heterogeneous computing to more efficient neo-heterogeneity which combines the benefits of heterogeneous hardware while still using the same, common and standard programming models for both the CPU and co-processor,” said Rajeeb Hazra, vice president and general manager of Technical Computing Group at Intel. “With solution providers such as Supermicro combining the high-performance Intel Xeon processors E5-2600 v2 with Intel Xeon Phi coprocessors in high-density, scalable server solutions, industry has the ideal pairing of technology to enable a neo-heterogeneous era. With a common underlying Intel architecture we provide developers with a rapid deployment environment for their programs along with enterprise class stability and reliability for the most demanding mission critical, compute intensive applications.”

Supermicro’s new HPC-optimized supercomputing solutions on exhibit this week at SC13 include:

  • 4U 12x Xeon Phi FatTwin (SYS-F647G2-FT+) – 2-node system featuring 6x Intel Xeon Phi coprocessors per node with front I/O, redundant Platinum Level high-efficiency power supplies and hot-swap cooling fans. Each node supports dual Intel Xeon E5-2600 v2 processors (up to 130W TDP), 16x DDR3 Reg. ECC DIMMs, a 10GbE onboard option and 8x 2.5″ hot-swap SAS/SATA/SSD bays.
  • 2U TwinPro (SYS-2027PR-DTR) / TwinPro² (SYS-2027PR-HTR) – Supermicro takes its 2U Twin architecture to the next level of performance, flexibility and expandability with the high-efficiency 2-node TwinPro and high-density 4-node TwinPro². Each node supports dual Intel Xeon E5-2600 v2 processors, and the 2-node 2U TwinPro accommodates an Intel Xeon Phi coprocessor with support for two additional add-on cards per node. The systems feature greater memory capacity with up to 16x DIMMs, 12Gb/s SAS 3.0 support, an NVMe-optimized PCI-E SSD interface, additional PCI-E expansion slots, 10GbE and onboard QDR/FDR InfiniBand for maximized I/O.

End-to-End Scalable Computing Solutions include:

  • 1U SuperServer (SYS-1027GR-TRT2) – supports 3x Intel Xeon Phi Coprocessors and dual Intel Xeon E5-2600 v2 series processors (up to 130W TDP).
  • 2U SuperServer (SYS-2027GR-TRFH) – supports 6x Intel Xeon Phi Coprocessors and dual Intel Xeon E5-2600 v2 series processors (up to 130W TDP).
  • 3U SuperServer (SYS-6037R-72RFT+) – supports 2x Intel Xeon Phi Coprocessors and dual Intel Xeon E5-2600 v2 series processors (up to 135W TDP).
  • SuperBlade (SBI-7127RG-E) – supports 2x Intel Xeon Phi coprocessors and dual Intel Xeon E5-2600 v2 series processors per blade. 10x blades per 7U SuperBlade enclosure deliver a high-density 120x Intel Xeon Phi coprocessors and 120x CPUs per 42U rack (see the density check following this list).
  • MicroBlade – powerful and flexible 6U microserver platform featuring 28x hot-swappable micro blades, supporting 112x Intel Atom processor C2000 or 28x Intel Xeon processor E5-2600 v2 / 56x E5-1600 v2 family configurations, with 2x HDDs/SSDs per node for high-performance applications.
  • MicroCloud – 3U platform in 12-node (SYS-5038ML-H12TRF), 8-node (SYS-5038ML-H8TRF) and upcoming 24-node (SYS-5038ML-H24TRF) configurations supporting independent hot-swappable nodes, Intel Xeon processor E3-1200 v3, 32GB memory, up to 2x 3.5″ or optional 4x 2.5″ HDDs, and MicroLP expansion.
  • 4U 4-Way (Quad CPU) SuperServer (SYS-4048B-TRFT) – Quad Intel Xeon processor, 96x DIMMs DDR3 Reg. ECC 1600MHz (up to 6TB), 11x PCI-E 3.0 slots, Dual 10Gb LAN, up to 48x hot-swap 2.5″ HDD/SSD bays.
  • 4U/Tower SuperWorkstation (SYS-7047GR-TRF / -TPRF) – Ultimate performance and expandability with support for up to 4x Intel Xeon Phi Coprocessors and dual Intel Xeon E5-2600 v2 series processors.
  • 2U Hyper-Speed Servers (SYS-6027AX-TRF-HFT3 / -72RF-HFT3) – Highly optimized solution for low-latency applications, featuring special firmware and hardware modifications and supporting accelerated dual Intel Xeon processor E5-2687W v2 CPUs.
  • 2U SAS3 12Gb/s SuperStorage Server (SSG-2027R-AR24NV) – 24x hot-swap 2.5″ SAS3 (12Gb/s)/SATA3 HDD/SSD bays with direct attached backplane. 3x LSI 3008 SAS3 controllers in IT mode providing 12Gb/s throughput, dual Intel Xeon processor E5-2600 v2 and support for NVDIMM technology.
  • 4U Double-Sided Storage (SSG-6047R-E1R72L) – 72x hot-swap 3.5″ HDD/SSD bays plus 2x 2.5″ internal HDD/SSD bays, dual Intel Xeon processor E5-2600 v2 support, up to 1TB ECC DDR3 in 16x DIMMs.
  • Uni-Processor (UP), Dual-Processor (DP) and Quad/Multi-Processor (MP) Motherboards
  • Complete Rack, Network and Server Management Solutions – SuperRack enclosures in 14U and 42U, 10/1GbE Top-of-Rack Network Switches, Supermicro Server Management Utilities and full Integration Services.
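As referenced in the SuperBlade item above, the quoted per-rack density follows directly from the enclosure arithmetic. The sketch below is illustrative only and assumes a 42U rack filled entirely with 7U SuperBlade enclosures, with no rack units reserved for switches or PDUs.

```python
# Density check for the SuperBlade configuration listed above.
# Assumption: the 42U rack is populated solely with 7U SuperBlade enclosures.
rack_u, enclosure_u = 42, 7
blades_per_enclosure = 10
phis_per_blade, cpus_per_blade = 2, 2        # dual Xeon E5-2600 v2 per blade

enclosures = rack_u // enclosure_u           # 6 enclosures per rack
blades = enclosures * blades_per_enclosure   # 60 blades per rack
print(blades * phis_per_blade, "Xeon Phi coprocessors per rack")  # 120
print(blades * cpus_per_blade, "CPUs per rack")                   # 120
```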

Experience Supermicro’s complete range of High Performance Computing solutions at the Supercomputing 2013 Conference this week, November 18th – 22nd, at the Colorado Convention Center in Denver, Colorado, Booth #3132.

Supermicro solutions will also be exhibited at partner booths:

  • Fusion-io (#3709) High Performance PCI-E Storage
  • Hitachi, Ltd. (#1710) SAS3 12Gb/s Storage Solutions
  • Intel (#2701) 42U SuperRack, FatTwin 150-nodes w/50x Xeon Phi 7120P, Xeon processor E5-2697 v2
  • LSI (#3616) SAS3 12Gb/s Storage Solutions

Visit www.supermicro.com for additional product information.

About Super Micro Computer, Inc.

Supermicro, the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally friendly solutions available on the market.

—–

Source: Super Micro Computer, Inc.
