Supermicro to Showcase Latest Innovations at CeBIT 2014

March 10, 2014

HANNOVER, Germany, March 10 — Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server and storage technology and green computing, is exhibiting its latest innovations in computing technology, addressing a diverse range of workloads, this week at CeBIT 2014 in Hannover, Germany. As demand grows for greater energy efficiency in lightweight, scale-out workloads for the Enterprise, Data Center and Cloud, Supermicro is leading the industry with new server platforms optimized for low power consumption and ultra-high density, with support for the Intel Atom C2000 family and single- and dual-processor Intel Xeon families. Supermicro will also spotlight its innovative Atom-based MicroBlade platform at Intel’s CeBIT 2014 OEM Showcase as the model platform for emerging Microserver markets.

“At CeBIT 2014 we are exhibiting our latest server innovations, which lead the industry in energy efficiency, density and manageability for maximized performance per watt, per dollar and per square foot,” said Charles Liang, President and CEO of Supermicro. “Our new Intel Atom-based platforms, including the 6U 112-node MicroBlade and 3U 24-node MicroCloud, are defining the future of green computing with solutions that support a wide range of workloads in Enterprise, extreme scale-out Data Center, Cloud and SMB applications. Additionally, with next-generation platforms supporting NVMe, 12Gb/s SAS3 and native 10GbE/40GbE, Supermicro offers the industry’s most extensive range of advanced power-conserving server, storage and networking solutions.”

“Intel delivers a broad set of technologies to serve a variety of workloads most efficiently. From low-power Intel Atom SoCs to high-performance Intel Xeon processors, these technologies allow our customers to offer highly customized solutions for end users,” said Shannon Poulin, Intel vice president and general manager of Data Center Group marketing. “Supermicro is taking maximum advantage of these innovations to offer its customers compelling solutions optimized for today’s diverse set of application workloads, space and budgetary requirements.”

The 6U 112-node MicroBlade is an extreme-density, ultra energy-efficient microserver system featuring ultra-low-power Intel Atom C2000 series SoC processors (up to 8 cores). Its modular blade architecture maximizes rack utilization with 112 independent power-conserving nodes (as low as 10W each), enabling up to 784 servers per 42U rack. The MicroBlade enclosure incorporates dual Chassis Management Modules (CMM) and up to four Ethernet switch modules. The switch modules, Intel Ethernet Microserver Switch Module FM5224 units co-developed by Intel and Supermicro, utilize the Intel Ethernet Switch FM5224, which offers advanced features such as 400ns cut-through latency, advanced load balancing and network overlay tunneling support. Each switch module adds SDN functionality, includes an Intel Atom C2000 control-plane processor, and supports up to 2x 40Gb/s QSFP or 8x 10Gb/s SFP+ uplinks and 56x 2.5Gb/s downlinks, reducing cabling by 99%. Up to eight hot-swappable, redundant (N+1 or N+N) 1600W Platinum-level, high-efficiency (95%) digital power supplies and heavy-duty cooling fans are also integrated into the rear of the enclosure. This innovative new server targets Cloud, colocation, dedicated hosting, Web front-end, video streaming, CDN, download service and Social Networking applications. Performance-oriented UP and DP configurations supporting Intel Xeon processors will be available in the next few months.
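
As a quick check on the density figure above, the quoted 784 servers per 42U rack follows from fitting seven 6U enclosures of 112 nodes each into a rack. The short Python sketch below is illustrative only: the variable names are hypothetical, it assumes an idealized rack fully populated with MicroBlade enclosures, and it counts only the quoted 10W-per-node floor, not switch, fan or power-supply overhead.

```python
# Illustrative rack-density arithmetic for the 6U 112-node MicroBlade,
# based only on the figures quoted in the announcement. Assumes the
# entire 42U rack is filled with MicroBlade enclosures (no top-of-rack
# switches or PDUs), which is an idealized assumption.

RACK_UNITS = 42            # standard rack height (U)
ENCLOSURE_UNITS = 6        # MicroBlade enclosure height (U)
NODES_PER_ENCLOSURE = 112  # nodes per enclosure, per the announcement
MIN_NODE_WATTS = 10        # "as low as 10W each" per the announcement

enclosures_per_rack = RACK_UNITS // ENCLOSURE_UNITS          # 7
nodes_per_rack = enclosures_per_rack * NODES_PER_ENCLOSURE   # 784

# Lower bound on compute power for a fully populated rack, ignoring
# switch modules, fans and power-supply losses.
min_compute_watts = nodes_per_rack * MIN_NODE_WATTS          # 7,840 W

print(f"Enclosures per 42U rack: {enclosures_per_rack}")
print(f"Server nodes per rack:   {nodes_per_rack}")
print(f"Compute power floor:     {min_compute_watts / 1000:.2f} kW")
```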

The new energy-efficient 3U MicroCloud (SYS-5038MA-H24TRF) features 24x nodes in 12x hot-swappable trays. Each node supports an Intel Atom C2750 (8-core) processor, 32GB of VLP DDR3 UDIMM memory, 2x 2.5″ SATA3 (6Gb/s) HDDs/SSDs and dual GbE LAN.

For mission-critical, data-intensive Enterprise applications, the new 4U 4-Way 96-DIMM SuperServer (SYS-4048B-TRFT) supports four Intel Xeon E7-8800/4800 v2 processors (155W, 15 cores each), up to 6TB of DDR3-1600 ECC R/LRDIMM memory, up to 48x 2.5″ hot-swap HDDs/SSDs, 12Gb/s SAS3, 11x PCI-E 3.0 slots, and dual 10GBase-T ports plus 1x dedicated LAN port for IPMI 2.0 remote monitoring.
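
As a side note on the memory specification, the 6TB maximum spread across 96 DIMM slots implies 64GB modules; a minimal sketch of that arithmetic (illustrative only, with hypothetical names) follows.

```python
# Implied DIMM size for the 4U 4-Way SuperServer: 6TB across 96 slots.
TOTAL_MEMORY_GB = 6 * 1024   # 6TB maximum, per the announcement
DIMM_SLOTS = 96              # 96 DIMM slots

gb_per_dimm = TOTAL_MEMORY_GB / DIMM_SLOTS   # 64.0, i.e. 64GB R/LRDIMMs
print(f"Implied module size: {gb_per_dimm:.0f} GB per DIMM")
```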

For extreme HPC applications, the new 2-node 4U FatTwin (SYS-F647G2-FT+) provides two ultra-high-performance compute nodes, each supporting dual Intel Xeon E5-2600 v2 processors (up to 130W TDP), 6x Intel Xeon Phi coprocessors and up to 1TB of ECC DDR3 memory (up to 1866MHz) across 16x DIMM slots.

Additional Advanced Server, Storage and Networking Highlights at CeBIT 2014:

  • 2U TwinPro/TwinPro² – High-efficiency 2-node TwinPro (SYS-2027PR-DTR) and high-density 4-node TwinPro² (SYS-2027PR-HTR). Each node supports dual Intel Xeon E5-2600 v2 processors. The 2-node 2U TwinPro accommodates an NVIDIA Tesla GPU accelerator and two additional add-on cards per node. The systems feature up to 1TB in 16x DIMMs, 12Gb/s SAS3 support, an NVMe-optimized PCI-E SSD interface, additional PCI-E expansion slots, and 10GbE and FDR (56Gb/s) InfiniBand options for maximized I/O
  • New 4-Way Multi-Processor (MP) Solutions – 1U, 2U (SYS-8027R-TRF+) and 4U/Tower SuperServer platforms supporting the latest quad Intel Xeon E5-4600 v2 (12-core) processor family
  • 4U 8x GPU/Xeon Phi – Intel Xeon E5-2600 v2 processor support, 1.5TB in 24x DIMMs, up to 48x 2.5″ hot-swap HDD/SSD bays, extreme parallel processing power (SYS-4027GR-TR)
  • SAS3 12Gb/s Solutions – Low-latency 12Gb/s performance in 1U (SYS-1027R-WC1RT) and 2U form factors; the 2U system (SSG-2027R-AR24NV, with NV-DIMM support) uses 3x SAS3 HDD controllers in IT mode (LSI 3008) to provide 24x lanes of 12Gb/s (8x HDDs per controller)
  • New Cluster-in-a-Box (CiB) Storage Solutions – 3U CiB Storage Server (SSG-6037B-CIB032) and 2U CiB Storage Server (SSG-2027B-CIB020H), certified and pre-installed with Windows Storage Server 2012 R2 Standard Edition
  • Memory Channel Storage Solutions – 1U 2x GPU/Xeon Phi SuperServer featuring low-latency application acceleration with persistent NAND flash-based storage in the memory channel (SYS-1027GR-TRFT+)
  • 4U FatTwin – Power-saving (16%) configuration with 8x hot-swap nodes and front I/O (SYS-F617R3-FT); GPU/Xeon Phi configuration for HPC with 4x hot-plug nodes and 12x GPU/Xeon Phi (3x per node) (SYS-F627G2-FT+); Hadoop/Big Data configuration with 4x hot-plug nodes, each supporting dual Intel Xeon E5-2600 v2 processors and 12x fixed 3.5″ HDDs (SYS-F617H6-FTPTL+)
  • 7U SuperBlade Solutions – TwinBlade (SBI-7227R-T2 and SBA-7222G-T2), 64-core AMD (G34) 4-way MP Blade (SBA-7142G-T4), GPU Blade (SBI-7127RG), PCI-E 3.0 x16 Expansion Blade (SBI-7127R-SH)

Visit Supermicro at CeBIT 2014 in Hannover, Germany, March 10th through 14th. To see the MicroBlade, 4U 4-Way and 12x Intel Xeon Phi FatTwin, visit Supermicro at Intel’s Nord LB/Forum (Pavilion 37). Supermicro’s main exhibit booth is located at Hannover Messe, Hall 2, Stand B49 (B56). For more information on Supermicro’s complete line of high-performance, high-efficiency server, storage and networking solutions, visit www.supermicro.com.

About Super Micro Computer, Inc.

Supermicro, the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally friendly solutions available on the market.

—–

Source: Super Micro Computer, Inc.
