Penguin Joins Microserver ARMs Race

By Michael Feldman

October 18, 2012

Penguin Computing has launched its first ARM-based server platform. Known as the UDX1, the Penguin box is based on Calxeda’s latest ARM server chip, and is aimed at cloud computing, Web hosting, and, especially, data analytics – UD stands for Ultimate Data. The move puts Penguin into the front ranks of computer makers who are testing the waters for the burgeoning microserver market.

Although Penguin is best known for its HPC cluster offerings, it also sells into the enterprise space, which currently accounts for half its revenue. With established customers like Digg and Yelp, the company is looking to expand its footprint even further in the commercial arena. One way it intends to do that is via the “big data” market, an application domain that spans genomic sequencing, risk analysis for stock portfolios, retail analytics, and everything in between. Conveniently, that encompasses both the company’s HPC and enterprise customer bases.

The idea behind the UDX1 is to offer a less costly and more energy-efficient platform for these data-intensive applications. In general, x86 Xeon and Opteron servers offer more computational power than needed for applications that tend to be I/O bound. Therefore, rejiggering the compute-I/O balance by cutting back on thread/core performance can, at least in theory, offer a much more efficient solution.

That’s the premise of the microserver architecture, which uses slower but much lower-power processors, such as ARM SoCs and low-power Intel Xeons and Atoms, to drive these throughput applications. In Penguin’s case, the UDX1 uses Calxeda’s latest EnergyCore ECX-1000 ARM server SoC, a quad-core chip that tops out at 5 watts. Each 4U enclosure houses up to 12 Calxeda modules, each holding four of those SoCs.

Note that the current crop of Calxeda server chips is based on the 32-bit ARM architecture, which brings with it the annoying limitation of 4 GB of addressable memory per node. But for Hadoop-type workloads, which can slice datasets into bite-sized chunks and scale out accordingly, this is a manageable problem.
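
To make the 4 GB ceiling concrete, here is a minimal, hypothetical Python sketch of that scale-out pattern: a large input is split into chunks small enough to fit comfortably within a 32-bit node's address space, so no single node ever has to hold more than a bite-sized piece. The 1 GB chunk size and the function name are illustrative assumptions, not part of any Penguin or Calxeda software.

```python
# Hypothetical sketch: split a large input file into chunks that fit
# comfortably inside a 32-bit node's 4 GB address space, leaving
# headroom for the OS and the application itself.
CHUNK_BYTES = 1 * 1024**3  # 1 GB per chunk (assumed; well under 4 GB)

def split_into_chunks(path, chunk_bytes=CHUNK_BYTES):
    """Yield successive chunk_bytes-sized blocks of the file at path."""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_bytes)
            if not block:
                break
            yield block

# Each chunk would then be dispatched to one of the chassis's server
# nodes, Hadoop-style, and the partial results merged afterward.
```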

Since each ARM chip comprises a complete server node, the UDX1 chassis offers 48 servers in aggregate (192 cores in all). Each node can hook into 4 GB of DRAM and up to 36 1TB storage drives. Network switching is provided by an on-chip fabric supporting 10GbE connectivity between nodes, obviating the need for an external switch. In addition to on-chip Ethernet, the SoC includes integrated controllers for memory, PCIe, and SATA drives, as well as system management logic.

Since each server draws just 5 watts at full load, the whole chassis consumes only 240 watts. Not bad for 192 cores. Obviously these are not Xeon cores; the ECX-1000 tops out at 1.4 GHz, less than half the clock speed of a top-end x86 server CPU. But in its intended space of divide-and-conquer computing, there are far fewer wasted cycles spent waiting for I/O to catch up. At just a little over a watt per thread, energy efficiency is an order of magnitude better than that of conventional server platforms.
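
Those chassis-level figures follow directly from the per-chip numbers quoted above; here is a quick back-of-the-envelope check, sketched in Python for illustration:

```python
# Back-of-the-envelope check of the chassis-level figures quoted above.
modules_per_chassis = 12  # Calxeda modules per 4U enclosure
socs_per_module = 4       # ECX-1000 SoCs (server nodes) per module
cores_per_soc = 4         # quad-core, one thread per core
watts_per_soc = 5         # quoted full-load draw per SoC

nodes = modules_per_chassis * socs_per_module  # 48 server nodes
cores = nodes * cores_per_soc                  # 192 cores
chassis_watts = nodes * watts_per_soc          # 240 W
watts_per_thread = chassis_watts / cores       # 1.25 W per thread

print(nodes, cores, chassis_watts, watts_per_thread)
# -> 48 192 240 1.25
```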

According to Arend Dittmer, Penguin’s director of product marketing, a fully populated UDX1 chassis will run about $30,000 to $35,000. He says they already have a trio of orders for the new platform: one from a financial services firm and two from national labs, all for data analytics work. At this point, the systems are being targeted at experimentation rather than production, as customers kick the tires to see how well the Penguin box handles their analytics loads.

While the volume market for such microservers will be in the commercial space, Dittmer sees these systems filling a comfortable niche in HPC shops as well. For mainstream science computation, where FLOPS are king, this is not the right platform (and doesn’t try to be), he says. But since a datacenter has only so much power and floor space, it makes sense to offload the data analytics side of scientific work to more efficient hardware like the UDX1.

Penguin is not the only server maker using Calxeda silicon. UK-based Boston Limited offers a system very similar to the UDX1, which it calls Viridis. The Boston box is a 2U chassis that houses up to 48 Calxeda nodes and is aimed at essentially the same application space Penguin is targeting. According to David Power, Boston’s head of HPC, the company also has a 36-bay, 4U platform in the works, based on the same Calxeda SoCs.

Both vendors are already looking ahead to Calxeda’s plans for its 64-bit ARM SoC, which the company has code-named “Lago.” No one has committed to a date, but it’s reasonable to think that these chips should start to appear in the 2014 timeframe, with server implementations to follow shortly thereafter.

By that time, Penguin and Boston should have plenty of company. HP has been flirting with Calxeda for some time with its Project Moonshot development platform, but opted to go with Intel Atom CPUs for its initial microserver line. Dell has been dipping its toes into the microserver space as well, but gave the nod to Marvell’s quad-core Armada XP 78460 chip. IBM has yet to choose sides, but if these initial microserver platforms start to gain traction, you can bet Big Blue will figure out a way to get into the game.
