Cisco Takes Its Shot at Grand Unification for the Datacenter

By John West and Michael Feldman

March 19, 2009

One box to rule them all, and in the network bind them

This week the IT industry exhaled its collectively held breath as Cisco finally announced its Unified Computing System (UCS). The announcement itself was pretty thin on any actual, you know, details. Part of this reflects the marketing approach Cisco is taking with UCS: start at the CIO level, where the air is pretty rarefied, well over the heads of the various server, network and apps managers crouched defensively over their rice bowls. The presumption is that this is an effective way to dislodge its main server competition — stalwarts like IBM, HP and Dell.

Behind the marketing is a mostly enterprise play, but the company is hinting at an HPC angle for UCS. We’ll tell you what we know now, and how this might impact your high performance computing deployment plans.

First of all, what is UCS? Brian Schwartz of Cisco’s Server Access business unit described it this way: “UCS is a next generation datacenter architecture that fuses computing, networking, storage access, and virtualization into a single system.” The architecture will be implemented in a product line that Cisco will be rolling out in the weeks and months ahead.

While Schwartz declined to delve into specifics about the makeup of the UCS server hardware (codenamed “California”), a report by Timothy Prickett Morgan at The Register shed some light on the inner workings of the upcoming machines. From what Morgan could glean from Dante Malagrino, director of engineering at Cisco’s server access and virtualization business unit, the physical heart of the system is the UCS 5100 series blade server chassis, a 6U form factor that mounts in a standard rack. The 5100 holds servers (Nehalem-based UCS B Series blades), the UCS 2100 fabric extenders, and the UCS 6100 Series Fabric Interconnect module. All the blades are oriented horizontally, and each chassis holds either eight half-width or four full-width server blades. The fabric extenders, up to two per chassis, link the blades to the fabric interconnect.

Schwartz himself describes the UCS 6100 Fabric Interconnect switch as the “heart and brains of the system.” It implements the unified network fabric, and also runs the software that controls, manages, and monitors all the chassis and blade servers. The 6100 hooks the chassis together into a cluster in which each blade runs its own OS, has its own memory, and so on. Two redundant switches can manage 40 blade chassis in a single cluster for a total of 320 servers, or about 2,500 cores using the upcoming quad-core Nehalem EP chips. The chassis are connected via a lossless 10 Gb Ethernet fabric, and the 6100 supports unified storage access by carrying FCoE (Fibre Channel over Ethernet) and standard Ethernet traffic on the same device.
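The cluster math above works out to roughly what Cisco claims, as a quick back-of-the-envelope sketch shows. Note that the two-socket blade assumption is ours, not something Cisco has confirmed:

```python
# Back-of-the-envelope sizing for a maximal UCS cluster, using the figures
# quoted in the article. SOCKETS_PER_BLADE is an assumption (two-socket
# Nehalem EP boards were the norm); Cisco has not published blade specs.
CHASSIS_PER_CLUSTER = 40   # one redundant pair of 6100 switches
BLADES_PER_CHASSIS = 8     # half-width B Series blades
SOCKETS_PER_BLADE = 2      # assumed two-socket boards
CORES_PER_SOCKET = 4       # quad-core Nehalem EP

servers = CHASSIS_PER_CLUSTER * BLADES_PER_CHASSIS
cores = servers * SOCKETS_PER_BLADE * CORES_PER_SOCKET

print(f"{servers} servers, {cores} cores")  # 320 servers, 2560 cores
```

With two-socket blades the exact figure is 2,560 cores, which squares with the "about 2,500" in Cisco's pitch.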

Virtualization is a big part of this solution. Cisco has partnerships with VMware and Microsoft for the ESX Server and Hyper-V hypervisors, and the system runs Windows and at least two flavors of Linux (SUSE and Red Hat). The switch itself is also virtualized, so that as virtual machine images move around the cluster, the network connections aren’t lost. Handy.

The 6100 also hosts the management software for the UCS, Cisco UCS Manager, which is built on the BladeLogic operating system that Cisco has licensed from BMC Software. This approach puts both network and server management into the network itself, and Cisco is very proud of its XML-based API that allows adventurous users and third-party developers to build higher level tools on top of the UCS management layer.

And here is where the company starts to talk about high performance computing. For applications that want to live in really large compute grids — as in thousands of nodes — the XML API will provide the mechanism to manage these super-sized systems as a single entity. According to Schwartz, “literally anything you can do in our CLI and GUI, you can do in our XML API, and that’s very attractive to system management companies and people who might do things like job scheduling.” For example, third-party developers like Platform Computing could come in and employ the XML API to build higher levels of abstraction around user workload management and application-tailored deployment.
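To make the idea concrete, here is a rough sketch of what a scheduler-side wrapper over such an XML management API might look like. The element and attribute names below are invented for illustration only; Cisco had not published the actual UCS schema at press time:

```python
import xml.etree.ElementTree as ET

# Illustrative only: "configRequest", "blade", and "serviceProfile" are
# invented names meant to show the *shape* of a request a job scheduler
# might issue, not the real UCS Manager XML schema.
def build_provision_request(blade_dn: str, profile: str) -> str:
    """Build an XML request binding a service profile to a blade."""
    req = ET.Element("configRequest", {"cookie": "example-session"})
    target = ET.SubElement(req, "blade", {"dn": blade_dn})
    ET.SubElement(target, "serviceProfile", {"name": profile})
    return ET.tostring(req, encoding="unicode")

xml = build_provision_request("chassis-1/blade-3", "analytics-night")
print(xml)
```

The point is less the schema than the workflow: a workload manager could generate requests like this to re-provision blades on the fly, which is exactly the kind of higher-level abstraction Schwartz is courting third parties to build.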

Schwartz cited an HPC use case in the financial services context where a UCS set-up could be used for front or back office support during the day, and then re-provisioned at night for high end analytics. Chip design companies that currently isolate their Electronic Design Automation (EDA) workloads from their business-side applications are another example where a unified computing model could make a lot of sense. In Cisco’s view, such a model saves companies from building two siloed infrastructures to support different computational requirements and would allow them to run their infrastructure much the way Amazon runs EC2.

Returning to the server hardware, the one feature Cisco did reveal this week that pertains to HPC is the memory expansion technology. The feature will be cooked into the blade motherboards and will provide for significantly more memory capacity per server, making it ideal for virtualization and memory-bound applications. Although Schwartz couldn’t provide any details ahead of the Intel Nehalem EP launch, which is expected at the end of the month, he did say that the technology will be “ideal for large data-intensive workloads,” adding that Cisco has been talking with a number of people under NDA who are very interested in these large memory footprint systems.

The impact of Cisco’s UCS product line in an enterprise or HPC setting remains to be seen. The other system vendors are predictably blasé about the announcement, even if they are privately preparing their own server announcements and grand unification schemes for the datacenter. The release of Intel’s Nehalem EP chip later this month promises to set the server launch machinery back into high gear, as OEMs scramble for position. But this time around, Cisco will have a lot more on the line.
