One box to rule them all, and in the network bind them
This week the IT industry exhaled its collectively held breath as Cisco finally announced its Unified Computing System (UCS). The announcement itself was pretty thin on any actual, you know, details. Part of this reflects the marketing approach Cisco is taking with UCS: start at the CIO level, where the air is pretty rarefied, well over the heads of the various server, network and apps managers crouched defensively over their rice bowls. The presumption is that this is an effective way to dislodge its main server competition, stalwarts like IBM, HP and Dell.
Behind the marketing is a mostly enterprise play, but the company is hinting at an HPC angle for UCS. We’ll tell you what we know now, and how this might impact your high performance computing deployment plans.
First of all, what is UCS? Brian Schwartz, an engineer in Cisco's server access business unit, described it this way: "UCS is a next generation datacenter architecture that fuses computing, networking, storage access, and virtualization into a single system." The architecture will be implemented in a product line that Cisco will be rolling out in the weeks and months ahead.
While Schwartz declined to delve into specifics about the makeup of the UCS server hardware (codenamed "California"), a report by Timothy Prickett Morgan at The Register shed some light on the inner workings of the upcoming machines. From what Morgan could glean from Dante Malagrino, director of engineering at Cisco's server access and virtualization business unit, the physical heart of the system is the UCS 5100 Series blade server chassis, a 6U form factor that mounts in a standard rack. The 5100 houses the servers (Nehalem-based UCS B Series blades) and the UCS 2100 fabric extenders, which connect up to the UCS 6100 Series Fabric Interconnect. All the blades are oriented horizontally, and each chassis holds either eight half-width or four full-width server blades. The fabric extenders, up to two per chassis, link the blades to the fabric interconnect.
Schwartz himself describes the UCS 6100 Fabric Interconnect switch as the "heart and brains of the system." It implements the unified network fabric and also runs the software that controls, manages, and monitors all the chassis and blade servers. The 6100 hooks the chassis together into a cluster in which each blade runs its own OS, has its own memory, and so on. Two redundant switches can manage 40 blade chassis in a single cluster, for a total of 320 servers or roughly 2,500 cores using the upcoming quad-core Nehalem EP chips. The chassis are connected via a lossless 10 Gb Ethernet fabric, and the 6100 supports unified storage access by carrying FCoE and conventional Ethernet traffic on the same device.
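Those cluster sizing figures are easy to sanity-check. A quick back-of-the-envelope sketch, assuming a full complement of half-width blades and two quad-core Nehalem EP sockets per blade (the socket count is our assumption, since Cisco hasn't confirmed board configurations):

```python
# Back-of-the-envelope sizing for a maxed-out UCS cluster, based on the
# figures Cisco quoted: 40 chassis per redundant pair of 6100 switches
# and eight half-width blades per chassis.
CHASSIS_PER_CLUSTER = 40
BLADES_PER_CHASSIS = 8   # half-width B Series blades
SOCKETS_PER_BLADE = 2    # assumption: dual-socket Nehalem EP boards
CORES_PER_SOCKET = 4     # quad-core Nehalem EP

servers = CHASSIS_PER_CLUSTER * BLADES_PER_CHASSIS
cores = servers * SOCKETS_PER_BLADE * CORES_PER_SOCKET

print(servers, cores)  # 320 servers, 2560 cores
```

That 2,560 figure squares with the "about 2,500 cores" Cisco is quoting.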
Virtualization is a big part of this solution. Cisco has partnerships with VMware and Microsoft for the ESX Server and Hyper-V hypervisors, and UCS runs Windows and at least two flavors of Linux (SUSE and Red Hat). The switch itself is also virtualized, so that as virtual machine images move around the cluster, the network connections aren't lost. Handy.
The 6100 also hosts the management software for the UCS, Cisco UCS Manager, which is built on the BladeLogic management software that Cisco has licensed from BMC Software. This approach puts both network and server management into the network itself, and Cisco is very proud of its XML-based API, which allows adventurous users and third-party developers to build higher level tools on top of the UCS management layer.
And here is where the company starts to talk about high performance computing. For applications that want to live in really large compute grids, as in thousands of nodes, the XML API will provide the mechanism to manage these super-sized systems as a single entity. According to Schwartz, "literally anything you can do in our CLI and GUI, you can do in our XML API, and that's very attractive to system management companies and people who might do things like job scheduling." Third-party developers like Platform Computing, for example, could come in and employ the XML API to build higher levels of abstraction around user workload management and application-tailored deployment.
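To make the idea a little more concrete, here is a minimal sketch of what a scheduler driving such an XML management API might look like. Everything here is hypothetical: Cisco had not published the API's schema at announcement time, so the element names (`configProvision`, `bladeProfile`) and attribute names are invented purely for illustration.

```python
# Hypothetical sketch of a client for an XML-based management API of the
# kind Cisco describes for UCS Manager. The element and attribute names
# are invented; the real schema was not public at announcement time.
import xml.etree.ElementTree as ET

def build_provision_request(blade_dn: str, profile: str) -> bytes:
    """Build an XML request asking the manager to apply a service
    profile (boot image, network identity, etc.) to one blade."""
    root = ET.Element("configProvision")
    ET.SubElement(root, "bladeProfile", dn=blade_dn, profile=profile)
    return ET.tostring(root)

# A workload manager could emit one of these per node when re-purposing
# a cluster, e.g. flipping blades from back-office duty to analytics.
payload = build_provision_request("chassis-1/blade-3", "analytics-night")
print(payload.decode())
```

The point of the XML layer is exactly this kind of programmability: anything a human can do in the GUI, a job scheduler can do by generating and posting documents like the one above.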
Schwartz cited an HPC use case in the financial services context, where a UCS set-up could handle front or back office support during the day and then be re-provisioned at night for high end analytics. Chip design companies that currently isolate their Electronic Design Automation (EDA) workloads from their business-side applications are another example where a unified computing model could make a lot of sense. In Cisco's view, such a model spares companies from building two siloed infrastructures to support different computational requirements and would allow them to run their infrastructure something like Amazon runs EC2.
Returning to the server hardware, the one HPC-relevant feature Cisco did reveal this week is its memory expansion technology. The feature will be cooked into the blade motherboards and will provide significantly more memory capacity per server, making it ideal for virtualization and memory-bound applications. Although Schwartz couldn't provide details ahead of the Intel Nehalem EP launch, expected at the end of the month, he did say the technology will be "ideal for large data-intensive workloads," adding that Cisco has been talking under NDA with a number of people who are very interested in these large memory footprint systems.
The impact of Cisco’s UCS product line in an enterprise or HPC setting remains to be seen. The other system vendors are predictably blasé about the announcement, even if they are privately preparing their own server announcements and grand unification schemes for the datacenter. The release of Intel’s Nehalem EP chip later this month promises to set the server launch machine back into high gear, as OEMs scramble for position. But this time around, Cisco will have a lot more on the line.