Dell Revs Up HPC Strategy with New Products and Market Focus

By Michael Feldman

September 9, 2010

In the HPC market, Dell has established itself as the number three system vendor, trailing only its larger competitors, HP and IBM. Known for offering no-frills performance servers at reasonable prices, Dell has garnered a particularly strong following in higher education and government labs, especially for small and mid-sized clusters. But a recent spate of purpose-built HPC products from the company points to a subtle shift in Dell’s high performance computing strategy.

During a recent conversation with Donnie Bell, senior manager of HPC Solutions in the Dell Product Group, and Tim Carroll, Dell’s HPC Global Lead, the two reps outlined how the company is treating HPC more as a distinct opportunity, and less like an extension of their enterprise business. The result is that Dell has developed more HPC-specific products and is backing that up with more system testing and validation prior to deployment. “It’s not just about throwing gear out there,” explained Bell. “It’s got to be the gear that they want, put together in the solution they want.”

The shift in strategy has come about over the last three years. Attracted by the bullish HPC market (or at least bullish forecasts thereof) and a seemingly untapped demand for high performance computing, Dell is focusing particularly on the so-called “missing middle,” a term coined by the Council on Competitiveness for the potentially large group of unserved users between entry-level and high-end HPC practitioners. “That’s the market that Michael [Dell] said we’re going to invest in,” said Bell.

Of course, what this class of users ultimately wants are turnkey systems that are as easy to use as their desktop systems and don’t require an advanced degree in high performance computing in order to maintain. So far this is beyond the reach of Dell, as well as any of its competitors. Making HPC clusters act like appliances is still the stuff of fantasy.

Where Dell is staking out new ground is in its product mix, which now includes a range of HPC-centric offerings. It wasn’t too long ago that the PowerEdge 1950 was the workhorse server for Dell’s HPC customers. For all intents and purposes, though, the 1950 was an enterprise server pressed into HPC service by necessity. Today Dell offers servers and blades aimed specifically at the performance sector, including the latest HPC-friendly gear: the PowerEdge C6100, M610x, and C410x.

The C6100 is the company’s new HPC workhorse, an ultra-dense rackmount server that encapsulates four dual-socket nodes in a 2U form factor. It offers twice the density of an average dual-socket server and is even 20 percent denser than blades. Dell accomplished this feat by sharing the internal infrastructure: power supply, fans and backplane. You can service the nodes individually, and the hard disk drives (either 2.5″ or 3.5″) are hot-pluggable.

The C6100 is available with either Intel “Nehalem” 5500 or “Westmere” 5600 processors. Outfitted with 6-core Westmere CPUs, a single 2U box will deliver 48 cores. Because of its density and power, it’s specifically targeted as a building block for HPC clusters, but can also be used for general Web and cloud installations, where maximum performance is a priority. The C6100 has been shipping since the spring.
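
For the back-of-the-envelope math, here is a minimal sketch (in Python) of how those C6100 numbers fall out. The four-node, dual-socket, six-core figures come from the specs above; the one-dual-socket-node-per-1U baseline used for the density comparison is an assumption for illustration only.

    # Core count and density sketch for a C6100 chassis (figures from the article,
    # except the assumed 1U baseline used for the density comparison).
    NODES_PER_CHASSIS = 4      # four dual-socket nodes share one enclosure
    SOCKETS_PER_NODE = 2
    CORES_PER_SOCKET = 6       # six-core Xeon 5600 "Westmere"
    CHASSIS_HEIGHT_U = 2       # 2U form factor

    cores_per_chassis = NODES_PER_CHASSIS * SOCKETS_PER_NODE * CORES_PER_SOCKET
    nodes_per_u = NODES_PER_CHASSIS / CHASSIS_HEIGHT_U
    baseline_nodes_per_u = 1.0  # assumed: one dual-socket node per 1U rack server

    print(f"cores per 2U chassis: {cores_per_chassis}")                           # 48
    print(f"density vs. 1U rack servers: {nodes_per_u / baseline_nodes_per_u:.0f}x")  # 2x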

Dell recently announced C6100 deployments at the University of Colorado and the University of Kentucky. Both systems will support a range of scientific research at those institutions, including climatology, genomics, energy studies, pharmaceutical design, and physics. The Colorado system is big enough to warrant the number 31 spot on the TOP500 list.

The brand new PowerEdge C6105 is the AMD counterpart to the C6100, offering Opteron “Lisbon” 4000 series processors in the same dense 2U enclosure. The 4000-series Opterons are the lower-power, lower-performance siblings of the Opteron 6000 processors, so the C6105 is geared more toward large-scale cloud and Web 2.0 deployments than strict HPC. Availability is still a couple of months away.

On the blade side, the dual-socket PowerEdge M610x is an M610 variant for HPC that includes two x16 PCIe Gen2 slots and two I/O mezzanine cards. (The M610, by the way, is the building block for the newly announced 300 teraflop Lonestar super at TACC.) The PCIe slots on the M610x let you install a single NVIDIA Tesla (Fermi-class) GPU card, if you want to accelerate data-parallel workloads, or perhaps a Fusion-io ioDrive Duo, if you’re looking for ultra-fast storage. The two mezzanine slots make dual-rail InfiniBand a possibility, but you can also slot in Ethernet, Fibre Channel, or whatever networking combo you might desire. Like the C6100, the M610x is available with quad-core Xeon 5500s or six-core Xeon 5600s.

Because of the extra connectivity options, the M610x is a full-height blade (unlike its half-height M610 sibling), but still fits neatly in Dell’s M1000e blade chassis. The new blade was announced in June and has been shipping for a couple of months.

If a single GPU per server isn’t enough, Dell is now offering the PowerEdge C410x, a CPU-less 3U box that can house up to 16 Tesla M2050 GPU modules. As of today, that represents the biggest commercial GPGPU box on the market. At the maximum 16-GPU configuration, the C410x can deliver 16.5 teraflops of raw performance.

Of course, tapping into that requires a CPU host, so the C410x conveniently allows connectivity for up to 8 servers. The idea here is to decouple the CPU and GPU so that a customer can mix and match the processor ratios as needed by the application. This could be especially useful in those cases that can take advantage of a high GPU:CPU ratio, like some seismic and physics codes, or where the work is such that the optimal processor ratio varies from one application to another.
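
As a rough sketch of that arithmetic, the aggregate peak and a few possible GPU-to-host ratios work out as follows. The 16-GPU maximum and up-to-8-host limit come from the article; the per-GPU figure of roughly 1.03 single-precision teraflops for a Tesla M2050 is an assumption that reproduces the quoted 16.5 teraflop total, not a Dell spec.

    # C410x sketch: aggregate peak performance and decoupled GPU:host ratios.
    GPU_SLOTS = 16
    MAX_HOSTS = 8
    PEAK_TFLOPS_PER_GPU = 1.03   # assumed M2050 single-precision peak

    print(f"aggregate peak: {GPU_SLOTS * PEAK_TFLOPS_PER_GPU:.1f} TF")  # ~16.5 TF

    # Possible GPU-per-host ratios when CPU and GPU counts are decoupled:
    for hosts in (1, 2, 4, 8):
        print(f"{hosts} host(s) -> {GPU_SLOTS // hosts} GPUs per host")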

If you’re getting the idea that Dell is a little GPU-happy these days, you’re right. According to Bell, the company believes a lot of their HPC customers will be opting for GPU acceleration now, as they chase ever denser performance. Even the new Dell Precision T7500 has a slot for a Tesla C2050 GPU, for those CUDA desktop apps that need a few hundred extra gigaflops to really shine.

“Quantitatively, there are so many more thousands of researchers doing their work on desktops,” said Dell’s Tim Carroll. “But it’s only a matter of time before those people are performing their research on a server somewhere, whether it’s their own, the institution’s, or in the cloud.”

Whether Dell’s new HPC investment yields big dividends is difficult to gauge. Because of the sharp downturn in the global economy over the last couple of years, IT spending has dipped considerably, although less so for HPC. Carroll, though, says Dell’s HPC business is “seeing growth across the board,” adding that the market seems to have really broken loose over the last three to four months.

The latest IDC numbers for 2009, which split out HPC system revenue by vendor, give Dell a 12.7 percent market share. That’s about half of IBM’s 29.6 percent share and HP’s 28.2 percent. But for mid-sized (departmental) systems, Dell is at 29.8 percent, edged out only by HP at 35.6 percent. That’s a good starting place, especially considering that the size of the HPC pie is forecast to start growing again now that the recession seems to be easing.

Despite the evolution in strategy, Dell still relies on partnerships with vendors like Platform Computing and Terascala to fill in the cluster management and HPC storage pieces of their solution, respectively. And even though the cluster maker is now designing purpose-built HPC systems, it is doing so to fulfill established market demand, rather than for the sake of invention. Contrast that with former HPC maker Sun Microsystems, and its enthusiasm for building exotic hardware, like 3,456-port InfiniBand switches and proximity communication chips.

Dell’s much more conservative innovation strategy is designed to serve the large sweet spot in the middle of the performance market, relying on the acceleration of HPC demand to drive revenue. According to Carroll, the company is still fundamentally about delivering open standards-based commodity clusters, adding, “we want HPC to be widespread and we want to be the ones who deliver that.”
