Dell Aims PowerEdge C-Series Platform for HPC and Beyond

By Tiffany Trader

June 30, 2015

Dell has positioned its latest PowerEdge C-series platform to meet the needs of both traditional HPC and the hyperscale market. The newly launched PowerEdge C6320 is outfitted with the latest-generation Intel Xeon E5-2600 v3 processors, providing up to 18 cores per socket (144 cores per 2U chassis), up to 512GB of DDR4 memory, and up to 72TB of flexible local storage.

HPCwire spoke with Brian Payne, executive director of Dell Server Solutions, to explore how the new PowerEdge C6320 fits in with Dell’s broader portfolio and approach to the widening HPC space.

With two Intel Xeon E5-2699 v3 processors, the new server offers a 2x performance improvement on the Linpack benchmark, delivering 999 gigaflops compared with 498 gigaflops from the previous-generation PowerEdge C6220 (outfitted with Xeon E5-2697 CPUs). The C6320 also achieved a 45 percent improvement on the SPECint_rate benchmark and up to 28 percent better power efficiency on the SPECpower benchmark.
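
Those Linpack numbers line up with the chips' theoretical peaks. As a rough back-of-the-envelope check (a sketch, not Dell's benchmark methodology; the 2.3 GHz base clock and 16 double-precision FLOPs per cycle per core for AVX2 with FMA are assumptions about the E5-2699 v3):

```python
# Back-of-the-envelope peak-FLOPS check for a dual-socket E5-2699 v3 node.
# Assumptions: 18 cores/socket, 2.3 GHz nominal base clock, and 16
# double-precision FLOPs per cycle per core (AVX2, two FMA units).
# Real AVX clocks run lower, so effective Linpack efficiency is higher.
cores_per_socket = 18
sockets = 2
base_clock_ghz = 2.3
flops_per_cycle = 16          # 2 FMA units x 4 doubles x 2 ops

peak_gflops = cores_per_socket * sockets * base_clock_ghz * flops_per_cycle
linpack_gflops = 999          # Dell's reported C6320 result

print(f"Theoretical peak: {peak_gflops:.1f} GFLOPS")             # ~1324.8
print(f"Linpack efficiency: {linpack_gflops / peak_gflops:.0%}")  # ~75%
```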

The PowerEdge C6320 employs a “4in2U” design, meaning it packs four independent server nodes into a 2U chassis, making it twice as dense as a traditional 1U rack server, according to the company. “It also provides an interesting and unique balance of memory and storage and connectivity options,” Payne noted.

In the HPC sphere, Dell sees the C6320 as addressing pain points like scarce datacenter space, delivering double the density of a traditional rack server and allowing customers to scale more compute nodes per rack. Many datacenters in the HPC space have been engineered to take advantage of that density, meaning they have the requisite power and cooling infrastructure in place, said Payne.

Beyond addressing density, Dell recognizes that the HPC space is becoming more heterogeneous, with burgeoning demand for acceleration technology coming from an ever-widening user group that spans technical computing, scientific research, financial services, oil and gas exploration, and medical imaging.

Customers in these and other domains whose problems lend themselves to being solved more efficiently by GPGPUs and Xeon Phi coprocessors have the option to pair the PowerEdge C6320 with the accelerator-optimized PowerEdge C4130. Introduced in Q4 2014, the PowerEdge C4130 is a 1U, 2-socket server capable of supporting up to four full-powered GPUs or Xeon Phis.

Dell says its PowerEdge C4130 offers 33 percent better GPU/accelerator density than its closest competitors and 400 percent more PCIe GPU/accelerators per processor per rack than a comparable HP system. A single 1U server delivers 7.2 teraflops and has a performance/watt ratio of up to 4.17 gigaflops per watt.
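Taken together, those two figures imply a power envelope for a fully loaded box. A quick sanity check using only the numbers Dell quotes (the even four-way split across accelerators is an assumption for illustration):

```python
# Implied power draw and per-accelerator share from Dell's C4130 figures.
total_tflops = 7.2            # Dell's quoted 1U aggregate performance
gflops_per_watt = 4.17        # Dell's quoted efficiency ceiling

implied_watts = total_tflops * 1000 / gflops_per_watt
print(f"Implied system power: {implied_watts:.0f} W")     # ~1727 W

# Assuming the flops come mostly from four accelerators:
print(f"Per accelerator: {total_tflops / 4:.1f} TFLOPS")  # 1.8 TFLOPS each
```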

Dell works closely with the major coprocessor suppliers to align roadmaps and ensure that future developments can be deployed in a timely manner. Currently, the C4130 supports NVIDIA’s Tesla K40 and K80 parts; Intel Xeon Phi 7120P, 5110P, and 3120P SKUs; and AMD’s FirePro line, including the S9150 and S9100 server GPUs.

Advanced seismic data processing is one of the segments benefiting from accelerator technology. Dell has already scored a win in this market by delivering a combination of the 4in2U form factor and the C4130 server to a customer in the undersea oil and gas space. The unnamed business was able to double compute capacity with 50 percent fewer servers, supporting new proprietary analytics, according to Dell.

Dell’s marquee customer in the academic space is the University of California, San Diego, which relied on the new PowerEdge C-series for its Comet cluster. The new petascale supercomputer has been described as “supercomputing for the 99 percent” because it will serve the large number of researchers who don’t have the resources to build their own clusters. Deployed by the San Diego Supercomputer Center (SDSC), Comet leverages 27 racks of PowerEdge C6320 servers, totaling 1,944 nodes or 46,656 cores, a five-fold increase in compute capacity over SDSC’s previous system.
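
Comet's headline numbers are internally consistent with the 4in2U packaging. A quick check (a sketch; the 18-chassis-per-rack and two-12-core-socket breakdown are inferences from the published totals, not an SDSC-published rack layout):

```python
# Sanity-checking SDSC Comet's node and core counts against the 4in2U design.
racks = 27
nodes = 1944
cores = 46656

nodes_per_rack = nodes // racks          # 72 nodes per rack
chassis_per_rack = nodes_per_rack // 4   # 18 2U chassis/rack (4 nodes each)
cores_per_node = cores // nodes          # 24 cores = two 12-core Xeon E5 v3

print(f"{nodes_per_rack} nodes/rack, {chassis_per_rack} chassis/rack, "
      f"{cores_per_node} cores/node")
```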

Payne noted that SDSC was able to get this cluster powered up and running test workloads in under two weeks, months ahead of Dell’s general availability, which begins next month. Payne pointed to the packaging of the platform as a key enabler. “Instead of racking up four discrete rack servers, having those in a single chassis simplifies that process and can help with the speed of deployment,” he said.

“Our goal is to democratize technology and help the [HPC] industry move forward to drive innovations and [discovery],” he stated. “The way we can do that is by driving standardization and by bringing down the marginal cost of compute – to increase their productivity and also engage with them to understand the nuances and challenges that they have and adapt to those. In the case of San Diego Supercomputing Center, they had a timeline that didn’t necessarily line up with our product general release timeline and we found a way to adapt and respond to their timing needs to fulfill the demand for this latest platform.”

Payne added that Dell is opening up market opportunities beyond high-performance computing. The PowerEdge C6320 along with its embedded management software will be used as a host platform for hyper-converged systems such as Dell Engineered Solutions for VMware EVO: RAIL and Dell’s XC Series of Web-scale Converged Appliances.

By targeting the hyper-converged market, Dell was able to design a new capability into this product class: a management tool called iDRAC8 with Lifecycle Controller, which allows customers to rapidly deploy, monitor, and update their infrastructure layer. Larger high-performance cluster operators may have the means to build their own tools; for everyone else, Dell is making this technology available in its PowerEdge C-Series line, where previously it was offered only in the mainstream PowerEdge lineup. For those who don’t need the capability, Dell can still deliver the baseline feature set without the added cost or complexity.
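
As an illustration of the kind of programmatic monitoring iDRAC8 enables, here is a minimal sketch that polls an iDRAC for basic system health over its REST interface. It assumes a Redfish-capable iDRAC (Redfish support arrived in later iDRAC8 firmware, not at launch); the address and credentials are placeholders, and System.Embedded.1 is the resource ID Dell iDRACs conventionally expose:

```python
# Hypothetical sketch: pulling basic health from an iDRAC's Redfish API.
# Assumes a Redfish-capable iDRAC (support arrived in later iDRAC8
# firmware); the IP, credentials, and resource path are placeholders.
import requests
import urllib3

urllib3.disable_warnings()      # iDRACs ship with self-signed certificates

IDRAC = "192.0.2.10"            # placeholder management address
AUTH = ("root", "calvin")       # iDRAC factory-default credentials

url = f"https://{IDRAC}/redfish/v1/Systems/System.Embedded.1"
system = requests.get(url, auth=AUTH, verify=False, timeout=10).json()

print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))
```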

“We are seeing more applications of high-performance computing in mainstream industry, outside the domain of traditional national labs, traditional universities,” said Payne, addressing the convergence of HPC, enterprise computing, and big data. “Going into R&D departments, in oil and gas and other segments that are building out big systems, you see some big data problems being treated very similarly to the way high-performance computing problems are solved.

“You have to think about the skill set and the staff in the IT department that is responsible for deploying and administering this infrastructure, and many times that staff is hosting and supporting a diverse set of workloads for the company – from email to database and now high-performance computing as well as some Web technologies. These folks were trained and accustomed to using server OEM tools to manage the infrastructure and they rely on those versus building their own. Now we have extended and given them something they are familiar with that makes it easier for them to take on a high-performance computing project.”

The new server starts at roughly $16,600 and includes the chassis and four C6320 nodes (2x Xeon E5-2603 v3 CPUs, 2x 8GB DDR4 memory, one 2.5-inch 7,200rpm 250GB SATA drive, iDRAC8 Express, and a 3-year warranty). More details, including networking options, are available on Dell’s product page.
