Dell Aims PowerEdge C-Series Platform for HPC and Beyond

By Tiffany Trader

June 30, 2015

Dell has positioned its latest PowerEdge C-series platform to meet the needs of both traditional HPC and the hyperscale market. The recently hatched PowerEdge C6320 is outfitted with the latest generation Intel Xeon E5-2600 v3 processors, providing up to 18 cores per socket (144 cores per 2U chassis), up to 512GB of DDR4 memory and up to 72TB of flexible local storage.
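The headline core count follows directly from the chassis layout described later in the article (four two-socket nodes per 2U). A quick sketch, using only the figures quoted above:

```python
# Sanity check of the per-chassis core count quoted in the article.
cores_per_socket = 18   # top-bin Xeon E5-2600 v3 SKU
sockets_per_node = 2    # two-socket server nodes
nodes_per_chassis = 4   # the "4in2U" design: four nodes per 2U chassis

cores_per_chassis = cores_per_socket * sockets_per_node * nodes_per_chassis
print(cores_per_chassis)  # 144
```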

HPCwire spoke with Brian Payne, executive director of Dell Server Solutions, to explore how the new PowerEdge C6320 fits in with Dell’s broader portfolio and approach to the widening HPC space.

With two Intel Xeon E5-2699 v3 processors, the new server offers a 2x performance improvement on the Linpack benchmark, delivering 999 gigaflops compared with 498 gigaflops from the previous-generation PowerEdge C6220 (outfitted with Xeon E5-2697 CPUs). The C6320 also achieved a 45 percent improvement on the SPECint_rate benchmark and up to 28 percent better power efficiency on the SPECpower benchmark.
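The claimed 2x Linpack gain checks out against the raw numbers. A minimal sketch using the article's figures:

```python
# Verify the claimed Linpack speedup from the quoted gigaflop figures.
c6320_gflops = 999   # PowerEdge C6320 with 2x Xeon E5-2699 v3
c6220_gflops = 498   # previous-generation PowerEdge C6220 (Xeon E5-2697)

speedup = c6320_gflops / c6220_gflops
print(f"Linpack speedup: {speedup:.2f}x")  # ~2.01x, consistent with the 2x claim
```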

The PowerEdge C6320 employs a “4in2U” design, meaning it has four independent server nodes in a 2U chassis, which offers a density that exceeds that of traditional rack servers, and is twice as dense as a 1U server, according to the company. “It also provides an interesting and unique balance of memory and storage and connectivity options,” Payne noted.

In the HPC sphere, Dell sees the C6320 as addressing pain points like scarce datacenter space, delivering double the density of a traditional rack server and allowing customers to scale more compute nodes per rack. Many datacenters in the HPC space have been engineered to take advantage of that density, meaning they already have the requisite power and cooling infrastructure in place, said Payne.

Beyond density, Dell recognizes that the HPC space is becoming more heterogeneous, with burgeoning demand for acceleration technology coming from an ever-widening user group that includes technical computing, scientific research, financial services, oil and gas exploration, and medical imaging.

Customers with problems from these and other domains that lend themselves to being solved more efficiently by GPGPUs and Xeon Phi have the option to pair the PowerEdge C6320 with the accelerator-optimized PowerEdge C4130. Introduced back in Q4 of 2014, the PowerEdge C4130 is a 1U, 2-socket server capable of supporting up to four full-powered GPUs or Xeon Phis.

Dell says its PowerEdge C4130 offers 33 percent better GPU/accelerator density than its closest competitors and 400 percent more PCIe GPU/accelerators per processor per rack than a comparable HP system. A single 1U server delivers 7.2 teraflops and has a performance/watt ratio of up to 4.17 gigaflops per watt.
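Taken together, the quoted peak performance and efficiency figures imply a power envelope for the 1U box. A back-of-envelope sketch (numbers are the article's; the implied wattage is derived, not quoted by Dell):

```python
# Derive the implied power draw from the quoted C4130 figures.
peak_tflops = 7.2        # quoted peak for a single 1U C4130 server
gflops_per_watt = 4.17   # quoted performance-per-watt ratio

implied_watts = peak_tflops * 1000 / gflops_per_watt
print(f"Implied power draw: {implied_watts:.0f} W")  # roughly 1.7 kW
```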

Dell works closely with the major coprocessor suppliers to align roadmaps and ensure that future developments can be deployed in a timely manner. Currently, the C4130 supports NVIDIA's Tesla K40 and K80 parts; Intel Xeon Phi 7120P, 5110P and 3120P SKUs; and AMD's FirePro line, including the S9150 and S9100 graphics cards.

Advanced seismic data processing is one of the segments benefiting from accelerator technology. Dell has already scored a win in this market by delivering a combination of the 4in2U form factor and the C4130 server to a customer in the undersea oil and gas space. The unnamed business was able to double compute capacity with 50 percent fewer servers, supporting new proprietary analytics, according to Dell.

Dell’s marquee customer in the academic space is the University of California San Diego, which relied on the new PowerEdge C-series for its Comet cluster. The new petascale supercomputer has been described as “supercomputing for the 99 percent” because it will serve the large number of researchers who don’t have the resources to build their own cluster. Deployed by the San Diego Supercomputer Center (SDSC), Comet leverages 27 racks of PowerEdge C6320, totaling 1,944 nodes or 46,656 cores, a five-fold increase in compute capacity compared with SDSC’s previous system.
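The Comet figures are internally consistent. A quick sketch (the per-node core count below is derived from the article's totals, not quoted directly):

```python
# Cross-check the Comet cluster figures quoted above.
racks = 27
nodes = 1944
cores = 46656

nodes_per_rack = nodes // racks   # 72 nodes per rack, i.e. 18 "4in2U" chassis
cores_per_node = cores // nodes   # 24 cores per node, implying 2x 12-core Xeons
print(nodes_per_rack, cores_per_node)  # 72 24
```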

Payne noted that SDSC was able to get the cluster powered up and running test workloads in under two weeks, months ahead of Dell’s general availability, which begins next month. Payne pointed to the packaging of the platform as a key enabler. “Instead of racking up four discrete rack servers, having those in a single chassis simplifies that process and can help with the speed of deployment,” he said.

“Our goal is to democratize technology and help the [HPC] industry move forward to drive innovations and [discovery],” he stated. “The way we can do that is by driving standardization and by bringing down the marginal cost of compute – to increase their productivity and also engage with them to understand the nuances and challenges that they have and adapt to those. In the case of San Diego Supercomputing Center, they had a timeline that didn’t necessarily line up with our product general release timeline and we found a way to adapt and respond to their timing needs to fulfill the demand for this latest platform.”

Payne added that Dell is opening up market opportunities beyond high-performance computing. The PowerEdge C6320 along with its embedded management software will be used as a host platform for hyper-converged systems such as Dell Engineered Solutions for VMware EVO: RAIL and Dell’s XC Series of Web-scale Converged Appliances.

By targeting the hyper-converged market, Dell was able to design a new management capability into this product class: iDRAC8 with Lifecycle Controller. The tool allows customers to rapidly deploy, monitor and update their infrastructure layer. Larger high-performance cluster users may have the means to build their own tools and capabilities. For everyone else, Dell is making this technology available in its PowerEdge C-Series line; previously it had only been available in the mainstream PowerEdge lineup. For those that don’t need this capability, Dell can still deliver the baseline capabilities without the added cost or complexity burden.

“We are seeing more applications of high-performance computing in mainstream industry, outside the domain of traditional national labs, traditional universities,” said Payne, addressing the symbiosis that is occurring at the interplay of HPC, enterprise and big data. “Going into R&D departments, in oil and gas and other segments that are building out big systems, you see some big data problems being treated very similarly to the way high-performance computing problems are solved.

“You have to think about the skill set and the staff in the IT department that is responsible for deploying and administering this infrastructure, and many times that staff is hosting and supporting a diverse set of workloads for the company – from email to database and now high-performance computing as well as some Web technologies. These folks were trained and accustomed to using server OEM tools to manage the infrastructure and they rely on those versus building their own. Now we have extended and given them something that they are familiar with that makes it easier for them to take on a high-performance computing project.”

The new server starts at roughly $16,600 and includes the chassis and four C6320 nodes (2x Xeon E5-2603 v3 CPUs, 2x 8GB DDR4 memory, one 2.5-inch 7,200rpm 250GB SATA drive, iDRAC8 Express, and a 3-year warranty). More details, including networking options, are available on Dell’s product page.
