Michael Dell Talks HPC

By Michael Feldman

November 18, 2008

Dell founder and CEO Michael Dell delivered the keynote address at the Supercomputing conference this morning in Austin, Texas, offering his perspective on where high performance computing is headed. We caught up with Dell shortly before the conference to get a preview of the keynote and to ask him about some of the hot-button issues that are driving the HPC industry today.

HPCwire: We’re almost certainly going to be in a recession in the U.S. and perhaps much of the world for the immediate future. How do you think that changes the HPC market? Or does it?

Michael Dell: You’ll likely see an impact on funding. The global economic challenge is affecting every sector of society and business. It will place an even greater premium on productivity and efficiency — doing more with less. The democratization of supercomputing might even be accelerated as researchers and scientists take advantage of standards-based platforms to share compute capacity.

It’s likely we will see some consolidation in the IT sector — so decisions being made today need to be considered carefully. Dell is well positioned — with $9 billion in the bank — to provide needed stability here.

We can’t forget that supercomputing drives our competitiveness. Without it, our economies don’t grow and some of the world’s most pressing challenges won’t get solved: problems like advancing fusion power for more affordable, accessible energy, and developing nanofiltration techniques that remove pollutants from water. Today, one in six people worldwide lacks access to clean water. We must continue to invest in supercomputing capacity.

HPCwire: After more than 20 years, high-performance computing is certainly an established market. Yet the conventional wisdom is that a lot of demand goes unserved. What do you think has been holding back more users from tapping into HPC?

Dell: For too long, supercomputing was about proprietary technology. As a result, it was also about high cost.

And there were those who wanted to maintain an air of exclusivity. You can trace that back to the days of specialized processors and proprietary systems like the ILLIAC IV and the Cray-1. Things got a little better in the 1980s and 1990s.

But the real changes have come in the past decade, during which the supercomputing community has really embraced open source and standards. That’s clear when you look at what’s happened on the TOP500.

It’s rewarding to see that play out in broader access. You now have far more engineers, scientists and researchers worldwide focused on solving society’s biggest problems, which are also computational challenges.

HPCwire: What developments and technologies are going to drive this next wave of HPC?

Dell: There’s a lot going on beyond IT that’s making an impact. That includes factors like the economy, growing demand for commercial cloud computing in developed and emerging countries, and technology-industry consolidation. The growing influence of gaming technology and the public-private partnerships in the HPC space also are playing a role.

With regard to technology, you will see demand for even higher-density, more energy-efficient servers. I just saw a compelling figure on this: in 2003, a 1,260-node cluster with 3 GHz processors sustained just under 10 teraflops. Today, we can get to just under 11 teraflops with 155 servers running at 2.6 GHz. That’s a really incredible trend, and it will continue.
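[Editor’s note: those two data points make the efficiency gain easy to quantify. Below is a minimal back-of-envelope sketch in Python using only the figures cited above; the per-server numbers and the roughly 9x ratio are derived values, not figures from the interview.]

```python
# Back-of-envelope comparison of the two clusters cited in the interview.
# The servers, clock speeds, and sustained teraflops come straight from
# the quote; everything derived from them is illustrative.

clusters = {
    "2003 cluster": {"servers": 1260, "clock_ghz": 3.0, "sustained_tflops": 10.0},
    "2008 cluster": {"servers": 155,  "clock_ghz": 2.6, "sustained_tflops": 11.0},
}

for name, c in clusters.items():
    per_server_gflops = c["sustained_tflops"] * 1000 / c["servers"]
    print(f"{name}: {per_server_gflops:.1f} sustained gigaflops per server "
          f"at {c['clock_ghz']} GHz")

# Ratio of per-server performance, 2008 vs. 2003.
ratio = (11.0 / 155) / (10.0 / 1260)
print(f"Per-server improvement: roughly {ratio:.0f}x, despite the lower clock speed")
```

The roughly ninefold gain per server arrives despite a lower clock speed, which is consistent with the industry’s shift in that period to multicore processors and denser packaging rather than faster clocks.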

Processors will continue to increase in speed and power at a rapid rate, extending beyond servers to workstations. This week we’ll announce a Precision workstation that delivers a full teraflop of processing power.

The fourth wave is about standardization moving throughout the HPC ecosystem, into networking, storage, interconnects, tools and middleware. Dell is — literally — the platform for this movement — the center of the datacenter. So we play a unique role in working with our broad base of partners to drive standards throughout the stack.

HPCwire: With the drive toward hardware commoditization and system software standardization, what kinds of things can cluster vendors do to differentiate their products these days?

Dell: Sure, we want to drive standards and make IT simpler for the HPC community, but that starts with a clear understanding that what they do, and the IT needs that come with it, are inherently complex.

We know that standards aren’t solutions in and of themselves. For HPC, it’s about standards combined with customization, and services that span the high-performance computing ecosystem. While traditional HPC customers like large universities might require heavy customization, a smaller customer might prefer to buy a bundle online. Those customers can use Dell’s online configuration tool to architect and purchase their cluster.

At the system level, an example is what we’re doing with AMD’s new Shanghai processor, taking chip-level performance and increasing it with dual HyperTransport link designs. We’re doubling the available bandwidth between two processors for up to 12 percent better performance.
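[Editor’s note: to see how doubling a single link’s bandwidth could plausibly surface as a roughly 12 percent application-level gain, here is an Amdahl’s-law-style sketch in Python. The 12 percent figure comes from the interview; the link-bound runtime fraction it implies is an illustrative assumption, not something Dell specified.]

```python
# Amdahl's-law-style sketch: if a fraction f of runtime is limited by
# inter-processor link bandwidth, doubling that bandwidth halves only
# that portion of the runtime. The 12% overall gain is Dell's figure;
# the fraction f solved for below is purely illustrative.

def speedup(f, bandwidth_factor=2.0):
    """Overall speedup when only the link-bound fraction f is accelerated."""
    return 1.0 / ((1.0 - f) + f / bandwidth_factor)

target = 1.12                 # 12 percent better performance overall
f = 2 * (1 - 1 / target)      # closed-form solution of speedup(f) == target
print(f"Link-bound fraction implied by a 12% gain: {f:.0%}")
print(f"Check: speedup at f = {f:.2f} -> {speedup(f):.3f}x")
```

Under this model, a 12 percent overall gain corresponds to roughly a fifth of runtime being bound by the inter-processor link before the change; real workloads would vary.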

HPCwire: How concerned are customers, especially HPC customers, about energy-efficient computing?

Dell: Very. Supercomputing customers have always been focused on energy efficiency. This manifests itself in two ways. First, ensuring the systems they buy are the most efficient, and we’re proud to lead here with the greenest products in the industry. Today, our servers use about 25 percent less power than they did four years ago. Second, we work with customers to tune systems to their unique workloads and environments. These highly tuned, customized systems are at the heart of many of the large cloud infrastructures being built.

HPCwire: Cloud-based services seem to be getting traction in the broader enterprise market. How do you think cloud computing will play out in the HPC segment?

Dell: You’ll see more clouds in the high-performance computing space, without a doubt, but HPC customers will continue to have distinct needs.

We’ve actually created a special division for this. Our Data Center Solutions group’s sole focus is to tailor solutions for hyperscale-cloud environments. The goal is to work with customers to customize architectures based on exactly what they need, and nothing they don’t.

The DCS team has applied a lot of our HPC know-how to developing and deploying commercial cloud platforms, including Microsoft’s Windows Azure, Facebook, and Salesforce.com. Without a doubt, our work with HPC has taught us a great deal and helped position us for success in the commercial cloud.
