Decoupling HPC From the Datacenter

By Michael Feldman

December 4, 2008

The democratization of HPC is unlikely to happen if every company and institution is forced to build and maintain multi-million dollar datacenters to house supercomputers. Power, cooling and space constraints, as well as a shortage of system administration expertise, will limit the spread of HPC datacenters.

With the concentration of computing power into blades and ever-smaller form factors, building a large datacenter has become an adventure in creative plumbing. The new 95,000 square-foot facility under construction at the University of Illinois that will house the multi-petaflop ‘Blue Waters’ supercomputer in 2011 is expected to cost $72.5 million (compared to the $194.4 million for the super itself). Add in lifetime power and cooling for what is certain to be a multi-megawatt system, and it’s reasonable to project the facility plus operational costs will approach the original outlay for the hardware itself.
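To see why the facility and operating bill can rival the hardware bill, here is a hedged back-of-envelope calculation. The power draw, electricity price, PUE and lifetime below are assumptions for illustration only, not figures from NCSA or the Blue Waters project.

```python
# Back-of-envelope facility + power estimate. All numbers except the
# $72.5M facility cost are assumptions for illustration.
it_load_mw = 10.0       # assumed average IT load for a multi-megawatt system
pue        = 1.5        # assumed power usage effectiveness (cooling overhead)
price_kwh  = 0.10       # assumed electricity price, $/kWh
years      = 5          # assumed service lifetime

hours      = years * 365 * 24
energy_kwh = it_load_mw * 1000 * pue * hours
power_cost = energy_kwh * price_kwh            # ~ $66M at these rates
facility   = 72.5e6                            # from the article
print(f"facility + power: ${(facility + power_cost) / 1e6:.0f}M")
# ~ $138M -- within striking distance of the $194.4M hardware outlay
```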

For legacy datacenters that can’t expand, especially those in urban areas, the challenge is to upgrade with hardware that fits into the existing space but doesn’t overload the energy and cooling capacity of the building. And for workstation-bound users who would like to move into the HPC realm, but don’t have a datacenter and have no plans to build one, the problem is even more obvious.

Are there alternatives? There are two relatively recent developments that could free HPC users from their datacenter habit: personal supercomputing and cloud computing.

I realize that using the cloud to alleviate the datacenter problem seems counter-intuitive. Obviously cloud computing requires datacenters too; you're just pushing the problem somewhere else. But the idea here is to get rid of on-site facilities. The big advantage is that ultra-scale datacenters can be (and often are) located where power, cooling and real estate are not at a premium, and can use economies of scale to further lower costs.

For example, Google, Microsoft and Yahoo have set up shop along the Columbia River in Oregon to tap the cheap hydro-electric power in the area. To serve its expanding cloud services, Amazon recently announced it was building three new facilities along the Columbia, along with its own 10-megawatt power substation. Google is even considering floating datacenters offshore that could be powered and cooled by the differential in ocean temperatures.

The advantages of computing in the cloud are obvious. Not only can you ditch the local datacenter, but the supercomputer as well, along with all the associated administration and maintenance costs of the hardware and system software. At the same time, you only pay for the computing you use and can scale your problem up (or down) as required.

The disadvantages are just as numerous and are well outlined in a recent article by LSU’s Thomas Sterling and Dylan Stark. In a nutshell, there are classes of HPC applications that don’t map well to the cloud as it exists today, either because of limitations in the cloud infrastructure or because of data security issues. The former has to do with the deleterious effects of virtualization and loosely-coupled clusters on performance, especially for highly-tuned, tightly-coupled HPC applications. As far as data security goes, well, let’s just say Los Alamos won’t be doing nuclear weapons simulations on Amazon’s EC2 anytime soon.
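To make the coupling issue concrete, here is a minimal ping-pong latency probe written with mpi4py (the library, NumPy and a working MPI runtime are assumed; this sketch is illustrative and does not come from the Sterling and Stark article). Tightly-coupled codes exchange many small messages, so their run time tracks this round-trip latency rather than raw flops.

```python
# Minimal MPI ping-pong latency probe (assumes mpi4py, NumPy and an MPI runtime).
# Run with: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf  = np.zeros(1, dtype=np.byte)   # 1-byte message: measures latency, not bandwidth
reps = 10000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    print(f"average round-trip latency: {(t1 - t0) / reps * 1e6:.1f} us")
```

On a dedicated cluster with a low-latency interconnect this typically reports a few microseconds; on virtualized, Ethernet-connected cloud instances it can be an order of magnitude or more higher, which is precisely the gap that hurts tightly-coupled applications.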

But even the authors seem to agree that for many capacity HPC applications, like data analysis and visualization, the cloud paradigm offers a lot more flexibility than home-grown set-ups. And this model will be especially advantageous for smaller organizations and groups that have a hard time justifying a datacenter based on peak computing requirements.

A handful of HPC services already exist. Sun’s Grid Compute Utility, IBM’s Computing on Demand and Interactive Supercomputing’s Star-P On-Demand have been available for some time. The MathWorks and Wolfram Research recently incorporated cloud computing support into MATLAB and Mathematica, respectively. And this week, Univa UD launched an HPC virtualization capability that uses Amazon EC2. I expect to see a raft of new HPC cloud offerings in 2009.
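To give a flavor of the on-demand model (a generic sketch, not any of these vendors’ actual products), the fragment below uses today’s boto3 library to rent a small batch of EC2 instances for a job and release them afterward. The machine image and instance type are placeholders, and boto3 itself post-dates this article.

```python
# Sketch: rent an eight-node compute cluster on EC2, run a job, release it.
# Assumes boto3, AWS credentials, and placeholder image/instance identifiers.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder: an HPC-ready machine image
    InstanceType="c5.18xlarge",   # placeholder: a compute-optimized type
    MinCount=8,
    MaxCount=8,
)
ids = [inst["InstanceId"] for inst in resp["Instances"]]
print("launched:", ids)

# ... stage data, run the job, pull results back ...

ec2.terminate_instances(InstanceIds=ids)   # pay only for the hours actually used
```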

Moving back down to Earth, the other potential datacenter killer is the personal supercomputer (PSC), which can inhabit the desktop, deskside or office closet. The current generation of PSCs is largely based on GPUs, which can now provide multi-teraflop acceleration. These machines were much in evidence at SC08, thanks in large part to the introduction of NVIDIA Tesla-equipped systems.

Of course, we’ve seen these personal supers come and go. Just a few years ago, Tyan Computer and Orion Multisystems came out with deskside cluster machines. But these sub-teraflop machines never caught on.

The new crop of GPU-accelerated machines seems more permanent to me. For one thing, they’re more powerful: at 4 teraflops (single precision), they’ve got some serious performance to offer. Plus, with CUDA, OpenCL, and a host of other software quickly becoming available from third-party tool makers, GPU computing appears to have established a new niche in the HPC ecosystem. With big-name players like Cray, Dell and Penguin Computing offering PSCs (with both Linux and Windows environments), there is a much better chance that these machines will endure.
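To give a sense of the CUDA-based programming model these machines expose, here is a minimal single-precision SAXPY sketch using PyCUDA (PyCUDA, NumPy and a CUDA-capable GPU are assumed; this is an illustration, not vendor sample code).

```python
# Minimal single-precision SAXPY on the GPU via PyCUDA.
# Assumes pycuda, numpy and a CUDA-capable device.
import numpy as np
import pycuda.autoinit                  # creates a CUDA context on import
import pycuda.gpuarray as gpuarray

n = 1 << 24                             # 16M float32 elements
x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
a = np.float32(2.0)

y = a * x + y                           # elementwise kernels run on the GPU
print(y.get()[:4])                      # copy a few results back to the host
```

The same pattern maps to OpenCL (for example via PyOpenCL), which is part of why the software side of GPU computing is filling in so quickly.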

Non-GPU PSCs are possible too. SiCortex already offers its own MIPS-based 72-core desktop system, although it’s mainly positioned as a development machine for the company’s larger clusters. If newcomer Convey Computer decided to shrink-wrap its new FPGA-based “hybrid core” server into a deskside or even desktop system, that could have the makings of a very interesting HPC system for personal use. For those of you who want to stick with vanilla x86 boxes, it will soon be possible to build personal multi-teraflop machines from Intel’s upcoming Nehalem processors. Further down the road, the manycore Larrabee processor — or derivatives thereof — should provide a natural computing engine for desktop teraflopping.

So which model will prevail? Here’s one possible scenario: Desktop, deskside, and office systems will eat away the low and middle end of the market from below, while HPC applications requiring really large-scale parallelism will move into the cloud. For capability supercomputing applications, perhaps clouds will emerge designed specifically for high-end HPC. It’s not too hard to imagine the NSF’s TeraGrid and the European Commission’s DEISA (Distributed European Infrastructure for Supercomputing Applications) supporting cloud services targeted for supercomputing. The U.S. DOE might develop complementary clouds for its user community.

To the extent datacenter issues inhibit HPC adoption, clouds and PSCs will look ever more attractive. I anticipate a lot of experimentation in both areas in the upcoming year.
