Decoupling HPC From the Datacenter

By Michael Feldman

December 4, 2008

The democratization of HPC is unlikely to happen if every company and institution is forced to build and maintain multi-million dollar datacenters to house supercomputers. Power, cooling and space constraints as well as a shortage of system administration expertise will limit the spread of HPC datacenters.

With the concentration of computing power into blades and ever-smaller form factors, building a large datacenter has become an adventure in creative plumbing. The new 95,000 square-foot facility under construction at the University of Illinois that will house the multi-petaflop ‘Blue Waters’ supercomputer in 2011 is expected to cost $72.5 million (compared to the $194.4 million for the super itself). Add in lifetime power and cooling for what is certain to be a multi-megawatt system, and it’s reasonable to project the facility plus operational costs will approach the original outlay for the hardware itself.
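To get a rough sense of that projection, here is a minimal back-of-the-envelope sketch in Python. The $72.5 million facility and $194.4 million system figures come from the article; the power draw, PUE, electricity rate, and service life are illustrative assumptions, not published numbers.

```python
# Back-of-the-envelope estimate: facility plus lifetime power and cooling
# versus the hardware purchase price. Only the two dollar figures cited in
# the article are real; everything else is an illustrative assumption.

facility_cost = 72.5e6        # new datacenter construction (from the article)
system_cost = 194.4e6         # the supercomputer itself (from the article)

power_draw_mw = 10.0          # assumed average draw for a multi-megawatt system
pue = 1.5                     # assumed power usage effectiveness (cooling overhead)
price_per_kwh = 0.07          # assumed electricity rate, $/kWh
service_years = 6             # assumed system lifetime

hours = service_years * 365 * 24
energy_kwh = power_draw_mw * 1000 * pue * hours
power_cost = energy_kwh * price_per_kwh

total_facility_and_ops = facility_cost + power_cost
print(f"Lifetime power and cooling: ${power_cost/1e6:.1f}M")
print(f"Facility + operations:      ${total_facility_and_ops/1e6:.1f}M")
print(f"Hardware outlay:            ${system_cost/1e6:.1f}M")
```

Under these made-up inputs the facility plus power bill lands in the same ballpark as the hardware outlay, which is the point; substitute your own numbers and the conclusion doesn't change much.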

For legacy datacenters that can’t expand, especially those in urban areas, the challenge is to upgrade with hardware that fits into the existing space but doesn’t overload the energy and cooling capacity of the building. And for workstation-bound users who would like to move into the HPC realm, but don’t have a datacenter and have no plans to build one, the problem is even more obvious.

Are there alternatives? There are two relatively recent developments that could free HPC users from their datacenter habit: personal supercomputing and cloud computing.

I realize that using the cloud to alleviate the datacenter problem seems counter-intuitive. Obviously cloud computing requires datacenters too; you're just pushing the problem somewhere else. But the idea here is to get rid of on-site facilities. The big advantage is that ultra-scale datacenters can be (and often are) located where power, cooling and real estate are not at a premium, and can use economies of scale to lower costs further.

For example, Google, Microsoft and Yahoo have set up shop along the Columbia River in Oregon to tap the cheap hydro-electric power in the area. To serve its expanding cloud services, Amazon recently announced it was building three new facilities along the Columbia, along with its own 10-megawatt power substation. Google is even considering floating datacenters offshore that could be powered and cooled by the differential in ocean temperatures.

The advantages of computing in the cloud are obvious. Not only can you ditch the local datacenter, but the supercomputer as well, along with all the associated administration and maintenance costs of the hardware and system software. At the same time, you only pay for the computing you use and can scale your problem up (or down) as required.
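A crude way to see the pay-per-use argument is to compare the amortized cost per core-hour of an owned cluster against a rented rate. All of the numbers below are hypothetical placeholders, not quotes from any provider.

```python
# Hypothetical comparison: owned cluster vs. pay-per-use cloud capacity.
# Every figure here is a placeholder chosen for illustration only.

cluster_capex = 500_000.0     # assumed purchase price of a small in-house cluster
annual_opex = 100_000.0       # assumed power, cooling, and admin per year
cores = 512                   # assumed core count
lifetime_years = 3            # assumed depreciation period

cloud_rate = 0.10             # assumed rented price per core-hour, $

total_owned_cost = cluster_capex + annual_opex * lifetime_years
available_core_hours = cores * lifetime_years * 365 * 24

for utilization in (0.1, 0.3, 0.6, 0.9):
    used_hours = available_core_hours * utilization
    owned_per_hour = total_owned_cost / used_hours
    print(f"utilization {utilization:.0%}: owned ${owned_per_hour:.2f}/core-hr "
          f"vs. cloud ${cloud_rate:.2f}/core-hr")
```

With these placeholder figures the in-house machine only wins if it stays busy most of the time, which is exactly the peak-versus-average problem raised a few paragraphs below.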

The disadvantages are just as numerous and are well outlined in a recent article by LSU's Thomas Sterling and Dylan Stark. In a nutshell, there are classes of HPC apps that don't map well to the cloud as it exists today, either because of limitations in the cloud infrastructure or because of data security issues. The former has to do with the deleterious effects of virtualization and loosely-coupled clusters on performance, especially for highly-tuned, tightly-coupled HPC applications. As far as data security goes, well, let's just say Los Alamos won't be doing nuclear weapons simulations on Amazon's EC2 anytime soon.
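To make the tightly-coupled point concrete, here is a toy model of a communication-bound timestep. The latency figures are rough, era-typical assumptions (a few microseconds for an InfiniBand cluster, on the order of a hundred microseconds or more for virtualized Ethernet), not measurements of any particular cloud.

```python
# Toy model of why tightly-coupled codes suffer on loosely-coupled,
# virtualized infrastructure: per-message latency dominates when an
# application exchanges many small messages every timestep.
# Latency values are rough, era-typical assumptions, not measurements.

compute_per_step = 5e-3       # assumed compute time per timestep, seconds
messages_per_step = 50        # assumed small halo-exchange messages per step

latencies = {
    "InfiniBand cluster": 2e-6,            # ~2 microseconds per message (assumed)
    "Virtualized Ethernet cloud": 150e-6,  # ~150 microseconds per message (assumed)
}

for name, latency in latencies.items():
    comm = messages_per_step * latency
    step = compute_per_step + comm
    efficiency = compute_per_step / step
    print(f"{name}: step {step*1e3:.2f} ms, parallel efficiency {efficiency:.0%}")
```

Loosely-coupled or embarrassingly parallel jobs, with few or no such exchanges per step, shrug off the extra latency, which is why they map to the cloud so much more comfortably.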

But even the authors seem to agree that for many capacity HPC applications, like data analysis and visualization, the cloud paradigm offers a lot more flexibility than home-grown set-ups. And this model will be especially advantageous for smaller organizations and groups that have a hard time justifying a datacenter based on peak computing requirements.

A handful of HPC services already exist. Sun’s Grid Compute Utility, IBM’s Computing on Demand and Interactive Supercomputing’s Star-P On-Demand have been available for some time. The MathWorks and Wolfram Research recently incorporated cloud computing support into MATLAB and Mathematica, respectively. And this week, Univa UD launched an HPC virtualization capability that uses Amazon EC2. I expect to see a raft of new HPC cloud offerings in 2009.

Moving back down to Earth, the other potential datacenter killer is the personal supercomputer (PSC), which can inhabit the desktop, deskside or office closet. The current generation of PSCs is largely based on GPUs, which can now provide multi-teraflop acceleration. These machines were much in evidence at SC08, thanks in large part to the introduction of NVIDIA Tesla-equipped systems.

Of course, we’ve seen these personal supers come and go. Just a few years ago, Tyan Computer and Orion Multisystems came out with deskside cluster machines. But these sub-teraflop machines never caught on.

The new crop of GPU-accelerated machines seems more permanent to me. For one thing, they're more powerful: at 4 teraflops (single precision), they've got some serious performance to offer. Plus, with CUDA, OpenCL, and a host of other software quickly becoming available from third-party tool makers, GPU computing looks to have established a new niche in the HPC ecosystem. With big-name players like Cray, Dell and Penguin Computing offering PSCs (with both Linux and Windows environments), there is a much better chance that these machines will endure.
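For a sense of where that 4 teraflop figure comes from, here is a quick peak-rate estimate. The per-card numbers are my recollection of the Tesla 10-series specifications (240 stream processors at roughly 1.3 GHz, up to 3 single-precision flops per clock); treat them as approximate.

```python
# Rough single-precision peak for a four-GPU "personal supercomputer."
# Per-card figures are approximate Tesla 10-series specs from memory;
# peak rates ignore memory bandwidth and other real-world limits.

cards = 4
stream_processors = 240       # per card
clock_ghz = 1.3               # approximate shader clock
flops_per_clock = 3           # multiply-add plus multiply, single precision

peak_gflops = cards * stream_processors * clock_ghz * flops_per_clock
print(f"Peak single-precision: {peak_gflops/1000:.1f} teraflops")
# -> roughly 3.7 teraflops, i.e., the "about 4 teraflops" class of machine
```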

Non-GPU PSCs are possible too. SiCortex already offers its own MIPS-based 72-core desktop system, although it's mainly positioned as a development machine for the company's larger clusters. If newcomer Convey Computer decided to shrink-wrap its new FPGA-based “hybrid core” server into a deskside or even desktop system, that could have the makings of a very interesting HPC system for personal use. For those of you who want to stick with vanilla x86 boxes, it will soon be possible to build personal multi-teraflop machines from Intel's upcoming Nehalem processors. Further down the road, the manycore Larrabee processor — or derivatives thereof — should provide a natural computing engine for desktop teraflopping.

So which model will prevail? Here's one possible scenario: desktop, deskside, and office systems will eat away at the low and middle end of the market from below, while HPC applications requiring really large-scale parallelism will move into the cloud. For capability supercomputing applications, perhaps clouds will emerge that are designed specifically for high-end HPC. It's not too hard to imagine the NSF's TeraGrid and the European Commission's DEISA (Distributed European Infrastructure for Supercomputing Applications) supporting cloud services targeted at supercomputing. The U.S. DOE might develop complementary clouds for its user community.

To the extent datacenter issues inhibit HPC adoption, clouds and PSCs will look ever more attractive. I anticipate a lot of experimentation in both areas in the upcoming year.
