To Build or to Buy Time: That is the Question

By Nicole Hemsoth

August 11, 2010

Generally, when one thinks about the vast array of small to medium-sized businesses deploying a cloud to handle peak loads or even mission-critical operations, missile defense design isn't the first thing that comes to mind. After all, SMB concerns have historically had little in common with those of large-scale enterprise and HPC users. The cloud is driving a convergence of these spaces: smaller businesses that were once unable to gain a foothold in their markets due to high infrastructure start-up costs are now a competitive force, thanks to the availability of shared or rented infrastructure and virtualized environments. This convergence creates new possibilities, but it can also complicate end-user decisions about the best options for mission-critical workloads.

Analytical Services, Inc. (ASI), a subcontractor to the U.S. Department of Defense Missile Defense Agency, recently used Sabalcore's high performance computing (HPC) on-demand services to design aerospike nozzles for use in missile systems. These aerospike developments represent a significant improvement from a design perspective but required enormous compute power to bring to market. Orlando, Florida-based Sabalcore, itself a relatively small company, provided the Linux cluster required for the task while allowing ASI to avoid the overhead of investing in its own hardware to meet the design challenges.

According to Joseph D. Sims, Technical Director of Engineering at ASI, "Computational fluid dynamics (CFD) is critical to our design efforts, which means we cannot complete that design without Sabalcore's Linux cluster. We, like many small businesses, cannot afford the luxury of buying and maintaining our own." Sims went on to note that, as with other design projects requiring high levels of compute power, ASI's goals meshed well with on-demand Linux clusters because "we could not hope to support our design efforts with CFD running on a serial computer (e.g., a desktop or workstation)." After comparing the cost of buying and maintaining a cluster against the cost of buying access to one, Sims said, "a huge cost savings" could be realized.

Dividing Line on Building Versus Buying Time?

Judging from conversations with vendors and end users alike, it is this investment avoidance, coupled with on-demand availability, that makes HPC on-demand services like those offered by Sabalcore and a handful of others (Cycle, Penguin, rSystems, SGI, etc.) appealing. Add to that the high level of personalized support these providers tout, and the model becomes an attractive option, sometimes more attractive than a public cloud.

One has to wonder where the dividing line falls for those deciding between buying a cluster and renting time via an on-demand service, with the cloud as an added possibility. For some the deciding factor is price; for others, performance goals; for still others, security. There are no hard and fast rules for end users, but it may seem more attractive to take someone else's cluster for a guided spin than to tweak applications to suit a cloud that has not yet proved itself a viable option.

So where does the cloud fall short when end users face the crucial build-or-buy decision in a case like ASI's? In an email interview, Sabalcore co-founder John Van Workum was asked whether there was any tension or cause for competitive concern between HPC on-demand services like his company's and a service like the newly announced Cluster Compute Instances from Amazon, which are aimed at the same market: those who require HPC-like capacity to run complex or particularly resource-hungry applications. Van Workum stated:

Providers like Amazon have the advantage when it comes to sheer size. They have vast web, storage, and compute resources that a user can tap into. But, HPC boils down to performance. How fast will my application run and how much will it cost are the two biggest questions. It will be interesting to see if Amazon’s new HPC instances will be popular with the HPC user base community.

Because of Amazon’s virtualization layers, the end user is not getting near 100% of the bare-metal performance from a server. Their upgraded 10GigE network for the HPC instances is an improvement over previous offerings, but DDR and QDR InfiniBand are proven faster. Also, I believe Amazon has restrictions in place when it comes to the number of cores an HPC instance can have at any given time. Sabalcore, on the other hand, has purpose-built HPC systems with very few restrictions. Of course, customer service and technical support set us apart from large HPC cloud providers.

HPC On-Demand Versus an HPC Cloud

ASI, like many other small to mid-sized enterprises with occasional spikes in demand for HPC resources, faces the decision between building and buying time. A careful cost analysis of such a decision is difficult and fraught with uncertainty for new users, particularly with a cloud option to contend with as well. The problem, however, is that many HPC on-demand companies like Sabalcore are taking a cloud approach in their marketing message and may be adding to the confusion by muddling the concept of what a cloud is (and is not).
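The cost analysis described above boils down to simple amortization arithmetic: owning a cluster costs a fixed amount per year regardless of use, while renting scales with consumed core-hours. A minimal sketch, using entirely hypothetical figures (none of these numbers come from Sabalcore, ASI, or Amazon), shows how the break-even point falls out of the comparison:

```python
# Illustrative build-vs-buy break-even sketch. All dollar figures and
# rates below are assumptions for demonstration, not real pricing.

def annual_cost_owned(capex, amortization_years, opex_per_year):
    """Amortized yearly cost of owning and operating a cluster."""
    return capex / amortization_years + opex_per_year

def annual_cost_rented(core_hours_per_year, rate_per_core_hour):
    """Yearly cost of renting equivalent capacity on demand."""
    return core_hours_per_year * rate_per_core_hour

RATE = 0.15  # hypothetical $/core-hour for an on-demand service

owned = annual_cost_owned(capex=300_000, amortization_years=4,
                          opex_per_year=60_000)     # 135,000 per year
rented = annual_cost_rented(core_hours_per_year=500_000,
                            rate_per_core_hour=RATE)  # 75,000 per year

# Renting wins while utilization is low; the crossover is the
# core-hour volume at which the rented cost equals the owned cost.
breakeven_hours = owned / RATE

print(f"owned:  ${owned:,.0f}/yr")
print(f"rented: ${rented:,.0f}/yr")
print(f"break-even at {breakeven_hours:,.0f} core-hours/yr")
```

Under these made-up numbers, a shop burning 500,000 core-hours a year saves roughly $60,000 by renting, and ownership only pays off past 900,000 core-hours a year. The real decision also has to weigh the factors the article raises that resist quantification: support quality, security posture, and application porting effort.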

In fact, the very term “cloud” is problematic for a company like Sabalcore, since what it provides is not really a cloud at all. While Sabalcore certainly recognizes this, companies with essentially the same offerings are attaching the word “cloud” to HPC on-demand services, which adds to the confusion, especially for new users who are far more concerned with their research and time-to-market goals than with arguing over complex, hotly debated definitions. In Van Workum’s view:

Cloud is such a broad term, and its definition has been discussed in detail; I don’t believe it has one all-encompassing definition.

One could consider us cloud simply because we host services on the internet. But it pretty much ends there. HPC has very little to do with the web-based desktop tools, virtual storage, virtual servers, cloud files, and nebulous virtual environments that are synonymous with “cloud” these days. We are none of those things either. Therefore we avoid using the term “cloud” when describing Sabalcore.

With this in mind, Van Workum also commented on others offering the same HPC on-demand service and on how a company can differentiate itself in the face of new cloud offerings and competitors. While his detailed response is below, it should be noted that he hits on exactly the same core themes that have emerged in recent conversations with companies like Penguin about its P.O.D service, rSystems, and a host of others. On Sabalcore and the landscape for HPC on-demand companies, Van Workum noted:

HPC users that are familiar with traditional Linux cluster environments will find our environment very similar. We have a very low learning curve. The end user is not hassled by managing instances, insufficient web interfaces, or third party products. Often, a customer is running their job in a matter of hours after logging in for the first time.

Not every application fits nicely into an HPC environment. We provide each new customer with adequate evaluation time and hand holding assistance should they require it.

Our engineers have experience working with hundreds of different applications and can usually make the required modifications in a matter of hours. It is important to note that we almost always adjust the customer’s computing environment in such a way that the changes are as transparent as possible to the customer. It is very uncommon for us to require that the customer make more than superficial changes to their applications or data. But when that does occur, we have the experience to either do it for them or to guide them with the modifications.

Experience and exceptional technical and customer support define us. Sabalcore has been a 100% HPC-as-a-service provider since its inception in 2000. Unlike some recent HPC cloud entrants, we focus solely on our service rather than also selling hardware.

In his line of thinking, the cloud is hindered by its lack of support, which is part of the reason some companies opt for HPC on-demand services over a public cloud like Amazon’s EC2, even with its new HPC-geared instance type.

Sabalcore has experienced solid growth over the last four years, in part because it appeals to those who rejected the cloud as an option and who, for more obvious reasons, also rejected investing in their own clusters. As cloud offerings, especially public ones, evolve to better match the needs of companies like ASI, however, HPC on-demand providers may be pushed to emphasize even more fervently the support and personalization that go hand in hand with their alternative.
