HPC Funding Models Need to Encompass More Than Just the Purchase Price

By Andrew Jones

February 8, 2012

Beyond the question of how much funding should be invested in high performance computing resources, it is also important to strive for the optimum funding model: how funding is tied to the service and how it enables and drives user behavior. As it turns out, these models are wrapped up in an IT culture that is often at odds with the way HPC is used.

This article was inspired by recent topical discussions on funding and charging models for HPC at academic institutions in both the UK and the USA, including, in particular, a series of blog posts by Brock Palen. The issues are by no means limited to academic institutions; they are equally pressing for industry users and providers of HPC resources.

For many of the biggest supercomputers in the world, there is a clear separation between the large capital sum for the machine and the funding for its ongoing operation, often because each comes through a different funding route. However, the reality for most HPC systems outside the national supercomputing services, especially academic institutional and industry systems, is that the funding and the service delivery are intricately linked.

To start, let’s constrain this discussion to in-house HPC resources. (I’ll come back to discuss cloud computing and other external models in a future article). The discussion takes in funding, measurement, professional experts, and finally culture changes.

Where does the funding come from?

Beyond the one-off lump-sum donation, there are essentially three models for funding HPC resources: through overheads, through usage fees, or through a combination of the two. I often call this latter model “baseline-plus.” Under the overheads model, the corporate or departmental HPC facility is provided to users as part of the infrastructure of the business or university and is included in the overheads of the business. There may be accounting, i.e., recording each user’s consumption of the resources, but no charging. Under the usage fees model, accounting leads directly to users being billed for their actual consumption of resources.

Under the baseline-plus model, some elements of the service, like storage, may be included in the overheads while others, like CPU cycles, may be charged according to consumption. Alternatively, the combination may be applied to everything, essentially subsidizing the usage fees by partially covering the costs in overheads. Or the combination may be used to provide a service free of charge for “normal” consumption levels but apply charges for extreme usage, such as large storage requirements, large-memory jobs, high core-count jobs, and so on.
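To make the baseline-plus arithmetic concrete, here is a minimal sketch of such a charging policy. The allowances and rates are invented purely for illustration; every real service will set its own.

    # Hypothetical baseline-plus charging sketch: "normal" consumption is free
    # (covered by overheads); only usage beyond the baseline allowances is billed.
    # The allowances and rates below are invented for illustration only.

    FREE_CPU_HOURS = 10_000        # per group, per month
    FREE_STORAGE_TB = 5            # per group
    RATE_PER_CPU_HOUR = 0.05       # currency units per CPU-hour above baseline
    RATE_PER_TB_MONTH = 20.0       # currency units per TB-month above baseline

    def monthly_charge(cpu_hours_used: float, storage_tb_used: float) -> float:
        """Charge only for consumption above the baseline allowances."""
        extra_cpu = max(0.0, cpu_hours_used - FREE_CPU_HOURS)
        extra_storage = max(0.0, storage_tb_used - FREE_STORAGE_TB)
        return extra_cpu * RATE_PER_CPU_HOUR + extra_storage * RATE_PER_TB_MONTH

    # A group within its allowances pays nothing; a heavy user pays for the excess.
    print(monthly_charge(8_000, 4))      # 0.0
    print(monthly_charge(25_000, 12))    # 15000 * 0.05 + 7 * 20 = 890.0

The arithmetic itself is trivial; the hard part, as discussed below, is agreeing where the baseline sits and which resources it covers.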

Finding the model that keeps everyone happy

To see the benefits and issues of each model, let’s start with the viewpoint of the user – always a good place to start. The preferred model for users is almost always going to be “free,” that is, the overheads model. On the face of it, this places the least burden on the user. They just do whatever science or engineering simulations they need and pay as much attention to the cost of the HPC resources as they do to the company internet connection or the office lighting. However, it can also lead to tension when some users suspect others of consuming an “unfair” share of the common resource.

At the other extreme, the “charge for everything” model is likely to be least favored by most users, simply because it creates a culture of having to justify every usage of the resource. That may be seen as a good thing by senior management, since the HPC resource is likely to represent a significant investment. However, it might limit the freedom of the researcher or engineer to “just try this,” that is, engage in speculative work that isn’t tied to a clear goal, but which may spark significant innovation.

In theory, the baseline-plus model allows the best of both worlds, enabling speculative work and reducing the attention users must pay to their consumption, whilst ensuring that users who dominate consumption are seen to contribute. However, the potential complexity of the model, deciding what is included in the core service and what is charged by usage, can lead to both confusion and debate amongst users about the “right” way to configure the service.

Shifting to the HPC manager’s viewpoint, the instinct is often to prefer the overheads model, since in practice it provides a predictable budget for the resource. The usage fees model is often seen as the least desirable because it turns every HPC manager into a salesperson trying to keep customers coming back for more, and it makes the budget uncertain.

Allowing for growth and innovation

In the discussion so far, we have assumed a static budget and a static resource. In reality, the critical test of these models is how well they cope when the HPC provision needs to evolve.

Under the overheads model, growing the resources, for example, buying a larger supercomputer or providing more support staff, usually means going back to management with a case to increase the budget. Without a direct link between the end users and the resource provided and consumed, that case is harder to make.

With usage accounting (not necessarily charging), making that case becomes easier. However, changing the balance of the HPC provision, such as providing more large-memory nodes, or more cores instead of storage space, remains almost impossible, because each user will see a different need.

The usage fees model solves this problem. As usage grows, the resource can be increased with the fee income. The type of resource provided can also be changed to meet the needs of users as they direct their fee-paying usage onto different elements of the service.

However, the pure usage fees model creates other problems. What about the resources that the HPC manager knows users will need but which users are resistant to paying directly for? Code performance expertise is a common example of this, as is interconnect bandwidth.

What gets measured is what gets the focus

This leads to another key aspect of the models, namely what to measure (and thus charge for in the fees or baseline-plus models).

The most common unit of consumption to measure is CPU usage. Users are accounted for the CPU-hours they consume and are charged accordingly. The price per CPU-hour folds in the cost of running the whole system, and to many users that seems unfair. For example: why should I pay a high price per CPU-hour when I don’t need the fast interconnect that is driving the price up? I’ve never used the support team, so why can’t I have a discounted price? That other user consumes far more memory or disk than me, surely he should pay more? And so on.
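To see where that single price comes from, consider how a blended CPU-hour rate is typically derived: the whole cost of the service is folded into one number and divided by the CPU-hours that can realistically be delivered. The sketch below uses invented figures purely to illustrate the mechanism.

    # Hypothetical blended CPU-hour rate: every cost of the service is folded
    # into a single price, regardless of which components a given user needs.
    # All figures are invented for illustration only.

    annual_costs = {
        "hardware_amortization": 400_000,   # machine cost spread over its lifetime
        "power_and_cooling":     150_000,
        "staff_and_support":     200_000,
        "interconnect_share":     80_000,   # the fast interconnect some users never need
    }

    cores = 4_096
    hours_per_year = 24 * 365
    expected_utilization = 0.80             # not every core-hour can be sold

    deliverable_cpu_hours = cores * hours_per_year * expected_utilization
    blended_rate = sum(annual_costs.values()) / deliverable_cpu_hours

    print(f"Blended price per CPU-hour: {blended_rate:.4f}")
    # Users who never touch the support team or the fast interconnect still pay
    # their share of those costs in every CPU-hour, hence the perceived unfairness.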

It’s temptingly easy to respond by having separate charges for different elements of the service – CPU-hours, support staff, disk usage, high memory nodes, etc. However, this rarely works well in practice for either users or HPC managers.

Measuring CPU-hours alone is, however, horrendously bad practice. The processors are often the cheapest part of the supercomputer to buy, behind memory and interconnect and perhaps disk, and their cost is certainly small compared to operating costs such as power and staff salaries.

Idle processor cores are seen as “a bad thing,” but the memory probably cost as much to buy and costs nearly as much to sit there consuming electricity. Yet few HPC services monitor memory utilization, let alone interconnect utilization.
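Accounting for memory alongside CPU time is not technically difficult; most batch schedulers already record what is needed. Here is a minimal sketch, assuming per-job records of cores, memory, and runtime are available (the field names and job data are invented for illustration).

    # Hypothetical sketch: derive memory-GB-hours alongside CPU-hours from
    # per-job accounting records. Field names and job data are invented;
    # real schedulers expose equivalent information in their accounting logs.

    jobs = [
        {"user": "alice", "cores": 64, "mem_gb": 256,  "hours": 12.0},
        {"user": "bob",   "cores": 16, "mem_gb": 1024, "hours": 30.0},  # memory-heavy
    ]

    usage = {}
    for job in jobs:
        totals = usage.setdefault(job["user"], {"cpu_hours": 0.0, "mem_gb_hours": 0.0})
        totals["cpu_hours"] += job["cores"] * job["hours"]
        totals["mem_gb_hours"] += job["mem_gb"] * job["hours"]

    for user, totals in usage.items():
        print(user, totals)
    # bob consumes fewer CPU-hours than alice (480 vs. 768) but ten times the
    # memory-GB-hours (30,720 vs. 3,072), which is invisible if only CPU-hours
    # are measured.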

Science and business output, not busy CPUs

And, speaking of utilization, the interests of users and HPC managers compete. For users, high utilization means more contention for the resource, for example longer job queues, and is a bad thing. For HPC managers, high utilization means demand and provision are, in theory, closely matched, which is an efficient use of budget.

But efficient use of the budget should be tied to the science or engineering outputs achieved, not to the detailed consumption of the resource that enables that innovation. Which is better budget use, a low utilization system that is available on the timescales of the researchers’ or engineers’ needs (and gives them freedom to try out new ideas or support customer requests at short notice) or a high utilization system that means only planned business can be done?

The complexity of matching the model to the needs of the business, both funders and users, means that the strategy for HPC provision is rarely as simple as assumed. That’s why there is a role for professionals who have experience in finding the best model for a given situation. “HPC manager” or the equivalent is a valuable and distinct role within the organization, as is the potential support from independent experts available to provide consulting advice.

The wrong culture?

Perhaps a key part of the answer is that HPC is not really IT. It is built using computer technology but it is really a scientific instrument or engineering facility. I have written about this before. So maybe we need to move away from the funding, measuring and user cultures inherited from traditional IT.

The success of an optical telescope in astronomy might be measured by what new objects are observed, not by the amount of time an eyeball is attached to the end of it. The success of a wind tunnel might be measured by the quality and quantity of the design information gained, not necessarily the amount of time the fan is spinning. It is expected that the instrument or facility will be supported by experts whose profession is the technology of the instrument and that this will be a fundamental part of the funded resource, not an optional extra.

Which cultures inherited from the traditional IT world do you see holding back HPC’s potential?
