A reader takes issue with some of the comments made last week by the self-dubbed High-End Agnostic. A debate ensues regarding the economics of high-performance computing, and how it may affect its future.
The initial piece can be read here [M380019].
The letters are below:
Tim,
The High-End Agnostic makes a good argument, though I disagree with him. I think a better pseudonym would be High-End Fatalist: fate is immutable; if better computational models, programming languages, architectures, and software engineering methodologies were possible, we'd already have them.
HEA wonders why “market failures” have led to stasis. That's easy: we're stuck in the cluster/MPI rut. Users buy what vendors sell; vendors sell what users buy. Neither users nor vendors can afford the cost or tolerate the risk of the radical change HEC advocates. If some novel model/language/architecture/methodology would provide breakthrough speedup, incremental improvements to current technology will not find it. Financial failures preclude venture capital; current vendors dare not abandon their installed base. HEC's solution to this conundrum is government support for supercomputing R&D.
HEC is obviously a supercomputer user, probably from the national security community, not someone who wants to build better mousetraps. Repeatedly, HEC argues that current big iron is NOT meeting his needs and that he does NOT have the luxury of tailoring his computation to available machines. It's impossible to know, much less espouse, what major societal impacts will arise from breakthrough supercomputing. Ab initio modeling of large-molecule drug interactions seems promising; cures for cancers with minimal side effects seem major to me.
HEA has the luxury of tailoring his computations to available hardware; HEC does not. That's all.
–Brian Larson, Chairman and founder of Multitude Corporation
Here, the High-End Agnostic responds:
I think that Brian does not understand what “market failure” means. If costs and benefits are accurately reflected by the supply and demand curves, then (classical economists will say) the result achieved in a free market is an optimal one: if the market leads to the use of clusters and the disappearance of supercomputers, then this is the optimal result. To justify an intervention in the free market, one needs to argue that there is a “market failure”, i.e., that costs or benefits are not reflected in the supply and demand curves, for example because of externalities. Pollution is a negative externality: nobody pays for the pollution they cause. Scientific progress is a positive externality: nobody can directly capture the benefit of investment in long-term research.
It is not sufficient to argue that clusters and MPI are yucky or do not satisfy the needs of some customers. One needs to argue why the marketplace, left to its own devices, does not satisfy an existing need: why, if supercomputers are so beneficial, are customers not willing to pay what it takes to have them?
— High-End Agnostic
Mr. Larson counters again:
HEA frames the question perfectly: “Why, if supercomputers are so beneficial, are customers not willing to pay what it takes to have them?”
The cost/benefit ratio of current supercomputers is so high that only those with deep pockets and a crucial need buy them. A significant fraction of the current capability market is supported by taxpayers. Very few businesses use supercomputing significantly, if at all. Supercomputing startups have gone bankrupt, leaving early-adopting customers unsupported. I agree that “market failure” well describes the current state of high-end computing, and I believe HEC would concur.
HEC argues that national security needs much better supercomputing, that industry won't support supercomputing R&D, and that therefore government should support what industry will not. Market failure is a fundamental premise in HEC's argument for government-sponsored revitalization.
Suppose there exists some configuration of silicon and wire that could devote a significantly greater fraction of hardware to computation and scale without bound. If government-sponsored R&D significantly improves supercomputing cost/benefit ratios and broadens the range of applications that get good speedup, then businesses will buy supercomputing. Until then, businesses will make do with SAP runs that can only occur overnight or on weekends and logic synthesis tools that take hours, or will simply wait.
HEC believes such a configuration exists and that government-supported supercomputing R&D might find it. Does HEA believe that no better computational models, architectures, programming languages, or software engineering methodologies exist, and that government money spent looking for them will be wasted?
There are indeed many questions, with a multitude of complicated answers and solutions. In light of this, HPCwire wants to hear what YOU think. Please email your comments to editor Tim Curns at [email protected].