Not so long ago, outfitting your datacenter with HPC machinery — cluster or custom super — was the only way to bring respectable high performance computing to your organization. With the rise of desktop HPC systems and supercomputing delivered in the cloud, there are now three viable alternatives. That’s the premise of an article in Linux Magazine by HPC aficionado Douglas Eadline.
What’s driving the desktop market are the increasing core counts available on processors and the inability of many HPC apps to scale above 32 cores. Since the latest CPU silicon from both Intel and AMD delivers 8 cores per processor — up to 12 in the case of the Magny-Cours Opteron — you just need a 4-socket box to reach the 32-core threshold. Eadline found that 40 percent of the end users he surveyed would use a 48-core desktop machine to run all their workloads, and 25 percent would use such a machine in conjunction with a cluster. As core counts rise, those percentages are likely to go up, leaving clusters as the go-to platform only for applications that can scale beyond the confines of a desktop setup.
And that same desktop machine could also double as a portal to an HPC cloud, says Eadline. This delivery model would be especially useful when you exhaust local capacity (either desktop or cluster) or when you don’t have any local HPC infrastructure to begin with. In any case, the HPC delivery landscape is certainly getting more interesting.