The term “high performance computing” (HPC) originally described the use of parallel processing to run advanced application programs efficiently, reliably, and quickly. It applies especially to systems that perform above a teraflop, or 10^12 floating-point operations per second, and is often used as a synonym for supercomputing; technically, a supercomputer is a system that performs at or near the highest operational rate currently achievable. To increase system performance, the industry has moved over time from uni-processors to SMP to distributed-memory clusters, and finally to multicore and manycore chips.
However, for a growing number of users and vendors, HPC today refers not to cores, cycles, or FLOPS but to discovery, efficiency, or time to market. Some years ago, IDC reinterpreted HPC as High Productivity Computing, highlighting the idea that HPC delivers more effective, scalable productivity to customers, and that term fits most commercial customers very well.
But how did we get here?
A Faster Pace of Innovation
A few years back, UNIX variants such as AIX, HP-UX, Tru64 UNIX, and Solaris ruled, and building supercomputers by clustering independent, commodity-class machines was still a controversial idea as recently as 15 years ago. The historically high cost of HPC limited its use to market segments that could afford it. But the evolution of lower-cost hardware and of Linux has dramatically reduced the cost of these systems, while computing power has increased roughly a thousandfold in just a few years. This combination has allowed many companies to harness the power of a supercomputer in the form of an HPC Linux cluster running on commodity hardware.
Time is money
More businesses now use computers not only to manage their operations but also as part of their delivery (animation, stock exchanges, analytics, finance, weather forecasting) or to support the creation of products (oil and gas exploration, car crash simulation, drug research). Getting this work done faster shortens time to market, and making it more accurate provides a greater margin of safety. Running it as fast as possible is therefore a competitive advantage, and that is exactly what an HPC system offers. As an example of upgrading a cluster, a team recently built a new HPC system for a healthcare client: the new cluster was 10x faster than the old one, half the price, and one quarter the size.
Big Data or HPC?
Another important category is “ultra-scale business computing.” Commercial companies with data-intensive tasks are adopting HPC. Just look at companies like Google, Amazon, Facebook, or eBay: most of them don’t run traditional HPC workloads, but at the end of the day they use HPC technologies to process their “big data” at extreme scale.
Building Bridges
There is also a growing number of mid-market companies adopting HPC, driven by changing business needs and the availability of economical solutions. Quite a few organizations in the supercomputing field are building bridges to the SME sector by offering these companies access not only to their supercomputers but also to their expertise in high performance computing, and collaborations between industry and government/academia continue to grow. Two examples are the Pittsburgh Supercomputing Center and the Irish Centre for High-End Computing.
Linux is your friend
Enterprises that have long used UNIX for their HPC business workloads need to understand that they can fully rely on Linux: Linux has reached parity with UNIX and, in many segments, has even surpassed it with regard to availability, scalability, and performance.
Windows isn’t the only option
Businesses accustomed to Windows need to have the courage to check out alternatives, and dual-boot HPC systems might be a first step. As Linux and Windows appear set to become the two dominant enterprise platforms of the future, there will be an increasing need for these operating systems, and the tools that manage them, to work well together. Systems that lack well-developed interoperability capabilities can cause inefficiencies throughout the enterprise.
For example, limited interoperability between Linux and Windows environments, in both physical and virtual instances, can lead to server sprawl, redundant management tools, and inefficient use of IT staff. The same holds for HPC: it seems logical that the two major platforms in the HPC market will be Linux (primarily) and Windows, and they need to interoperate well in this area too.
Thanks to its speedy adoption of technical innovations and improvements, and because Linux very often spearheads those innovations, Linux will continue to play a significant role in the new HPC market dynamics, where HPC increasingly means “High Productivity Computing.” For a workload moving from a non-HPC environment to an HPC cluster, the speedup is roughly proportional to the number of nodes, provided the workload parallelizes well. So moving from a single workstation to a 200-node cluster (about 5 racks) gives you around 200x the performance you had before.
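To make that back-of-the-envelope estimate concrete, here is a minimal Python sketch. The 200-node count comes from the example above; the parallel fractions are illustrative assumptions, and the formula is Amdahl's law, which shows how any serial portion of a workload erodes the ideal linear speedup.

# Rough speedup estimate for moving a workload from one workstation to a cluster.
# The node count matches the 200-node example above; the parallel fractions are
# illustrative assumptions, not figures from the article.

def estimated_speedup(nodes: int, parallel_fraction: float) -> float:
    """Amdahl's law: speedup = 1 / (serial part + parallel part / nodes)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / nodes)

if __name__ == "__main__":
    nodes = 200
    for p in (1.00, 0.99, 0.95):
        print(f"{nodes} nodes, {p:.0%} parallel -> ~{estimated_speedup(nodes, p):.0f}x speedup")
    # Prints roughly 200x, 67x, and 18x: only a near-perfectly parallel job
    # reaches the full 200x described above.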
As technology continues to evolve, supercomputing will become an essential technology across more industries and within High Productivity Computing. In fact, the U.S. Department of Energy is currently working on exascale supercomputers that will provide 1,000 petaflops, or 10^18 floating-point operations per second, of sustained performance for industries that may adopt HPC in the coming decade. As open source and Linux continue to evolve, High Productivity Computing will be found in most data-driven industries as a key to success.
About the Author
Meike Chabowski is product marketing manager for Enterprise Linux Servers at SUSE.