July 30, 2013

Linux Foundation Maps Top500 Run

Nicole Hemsoth

This week the Linux Foundation released a special report on Linux's role over the last twenty years of the Top500, pointing to the trends that helped it switch places with Unix (which once held a 96 percent share of the list) in roughly a decade.

Not long before the first incarnation of the Top500 list of supercomputers, Linux was officially born. Just a tick over twenty years later, it would be the operating system of choice for a rather staggering 96 percent of the systems on the supercomputing world's definitive list.

In this twentieth year of the Top500, all of the top ten systems, not to mention another 466 of the rest, are Linux-powered. This hasn't always been the case, of course. The shift took nearly a decade, but the positions fully inverted: Unix went from a 96 percent share of the list to Linux holding approximately the same figure as of June's list.

Further, if you look at the evolution of RMax on its own, it has grown steadily since the list debuted in 1993. But viewed in the context of Linux's climb to a dominant position, the trend sharpens: by 2004 Linux already ran on half the machines on the list, with a steady rise from that point on.

As the Linux Foundation summarized, "by isolating RMax by operating system using the past 20 years of Top500 data, it's clear that Linux is not only responsible for supporting the majority of supercomputers today, it's a driving force behind the disproportionate growth in supercomputing capacity over the past decade." Certainly, many other elements were critical to the RMax growth, but Linux's particular role, the Foundation says, has been to permit the flexibility to adapt to novel or exotic architectures.

When it comes to Linux's success, both in HPC and in the general market, it comes down to price (free when self-supported) as well as flexibility, the Linux Foundation reminds us. And for the Top500 set, an explosion of new architectures that went well beyond simply adding more processors demanded both, especially when pricing was considered at a per-core level.

As the Linux Foundation noted in a report released today that reflects on its twenty years of growth in the Top500, the real OS sea change happened as “system architectures became a whole lot more complex after 1996, [especially] when Intel’s ASCI Red machine first broke the teraflop barrier.”

Following that point, a newer class of Top500 systems emerged that broke with the homogeneous tradition: custom heterogeneous systems, which required new levels of flexibility. Accordingly, Linux began its rise to dominance. Since most of the cream of the Top500 crop were purpose-built for specialized functions, their operators needed the freedom to tweak the OS around the problem; it wouldn't be realistic to chase down a software vendor to create a custom OS for one specific problem set or system.

As a side note, even if one were to contract a company to build a unique, custom OS for a particular supercomputer, chances are they'd be beholden to the same per-node pricing that's so common. Self-supported Linux, by contrast, carries the same license cost (none) regardless of node count, although this is certainly not the case with a supported distribution. This model, which shed concerns over node-count-based pricing, was certainly a factor in Linux's rise to prominence. It also continues to light a fire under the…tushes of system vendors who are selling supported distros.

This is not to suggest that the Top500 is composed purely of unsupported open source Linux. For instance, SUSE sent out a release today, timed with the report, claiming a one-third share of the Top 100 systems, and a number of other custom variants of Linux appear on the list as well.

The Foundation's document, released today, also includes some noteworthy graphics highlighting a few other elements of OS growth in the Top500.
