Encouraged by the robust growth of Linux Networx last year, CEO Robert (Bo) Ewald is expressing a great deal of confidence about the company's prospects. With the establishment of Linux clusters as the model for commoditizing high performance computing, Ewald sees Linux Networx perfectly positioned to drive this technology into high-end supercomputers.
In February, he declared Linux Networx would be “the next great supercomputing company.” That same month, the DoD's announced purchase of five Linux Networx supercomputers helped support his vision of the company as a serious player in the high-end supercomputing market. HPCwire recently spoke with Ewald about the direction Linux Networx is taking and his perception of the evolving HPC market.
HPCwire: It's almost undeniable that the HPC market is undergoing a rapid change right now. What do you see driving this change and how is Linux Networx adapting?
Ewald: There is no question that the market is undergoing rapid change — in fact, we are helping drive and make some of that change! From the user's and customer's perspective, they basically want to know:
Can the system solve my problem?
At what price and with what performance?
Does it have the features that I need and how hard is it to use?
What is the support model going forward?
What is the total cost of ownership?
So, the revolution that we are participating in is one of using commodity components, coupled with open software, adding just the right amount of intellectual property for price/performance, features, and ease of use, and then integrating all of that into a system and standing behind it for the life of the system. That is exactly where we are going with our company, so rather than adapting, I think of us as helping to drive.
HPCwire: Right now there seems to be a high level of turnover of executive-level personnel in HPC companies. Even Linux Networx has experienced this. From your perspective, what do you think is causing this?
Ewald: I won't comment about other companies, but am pleased to say that in our case our business is growing very rapidly — 40 to 50 percent per year compounded over the past few years — and it looks like it might grow even faster this year. So we are expanding our management team while also adding more experienced members to our team.
HPCwire: As the new SGI CEO, Dennis McKenna has recently outlined a different business strategy for the company that puts more emphasis on the mid-range systems and enterprise market. Also, it appears they will develop visualization systems that use industry-standard and open source components, rather than proprietary technology. This approach has some similarities to the Linux Networx model. What do you think?
Ewald: I've read of Mr. McKenna's strategy, but have not yet talked with him about it, and again won't comment on what others are doing. But, our technology strategy is clear, as I've already outlined, with one addition — we are focused on the technical marketplace, not the “enterprise.” We believe that there are unique needs in the scientific, engineering, technical, and national defense markets that a company needs to focus on to deliver the best results at the best price/performance to the end user. In fact, for the next few years you'll see us fastidiously avoiding the broader “enterprise” market. We'll be happy being the best in the world in our part of the technical computing market.
HPCwire: Speaking of visualization — high-end visualization has become more accessible to users within the last few years. Can you talk about some of the factors that have contributed to this and what the future of this market looks like?
Ewald: In the beginning, the processors of the day were simply not fast enough — or were too expensive — for high-end visualization. So, companies like SGI pioneered proprietary systems in which special hardware and software helped achieve the performance needed to display highly complex images. As technology has marched on, today's supercomputing clusters, with commodity processors and much higher-speed interconnects, can bring more compute power to bear on the same problems at a fraction of the cost. In the very near future you will see Linux Networx use our LS series systems as a foundation for high-end visualization, to which we add high performance graphics pipes and visualization software to create a new family of high-end visualization systems. We believe that we'll be able to deliver much better price/performance than existing proprietary systems, and as a proof point, one of our large customers has ordered a 64-node visualization system with 128 graphics pipes from us.
HPCwire: Several weeks ago, IDC came out with a report that confirmed that the high-end capability and enterprise segments — which it defines as HPC systems over $1 million — declined in 2005. As the price/performance of today's $1 million HPC system improves, will the market for these kinds of systems start to grow? To be clear, I'm not asking if the $1 million system market will grow, I'm asking if the 100 teraflop system market will grow as prices fall. Or do other things have to happen, such as an increased demand for capability applications?
Ewald: Even though the high end of the market has been flat to slightly declining, I believe that there has been an increase in both the problem size and the amount of work getting done. This is despite two big economic disruptions to this market during the past decade: as defense spending went down in the mid-1990s and as the recession hit in the early 2000s, both government and industrial customers either downsized their spending or postponed it.
The reality of today's market is that you can get the same amount of computing done with one of our $1M systems that you could with a $10M proprietary system a few years ago. So, in aggregate, with the high end of the market staying relatively constant in total dollars invested, people are actually getting a lot more compute cycles. And, because of that same price/performance dynamic, customers who were buying $1M systems a few years ago can now get the same amount of computing done for $100K, so they are in another price band. And while their problem size has grown, it hasn't grown by a factor of 10 during the same period.
As an example, in the early 1980s, ARCO was the first oil company to use a Cray system to model oil reservoirs in Alaska's North Slope oil fields. After ARCO's success, within about three or four years, all major oil companies were using systems of that class. While the problem size has grown since then, those simulations have now migrated to supercomputing clusters at lower price points because the price/performance improvements have outpaced the model growth. Over the next few years, as there is more economic incentive to find, retrieve, or obtain petroleum from new sources — tar sands, for example — the model complexity and economic drivers may reverse that trend and cause this segment to begin moving back up the price points. Other industries and applications are moving up, moving down, or staying at the same price points depending on the mixture of price/performance improvements, model size changes, and economic or national benefit.
Having said that, given the twenty years that I've been involved in this industry, it is phenomenal to me to see what people are able to model on their desktop or on a departmental system today — those models could only be run on a $10M system 10 to 20 years ago. That “trickle down” is something that we owe those pioneers who did the first work at the high end years ago. At the other end of the market, both government and industry have huge problems to be solved that far exceed today's computing capabilities. Imagine the benefit of being able to model a human lung, or to more accurately predict the path of severe storms, or to model all of the aerodynamic effects for a complete aircraft fuselage with engines. As the computational models evolve, as the price/performance improvements continue, and as a compelling economic advantage emerges or a nationally or globally important problem demands solving, industry and government will indeed buy the most powerful system. And then, at some point we'll see that model of a lung being able to run on a deskside system, while the largest systems model . . .
HPCwire: As the industry reaches towards petascale computing in the next five years, will Linux Networx have a role to play? And what are some challenges that must be overcome to design and build a commercial petascale machine that would be broadly accessible to industry, that is, one that could be sold for less than $1 million?
Ewald: We call our leading edge systems Custom Supersystems since they are usually designed to meet particular customer requirements for extreme performance and scalability. In fact, it was the MCR system that the company delivered to LLNL, and the several large systems that we've delivered to LANL, that helped establish Linux Networx and demonstrated the viability of standards-based technology for very high-end systems. In the same spirit, we are working on new technology to help address petaflop-scale computing that we believe will help us get there more quickly and at a lower cost than other publicized approaches. We do expect that this technology will eventually reach a point that will enable a commercially viable petascale system, but it probably won't cost $1M at the start!
HPCwire: Can you tell us a little bit about Linux Networx' plans for the remainder of this year? New systems, major upgrades, or strategic partnerships?
Ewald: We can certainly give you some hints! You'll see us continue our march to provide our very high end systems — the Custom Supersystems that we've delivered to customers like Los Alamos National Laboratory and the Army Research Laboratory. In the first quarter we also began shipping our Supersystem families, the LS-1 and LS/X, which have more standardized configurations and are primarily being ordered by industrial customers. A few weeks ago we announced our three sets of storage offerings, and you'll see announcements from us every quarter this year with new products targeted at visualization, high application performance, and systems that are better tailored and tuned for particular applications. Also, as you know, we are very strong in certain vertical industries — national laboratories, aerospace, automotive, and manufacturing. As we move through the year, you'll also see announcements from us about the two additional industries that we are beginning to serve — energy and national defense. So, it's going to be a very busy and very exciting year for us!