Linux Networx is Aiming High

By Nicole Hemsoth

April 14, 2006

Robert (Bo) Ewald, CEO, Linux Networx

Encouraged by the robust growth of Linux Networx last year, CEO Robert (Bo) Ewald is expressing a great deal of confidence about the company's prospects. With the establishment of Linux clusters as the model for commoditizing high performance computing, Ewald sees Linux Networx perfectly positioned to drive this technology into high-end supercomputers.

In February, he declared Linux Networx would be “the next great supercomputing company.” That same month, the Department of Defense announced the purchase of five Linux Networx supercomputers, lending support to his vision of the company as a serious player in the high-end supercomputing market. HPCwire recently spoke with Ewald about the direction Linux Networx is taking and his perception of the evolving HPC market.

HPCwire: It's almost undeniable that the HPC market is undergoing rapid change right now. What do you see driving this change, and how is Linux Networx adapting?

Ewald: There is no question that the market is undergoing rapid change; in fact, we are helping drive and make some of that change! From the user's and customer's perspective, they basically want to know:

   Can the system solve my problem?
   At what price and with what performance?
   Does it have the features that I need and how hard is it to use?
   What is the support model going forward?
   What is the total cost of ownership?

So, the revolution that we are participating in is one of using commodity components, coupled with open software, adding just the right amount of intellectual property for price/performance, features and ease of use; and then integrating all of that into a system and standing behind it for the life of the system. That is exactly where we are going with our company, so rather than adapting, I think of us as helping to drive.

HPCwire: Right now there seems to be a high level of turnover of executive-level personnel in HPC companies. Even Linux Networx has experienced this. From your perspective, what do you think is causing this?

Ewald: I won't comment about other companies, but am pleased to say that in our case our business is growing very rapidly — 40 to 50 percent per year compounded over the past few years — and it looks like it might grow even faster this year. So we are expanding our management team while also adding more experienced members to our team.

HPCwire: SGI's new CEO, Dennis McKenna, has recently outlined a different business strategy for the company, one that puts more emphasis on mid-range systems and the enterprise market. It also appears SGI will develop visualization systems that use industry-standard and open source components rather than proprietary technology. This approach bears some similarities to the Linux Networx model. What do you think?

Ewald: I've read of Mr. McKenna's strategy, but have not yet talked with him about it, and again won't comment on what others are doing. But our technology strategy is clear, as I've already outlined, with one addition — we are focused on the technical marketplace, not the “enterprise.” We believe that there are unique needs in the scientific, engineering, technical, and national defense markets that a company needs to focus on to deliver the best results at the best price/performance to the end user. In fact, for the next few years you'll see us fastidiously avoiding the broader “enterprise” market. We'll be happy being the best in the world in our part of the technical computing market.

HPCwire: Speaking of visualization — high-end visualization has become more accessible to users within the last few years. Can you talk about some of the factors that have contributed to this and what the future of this market looks like?

Ewald: In the beginning, the processors of the day were simply not fast enough — or were too expensive — for today's high-end visualization. So companies like SGI pioneered proprietary systems in which special hardware and software helped achieve the performance needed to display highly complex images. As technology has marched on, today's supercomputing clusters, built from commodity processors and much higher-speed interconnects, can bring more compute power to bear on the same problems at a fraction of the cost. In the very near future you will see Linux Networx use our LS series systems as a foundation for high-end visualization, adding high performance graphics pipes and visualization software to create a new family of high-end visualization systems. We believe that we'll be able to deliver much better price/performance than existing proprietary systems, and as a proof point, one of our large customers has ordered a 64-node visualization system with 128 graphics pipes from us.

HPCwire: Several weeks ago, IDC came out with a report confirming that the high-end capability and enterprise segments — which it defines as HPC systems over $1 million — declined in 2005. As the price/performance of today's $1 million HPC system improves, will the market for these kinds of systems start to grow? To be clear, I'm not asking if the $1 million system market will grow; I'm asking if the 100-teraflop system market will grow as prices come down. Or do other things have to happen, such as an increased demand for capability applications?

Ewald: Even though the high end of the market has been flat to slightly declining, I believe that there has been an increase in both the problem size and the amount of work getting done. This is in spite of the fact that the past decade brought two big economic disruptions to this market — as defense spending went down in the mid-1990s and as the recession hit in the early 2000s, both government and industrial customers either reduced what they spent or postponed spending.

The reality of today's market is that you can get the same amount of computing done with one of our $1M systems that you could with a $10M proprietary system a few years ago. So, in aggregate, with the high end of the market staying relatively constant in total dollars invested, people are actually getting a lot more compute cycles. And, because of that same price/performance dynamic, customers who were buying $1M systems a few years ago can now get the same amount of computing done for $100K, so they are in another price band. And while their problem size has grown, it hasn't grown by a factor of 10 during the same period.

As an example, in the early 1980s, ARCO was the first oil company to use a Cray system to model oil reservoirs in the North Slope oil fields of Alaska. After ARCO's success, within about three or four years, all major oil companies were using systems of that class. While the problem size has grown since then, those simulations have now migrated to supercomputing clusters at lower price points because the price/performance improvements have outpaced the model growth. Over the next few years, as there is more economic incentive to find and retrieve petroleum from new sources (tar sands, for example), the model complexity and economic drivers may reverse this trend and cause this segment to begin moving up the price points again. Other industries and applications are moving up, moving down, or staying at the same price points depending on the mixture of price/performance improvements, model size changes, and economic or national benefit.

Having said that, given the twenty years that I've been involved in this industry, it is phenomenal to me to see what people are able to model on their desktop or on a departmental system today; those models could only be run on a $10M system 10-20 years ago. That “trickle down” is something that we owe those pioneers who did the first work at the high end years ago. At the other end of the market, both government and industry have huge problems to be solved that far exceed today's computing capabilities. Imagine the benefit of being able to model a human lung, or to more accurately predict the path of severe storms, or to model all of the aerodynamic effects for a complete aircraft fuselage with engines. As the computational models evolve, as the price/performance improvements continue, and as a compelling economic advantage emerges or a nationally or globally important problem can be solved, industry and government will indeed buy the most powerful system. And then, at some point, we'll see that model of a lung running on a deskside system, while the largest systems model . . .

HPCwire: As the industry reaches toward petascale computing in the next five years, will Linux Networx have a role to play? And what are some challenges that must be overcome to design and build a commercial petascale machine that would be broadly accessible to industry, that is, one that could be sold for less than $1 million?

Ewald: We call our leading-edge systems Custom Supersystems, since they are usually designed to meet particular customer requirements for extreme performance and scalability. In fact, it was the MCR system that the company delivered to LLNL, along with the several large systems that we've delivered to LANL, that helped establish Linux Networx and demonstrated the viability of standards-based technology for very high-end systems. In the same spirit, we are working on new technology to help address petaflop-scale computing that we believe will help us get there more quickly and at a lower cost than other publicized approaches. We do expect that this technology will eventually reach a point that will enable a commercially viable petascale system, but it probably won't cost $1M at the start!

HPCwire: Can you tell us a little bit about Linux Networx's plans for the remainder of this year? New systems, major upgrades, or strategic partnerships?

Ewald: We can certainly give you some hints! You'll see us continue our march on two fronts: our very high-end Custom Supersystems, like those we've delivered to customers such as Los Alamos National Laboratory and the Army Research Laboratory, and our Supersystem families, the LS-1 and LS/X, which have more standardized configurations, began shipping in the first quarter, and are primarily being ordered by industrial customers. A few weeks ago we announced our three sets of storage offerings, and you'll see announcements from us every quarter this year with new products targeted at visualization, application performance, and systems that are better tailored and tuned for particular applications. Also, as you know, we are very strong in certain vertical industries — national laboratories, aerospace, automotive, and manufacturing. As we move through the year, you'll also see announcements from us about the two additional industries that we are beginning to serve — energy and national defense. So, it's going to be a very busy and very exciting year for us!
