Walter Stewart on SGI’s Role in Ever-Evolving World of Grid

By Derrick Harris, Editor

March 14, 2005

A lot has changed since 1997, when SGI put on the first public Grid demonstration at the Supercomputing show. Walter Stewart, SGI's business development manager for Grid, spoke with GRIDtoday editor Derrick Harris about just what has changed since then, as well as about the company's tactics in the battle against increasingly large data spikes.

GRIDtoday: How do the new products mentioned in SGI's current release [the Altix 1350, Altix Hybrid Cluster, InfiniteStorage Total Performance 9700 and Silicon Graphics Prism] advance SGI's Grid strategy?

WALTER STEWART: We have, for some considerable time, ensured that any new product that we bring out is Grid-enabled. We have been looking to bring out products that bring a unique functionality to Grids, and we believe that these four products advance the kinds of functionalities that SGI is able to make available to people who are operating Grids. Particularly in the case of the 1350, we are bringing in functionality at a much more attractive price point than has been available before.

I think that speaks to SGI's overall Grid position. We're out there to be a toolmaker for the Grid and to make sure that the kind of power SGI brings to stand-alone compute facilities is available to hugely distributed users.

Gt: Do you know of any projects off-hand that are using or planning on using any of these new solutions?

STEWART: Because we've only just released them, I'm not aware of any that are looking at them for Grid at the moment. Certainly, some products from the Altix family have already been installed in major Grid installations around the world. We've had circumstances where we've had customers who are not interested in large, shared-memory machines, but who are interested in what might be described as “robust node clusters” for their Grids, and that's precisely what the Altix 1350 addresses. We are certainly very much involved with the Altix family in a number of major Grid installations around the world.

Gt: Which leads to my next question. There are certainly some well-established projects currently powered by SGI solutions, including the TeraGyroid, COSMOS and SARA projects. Could you talk a little about how SGI products are being used in these, and other, Grid projects?

STEWART: One interesting one, in your own country, is that at the beginning of the year we installed a 1,000-plus processor machine at NCSA, which will be one of the resources on the TeraGrid. This is, as far as I'm aware, the first shared-memory resource that has been available to TeraGrid users. So that's one very recent North American example.

I think I'm right in saying that it's next month that we're installing another 1,600-plus processor machine at the Australian Partnership for Advanced Computing, which will become a major resource on the Australian Grid.

COSMOS Grid has been around for some time, and we've been through a couple of generations with COSMOS Grid. We first installed Origin there, and have subsequently installed Altix. This is all because the COSMOS Grid people are in the business of setting up the data environment, including processing and visualization, in order to be ready for the data that will come flooding in from the Planck satellite in 2007. This is an example of bringing real power to Grids with a very strong emphasis on a very close connection among compute, visualization and data management.

Gt: SGI is focused on addressing four primary challenges of Grid, and I want to talk about two in particular. First: Why is security such a big issue in Grid computing, what are some of the major security issues and what is SGI doing to improve Grid security?

STEWART: We've been doing a lot of work with the open source community in transferring a lot of IP from our experience with our own operating system, IRIX. There are some security capabilities that we're hopeful will be picked up by the open source community. Because a lot of our security work with IRIX was right in the OS, we feel obliged to work with the open source community, and move at their speed, on the introduction of some of those attributes.

One area that isn't talked about much in security, or security-related issues, but that SGI is very preoccupied with, is the whole issue of versioning. It's one thing to talk about the security of data; it's another thing to talk about the integrity of data. With our CXFS storage-area network running over a wide-area network on the Grid, we are solving the problem of having to make multiple copies of data. So if you're in San Diego and I'm in Toronto, we could be working on the same data set without having to make a copy for each of us. Therefore, we can be confident that the data I'm working with is the same data you're working with, because we're using the same copy.
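[Editor's note: The versioning point can be illustrated with a minimal, hypothetical sketch in Python; this is not CXFS code, just a picture of the idea. When each site works from its own copy of a data set, the copies can silently diverge; when everyone opens the same shared file, as a SAN filesystem extended over the WAN allows, there is only one version to trust.]

```python
# Hypothetical sketch -- not SGI/CXFS code -- of why per-site copies invite
# version drift, and why a single shared copy avoids it.
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """Short SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:12]

with tempfile.TemporaryDirectory() as tmpdir:
    tmp = Path(tmpdir)

    # Conventional approach: Toronto and San Diego each get their own copy.
    master = tmp / "dataset.bin"
    master.write_bytes(b"original instrument data")
    toronto = tmp / "toronto.bin"
    san_diego = tmp / "san_diego.bin"
    toronto.write_bytes(master.read_bytes())
    san_diego.write_bytes(master.read_bytes())

    # Toronto appends a correction; San Diego never hears about it.
    with toronto.open("ab") as f:
        f.write(b" + calibration fix")
    print("copies diverged:", checksum(toronto) != checksum(san_diego))  # True

    # Shared-copy approach: both sites open the *same* path, so whatever one
    # site writes is exactly what the other site reads.
    shared = tmp / "shared_dataset.bin"
    shared.write_bytes(b"original instrument data")
    with shared.open("ab") as f:                   # Toronto writes ...
        f.write(b" + calibration fix")
    print("San Diego reads:", shared.read_bytes())  # ... San Diego sees it
```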

We also keep a watchful eye on the work that goes on in the standards bodies like GGF.

Gt: I haven't heard a lot of companies state their dedication to the cause of visualization capabilities on Grid networks. Can you give me a little more detail on why this is so important to SGI?

STEWART: It's important to SGI because we think it's important to the world of Grid and the world of next-generation computing. Let me give you an example. We work with an engineering firm that had been operating in a fairly conventional IT environment: a number of workstations around the company, in a few locations, with files copied from workstation to workstation as different people had to work on them. Those files were in the neighborhood of 200GB, and it was taking about three hours on their network to move them. Then they came to us and said they were going to have a problem, because their next data set was going to be a terabyte in size, and that was probably going to take something in the order of 22 hours to move, which was not going to be acceptable. We said, “Well, we wouldn't worry about it anyway if we were you.”

And they said, “Why is that?”

“Because you can't load it on the workstation anyway. It'll crash it.”

Before we presented the solution to them, they came back and said they had made a mistake. It was, in fact, not going to be 1TB; it was going to be 4TB. I might say, as an aside, that this is an increasingly common occurrence among companies and among research organizations. The spike in data is that profound.
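[Editor's note: Taking the figures quoted above at face value, the arithmetic of the data spike is straightforward. The following is a hypothetical back-of-the-envelope calculation, assuming the network sustains roughly the same effective rate it did for the 200GB files; Stewart's 22-hour estimate for 1TB presumably allows for additional overhead.]

```python
# Rough transfer-time arithmetic from the figures in the interview: 200 GB in
# about 3 hours implies an effective rate of roughly 19 MB/s, so a 4 TB data
# set would tie up the same network for days rather than hours.
GB = 10**9

rate_bytes_per_sec = 200 * GB / (3 * 3600)      # ~18.5 MB/s sustained

def transfer_hours(size_gb: float) -> float:
    """Hours needed to move size_gb gigabytes at the observed rate."""
    return size_gb * GB / rate_bytes_per_sec / 3600

for size_gb in (200, 1_000, 4_000):             # 200 GB, 1 TB, 4 TB
    print(f"{size_gb:>5} GB  ->  {transfer_hours(size_gb):5.1f} hours")
# 200 GB -> 3.0 h, 1 TB -> 15.0 h, 4 TB -> 60.0 h
```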

Quite clearly in that circumstance, there was no way those workstations could cope with 4TB, nor could the company's network. So we designed a system that lets them keep those remote legacy workstations and their users, who send instructions into a SAN (storage area network) that causes the compute server to compute the data. The visualization piece of the compute server then visualizes that data, and we strip the pixels from the data and stream them in real time back to the remote user on the legacy workstation. They have full interactivity not only with the data set, but with the computation of that data, from a legacy workstation, stressing neither the network nor the workstation's capacity.
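[Editor's note: The pattern Stewart describes, keeping the data and the rendering next to the compute server and shipping only pixels one way and small interaction events the other, can be sketched in a few lines. The following is a hypothetical, stripped-down illustration, not SGI's actual visualization software.]

```python
# Hypothetical sketch of the pixel-streaming pattern: heavy data stays beside
# the compute/visualization server; only compressed frames travel to the thin
# client, and only small interaction events travel back.
import socket
import threading
import zlib

FRAME_W, FRAME_H = 640, 480                  # assumed frame size for the sketch

def render_frame(viewpoint: int) -> bytes:
    """Stand-in for the server-side visualization step: produce raw pixels."""
    return bytes((viewpoint + x) % 256 for x in range(FRAME_W * FRAME_H))

def server(conn: socket.socket) -> None:
    """Receive interaction events, render next to the data, stream pixels back."""
    while True:
        event = conn.recv(4)                 # e.g. a rotate/zoom command
        if not event:
            break
        pixels = render_frame(int.from_bytes(event, "big"))
        frame = zlib.compress(pixels)        # ship pixels, never the data set
        conn.sendall(len(frame).to_bytes(4, "big") + frame)
    conn.close()

def client(conn: socket.socket, interactions: int) -> None:
    """Legacy workstation: send events, display whatever pixels come back."""
    for viewpoint in range(interactions):
        conn.sendall(viewpoint.to_bytes(4, "big"))
        size = int.from_bytes(conn.recv(4), "big")
        frame = b""
        while len(frame) < size:
            frame += conn.recv(size - len(frame))
        print(f"frame {viewpoint}: {size} bytes on the wire "
              f"for {FRAME_W * FRAME_H} rendered pixels")
    conn.close()

srv_sock, cli_sock = socket.socketpair()
threading.Thread(target=server, args=(srv_sock,), daemon=True).start()
client(cli_sock, interactions=3)
```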

In that circumstance, visualization is a critical tool for working with big data. More and more, as people look at these data spikes, even if you are able to get the data moved, which is becoming increasingly impossible, if the data or the data results are expressed alpha-numerically, it's going to take you too long to read them. Ask a big data question and you frequently get an even bigger data answer. And if it takes you six months to read the answer …

If you can look at it visually, you often can understand it in a fraction of the time it would take you to internalize the information if it's expressed to you alpha-numerically. We believe that kind of infrastructure is critical for all sorts of users, in all sorts of places, working on all sorts of devices, with all sorts of OS's. It's SGI's role to bring that core power to Grid installations so that people at the various points along the Grid can have access to it.

Gt: Finally, I want to go back, for a few questions, to SGI's groundbreaking demonstration at Supercomputing '97. What kind of effect did it have on the Grid movement? Did it add an element of legitimacy?

STEWART: I think it certainly got Grid going and began a process of people seeing that it was possible to design this very different kind of infrastructure. I think that, in truth, if the community could have kept the momentum of that activity in 1997, we'd be further ahead today. I think we got sidetracked for a number of years, particularly in North America, with the cycle scavenging model as a single approach to Grid computing. I'm happy to say that era of a single approach has very much ended.

While going around and doing cycle scavenging is still a very legitimate part of Grid, it's no longer seen as the Grid. People are recognizing that Grid users should have access to a variety of different devices and a variety of different kinds of tools to work with.

So, I think that 1997 was critical. I just wish we could have maintained the momentum that [the demo in] 1997 started; we might be further ahead today. Things have changed dramatically in the last year or two, focusing much more on building the kind of infrastructure that's required to deal with big data.

Gt: That was almost seven-and-a-half years ago, an eternity in information technology, and a whole lot has changed with Grid since then. What do you see as some of the biggest differences?

STEWART: Grids are now deployed in working environments — there are lots of Grids. I would characterize the Grid as having three phases so far. From 1997 until about 2001, you were looking at Grids deployed for research on Grids. Starting around 2001, you increasingly saw Grids deployed in a research environment to serve research goals of multiple disciplines. We moved away from the Grid being the object of the research to the Grid being a tool to enable research. Starting roughly around late 2003-2004, we really began to see a major ramp-up of Grids being installed in enterprise situations and in corporate situations.

Certainly, I might comment, wearing another hat, as co-chair of the Plenary Program Committee at GGF. Our Plenary Program at GGF12 in Brussels last September was quite extraordinary in terms of the number of companies that turned up at the event, companies that either already had Grids or were seriously looking at installing Grids and came to find out about it. There was nothing like that attendance previously. If you go back to GGF in 2002, there would have been no one there but vendors and researchers. Now, we believe we are seeing a strong corporate engagement in the whole issue of Grid.

Gt: My final question is: If you had a crystal ball, what do you think you would see in another seven-and-a-half years, in 2012? Where will Grid be, and what role will SGI have in helping it get there?

STEWART: I very much see Grid in this context: Starting in the middle of the 18th century, and right up through the 20th century, we built ever more elaborate distribution mechanisms, or infrastructures for distribution, in order to move raw materials, processed materials and finished products. That was the absolute foundation of the industrial economy. Sometime late in the 20th century, we began creating the infrastructure for the knowledge economy. Data is the raw material for the knowledge economy. Grid is the nascent beginning of the infrastructure that is going to allow us to move from data to information to knowledge and, therefore, to value.

I would say that we are going to increasingly see infrastructure built around the principles that are related to Grid computing that enable users in every conceivable location to have access to the tools that they need for data. If I chose to, I could drive about a mile from my house and be able to buy a lemon picked off a tree in California. We have the infrastructure in place to make it possible for me to have that in the middle of winter in Toronto. We're going to see the kind of infrastructure that will make it possible for me, regardless of where I am, to be able to access the power to deal with the data that I need in order to be a knowledge worker.

Gt: Is there anything else you would like to add in regard to this announcement or SGI's Grid strategy in general?

STEWART: I believe that SGI will always be looking forward to your 2012 date. SGI will always be there, designing the tools that are ready to deal with that next spike in the volumes of data people have to work with. We're not going to be down at the commodity level, and we're not going to be there for the problems that are already solved. We are going to be there for the people who are tackling the next data spike.

To read the release cited in this article, please see “New Solutions Extend SGI's Drive to Advance Grid Computing” in this issue of GRIDtoday.
