Walter Stewart on SGI’s Role in Ever-Evolving World of Grid

By Derrick Harris, Editor

March 14, 2005

A lot has changed since 1997, when SGI put on the first public Grid demonstration at the Supercomputing show. Walter Stewart, SGI's business development manager for Grid, spoke with GRIDtoday editor Derrick Harris about just what has changed since then, as well as about the company's tactics in the battle against increasingly large data spikes.

GRIDtoday: How do the new products mentioned in SGI's current release [the Altix 1350, Altix Hybrid Cluster, InfiniteStorage Total Performance 9700 and Silicon Graphics Prism] advance SGI's Grid strategy?

WALTER STEWART: We have, for some considerable time, ensured that any new product that we bring out is Grid-enabled. We have been looking to bring out products that bring a unique functionality to Grids, and we believe that these four products advance the kinds of functionalities that SGI is able to make available to people who are operating Grids. Particularly in the case of the 1350, we are bringing in functionality at a much more attractive price point than has been available before.

I think that speaks to SGI's overall Grid position. We're out there to be a toolmaker for the Grid and to make sure that the kind of power SGI brings to stand-alone compute facilities is available to hugely distributed users.

Gt: Do you know of any projects off-hand that are using or planning on using any of these new solutions?

STEWART: Because we've only just released them, I'm not aware of any that are looking at them for Grid at the moment. Certainly, some products from the Altix family have already been installed in major Grid installations around the world. We've had circumstances where we've had customers who are not interested in large, shared-memory machines, but who are interested in what might be described as “robust node clusters” for their Grids, and that's precisely what the Altix 1350 addresses. We are certainly very much involved with the Altix family in a number of major Grid installations around the world.

Gt: Which leads to my next question. There are certainly some well-established projects currently powered by SGI solutions, including the TeraGyroid, COSMOS and SARA projects. Could you talk a little about how SGI products are being used in these, and other, Grid projects?

STEWART: One interesting one, in your own country, is that at the beginning of the year we installed a 1,000-plus processor machine at NCSA, which will be one of the resources on the TeraGrid. This is, as far as I'm aware, the first shared-memory resource that has been available to TeraGrid users. So that's one very recent, North American example.

I think I'm right in saying that it's next month that we're installing another 1,600-plus processor machine at the Australian Partnership for Advanced Computing, which will become a major resource on the Australian Grid.

COSMOS Grid has been around for some time, and we've been through a couple of generations with COSMOS Grid. We first installed Origin there, and have subsequently installed Altix. This is all because the COSMOS Grid people are in the business of setting up the data environment, including processing and visualization, in order to be ready for the data that will come flooding in from the Planck satellite in 2007. This is an example of bringing real power to Grids with a very strong emphasis on a very close connection among compute, visualization and data management.

Gt: SGI is focused on addressing four primary challenges of Grid, and I want to talk about two in particular. First: Why is security such a big issue in Grid computing, what are some of the major security issues and what is SGI doing to improve Grid security?

STEWART: We've been doing a lot of work with the open source community in transferring a lot of IP from our experience with our own operating system, IRIX. There are some security issues that we're hopeful will be picked up by the open source community. Because a lot of our security work with IRIX was right in the OS, we feel obliged to work with the open source community, and move at their speed, on the introduction of some of those attributes.

One area that isn't talked about in security, or security-related issues, that SGI is very preoccupied with is the whole issue of versioning. It's one thing to talk about the security of data; it's another thing to talk about the integrity of data. With our CXFS storage-area network running over a wide-area network on the Grid, we are solving the problem of having to make multiple copies of data. So if you're in San Diego and I'm in Toronto, we could be working on the same data set without having to make a copy for each of us. Therefore, we can be confident that the data I'm working with is the same as the data you're working with, because we're using the same copy.
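To make that single-copy idea concrete, here is a minimal sketch, not SGI's CXFS code, in which two sites read one data set through a shared path instead of exchanging copies; the path and file names are hypothetical. A checksum taken at either end describes the identical bytes, so there is no second version to drift out of sync.

```python
import hashlib
import pathlib
import tempfile

# Hypothetical stand-in for a wide-area shared filesystem mount
# (the role a CXFS-style SAN plays in Stewart's description).
shared_mount = pathlib.Path(tempfile.mkdtemp())
dataset = shared_mount / "experiment_042.dat"
dataset.write_bytes(b"one authoritative copy of the data")


def checksum_seen_by(site: str) -> str:
    """What a user at `site` reads: the same file, not a local copy of it."""
    digest = hashlib.sha256(dataset.read_bytes()).hexdigest()
    print(f"{site} sees {digest[:16]}...")
    return digest


# Both "sites" operate on the identical bytes, so there is no second
# version whose integrity could quietly diverge.
assert checksum_seen_by("San Diego") == checksum_seen_by("Toronto")
```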

We also keep a watchful eye on the work that goes on in the standards bodies like GGF.

Gt: I haven't heard a lot of companies state their dedication to the cause of visualization capabilities on Grid networks. Can you give me a little more detail on why this is so important to SGI?

STEWART: It's important to SGI because we think it's important to the world of Grid and the world of next-generation computing. Let me give you an example. We work with an engineering firm that was working in a fairly conventional IT environment where they had a number of workstations around the company, in a few locations, and they were copying files from workstation to workstation as different people had to work on them. Those files were in the neighborhood of 200GB, and it was taking about three hours on their network to move the files. But then they came to us and said that they were going to have a problem because their next data set was going to be a terabyte in size, and that was probably going to take something in the order of 22 hours to move — that was not going to be acceptable. We said, “Well, we wouldn't worry about it anyway if we were you.”

And they said, “Why is that?”

“Because you can't load it on the workstation anyway. It'll crash it.”

Before we presented the solution to them, they came back and said they had made a mistake. It was, in fact, not going to be 1TB; it was going to be 4TB. I might say this as an aside: this is an increasingly common occurrence among companies and research organizations. The spike in data is so profound.

Quite clearly in that circumstance, there was no way those workstations could cope with 4TB, nor could the company's network. So we designed a system that lets them keep those remote legacy workstations and their users: the users send instructions into a SAN (storage area network), the compute server computes the data, the visualization piece of the compute server renders it, and then we strip the pixels from the data and stream the pixels in real time back to the remote user on the legacy workstation. They have full interactivity not only with the data set, but with the computation of that data, from a legacy workstation — stressing neither the network nor the workstation's capacity.
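As a rough illustration of that pattern, the sketch below (a hypothetical example, not SGI's actual pipeline) keeps the heavy data on the server side: the client sends only a small interaction command and receives rendered pixels back, which is the essence of streaming pixels instead of moving the data set.

```python
import socket
import threading

FRAME_W, FRAME_H = 64, 48   # tiny "frame" so the demo stays small


def render_frame(command: str) -> bytes:
    """Stand-in for the compute + visualization step that would run next to
    the SAN; here we just synthesize a grey frame whose shade depends on the
    command, to show that only pixels travel back, never the data set."""
    shade = sum(command.encode()) % 256
    return bytes([shade]) * (FRAME_W * FRAME_H)


def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        command = conn.recv(1024).decode()       # small instruction comes in...
        conn.sendall(render_frame(command))      # ...rendered pixels go back out


listener = socket.socket()
listener.bind(("127.0.0.1", 0))                  # hypothetical local endpoint
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.socket() as cli:
    cli.connect(listener.getsockname())
    cli.sendall(b"rotate view 15deg")            # the user's interaction command
    pixels = b""
    while len(pixels) < FRAME_W * FRAME_H:
        chunk = cli.recv(4096)
        if not chunk:
            break
        pixels += chunk

print(f"received {len(pixels)} pixel bytes; the multi-terabyte data set never moved")
```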

In that circumstance, visualization is a critical tool for working with big data. More and more, as people look at these data spikes, even if you are able to get the data moved — which is becoming increasingly impossible — if the data or the data results are expressed alpha-numerically, it's going to take you too long to read them. Ask a big data question and you frequently get an even bigger data answer. And if it takes you six months to read the answer …

If you can look at it visually, you often can understand it in a fraction of the time it would take you to internalize the information if it's expressed to you alpha-numerically. We believe that kind of infrastructure is critical for all sorts of users, in all sorts of places, working on all sorts of devices, with all sorts of OS's. It's SGI's role to bring that core power to Grid installations so that people at the various points along the Grid can have access to it.

Gt: Finally, I want to go back, for a few questions, to SGI's groundbreaking demonstration at Supercomputing '97. What kind of effect did it have on the Grid movement? Did it add an element of legitimacy?

STEWART: I think it certainly got Grid going and began a process of people seeing that there is a possibility to design this very different kind of infrastructure. I think that, in truth, if the community could have kept the momentum of that activity in 1997, we'd be further ahead today. I think we got sidetracked for a number of years, particularly in North America, with the cycle scavenging model as a single approach to Grid computing. I'm happy to say that single approach has very much ended.

While cycle scavenging is still a very legitimate part of Grid, it's no longer seen as the Grid. People are recognizing that Grid users should have access to a variety of different devices and a variety of different kinds of tools to work with.

So, I think that 1997 was critical. I just wish we could have maintained the momentum that [the demo in] 1997 started; we might be further ahead today. Things have changed dramatically in the last year or two, with the focus shifting much more to building the kind of infrastructure that's required to deal with big data.

Gt: That was almost seven-and-a-half years ago, an eternity in information technology, and a whole lot has changed with Grid since then. What do you see as some of the biggest differences?

STEWART: Grids are now deployed in working environments — there are lots of Grids. I would characterize the Grid as having three phases so far. From 1997 until about 2001, you were looking at Grids deployed for research on Grids. Starting around 2001, you increasingly saw Grids deployed in a research environment to serve research goals of multiple disciplines. We moved away from the Grid being the object of the research to the Grid being a tool to enable research. Starting roughly around late 2003-2004, we really began to see a major ramp-up of Grids being installed in enterprise situations and in corporate situations.

Certainly, I might comment with one other hat that I wear, as co-chair of the Plenary Program Committee at GGF. Our Plenary Program at GGF12 in Brussels last September was quite extraordinary [in regard to] the number of companies that turned up at the event that either already had Grids or were seriously looking at installing Grids and came to find out about it. There was nothing like that attendance previously. If you go back to GGF in 2002, there would have been no one there but vendors and researchers. By now, we believe we are seeing strong corporate engagement in the whole issue of Grid.

Gt: My final question is: If you had a crystal ball, what do you think you would see in another seven-and-a-half years, in 2012? Where will Grid be, and what role will SGI have in helping it get there?

STEWART: I very much see Grid in this context: Starting in the middle of the 18th century, and right up through the 20th century, we built ever more elaborate distribution mechanisms, or infrastructures for distribution, in order to move raw materials, processed materials and finished products. That was the absolute foundation of the industrial economy. Sometime late in the 20th century, we began creating the infrastructure for the knowledge economy. Data is the raw material for the knowledge economy. Grid is the nascent form, or the beginning, of the infrastructure that is going to allow us to move from data to information to knowledge and, therefore, to value.

I would say that we are going to increasingly see infrastructure built around the principles that are related to Grid computing that enable users in every conceivable location to have access to the tools that they need for data. If I chose to, I could drive about a mile from my house and be able to buy a lemon picked off a tree in California. We have the infrastructure in place to make it possible for me to have that in the middle of winter in Toronto. We're going to see the kind of infrastructure that will make it possible for me, regardless of where I am, to be able to access the power to deal with the data that I need in order to be a knowledge worker.

Gt: Is there anything else you would like to add in regard to this announcement or SGI's Grid strategy in general?

STEWART: I believe that SGI will always be looking forward to your 2012 date. SGI will always be there, designing the tools that are ready to deal with that next spike in the volumes of data people have to work with. We're not going to be down at the commodity level, and we're not going to be there for the problems that are already solved. We are going to be there for the people who are tackling the next data spike.

To read the release cited in this article, please see “New Solutions Extend SGI's Drive to Advance Grid Computing” in this issue of GRIDtoday.
