Back to the Future: SGI Returns to Visualization

By John E. West

April 11, 2008

In the 1980s and early 1990s, if you were doing anything serious in computer graphics you were doing it with SGI gear. Then a series of strategic missteps and the emergence of incredibly powerful, cheap graphics cards for PCs made the company’s graphics lines irrelevant, and for nearly a decade they’ve survived as a server company. The SGI Virtu line announced this week steers SGI back into graphics for what it says is the long haul. But is there a market left to capture?

I spoke to Tom Reed, SGI’s Director of Visualization, about the new offering. Tom’s take on the Virtu announcement is that the HPC market is finally ready to once again make serious investments in visualization. “For a long time there has been a lot of focus [in HPC] on getting clusters right. Now we’ve reached a spot where we’re on good footing with clusters and we can start addressing other issues in the high performance ecosystem.” He added, “Visualization is the long pole in the performance computing tent now.”

SGI has wobbled in and out of the graphics market since the end of its market dominance around the start of this decade, but those efforts have never borne fruit. Soon after Bo Ewald returned to the CEO post at SGI he started making public comments about SGI’s return to high end graphics. On September 27 last year he remarked to a packed room at IDC’s HPC User Forum, “We will be back in the visual supercomputing business,” and then he added this quotable quote, “It was really stupid for the company to stop doing visualization types of things.”

This week the company confirmed that Bo wasn’t just talking off script at public events. Although the company’s press release on Tuesday focused on the Virtu VN200 graphics server, the offering also includes a graphics workstation — the Virtu VS series — and the Wide Area Visual Environment, or WAVE, to support the remote visualization needs of many customer sites.

First, the VN200. Built around two quad-core Xeons, the VN200 node is the server side of the SGI visual supercomputing equation. These nodes can run SLES, Red Hat, or Windows (as can the VS), and up to five VN200 nodes can be integrated in a single 4U enclosure. Multiple enclosures can be racked together, and the whole thing can be clustered right into your Altix big iron. Graphics are provided by NVIDIA Quadro FX graphics cards, one to a node.

This integration of compute and visualization gear is a key driver in SGI’s Virtu strategy. Again according to Reed, “We are ready to re-attach visualization back to computing… to bring visualization back as an integral part of the computing experience.” Despite the goal, this is still version 1.0 of the offering. The Virtu nodes don’t have NUMAlink capability, so they don’t share memory with each other or with the processors in the Altix side of the system. The VN200 is really a clustered graphics solution, all the way down to the node, and data for any cooperative rendering that’s done beyond the cores available in a node has to be managed explicitly using distributed memory semantics. Reed did indicate that they are looking to extend the shared memory model to Virtu nodes in the future.

This will be an important differentiator in a space where SGI is competing for graphics cluster business with companies like GraphStream, Verari, HP, and others, all of whom are essentially constructing solutions out of the same parts. Right now SGI says that a big part of their value add in the VN is that they have created a fully integrated solution, where drivers for the graphics cards and the IB ports don’t interfere with each other and everything “just works,” and VN200 nodes can be integrated with the HPC nodes generating the data.

While SGI hopes its customers start buying VN nodes with all of their Altix gear, the VN customer doesn’t have to integrate it with an Altix setup, or even have an Altix at all. VN nodes work just fine as the standalone centerpiece of an enterprise visualization solution.

Another area where improvements will be critical for product differentiation is in the rendering pipeline itself. According to Reed, SGI is going to add in some of VizServer’s collaboration technologies, strengthening the remote and collaborative aspects of the offering.

Though not mentioned in the information released for the launch, if you look at the Virtu offering you’ll notice that SGI has added a visual workstation to its lineup, the Virtu VS line of machines. Reed describes the VS line as important only for a few niche customers in some fairly specialized situations, and not a product they expect to focus on for most customers. According to him, “SGI is really not back in the [general purpose] workstation business. These systems are used almost exclusively as platforms for building visualization solutions for customers who need four or more graphics pipes in a single system (driving 4K projection systems, collaborative team rooms, etc).”

I was interested to know whether SGI is actually making this new Virtu gear. According to Reed, both the VS and the VN are manufactured outside of SGI, though he declined to disclose who those partners were. A source close to the HPC industry in Austin, Texas, identified one of the VS manufacturers as BOXX Technologies, an Austin-based company that specializes in making, according to the company’s web site, “high-performance computing platforms for Visual Effects (VFX) professionals.”

According to the press release, VN nodes start at $10,575; pricing for the VS units wasn’t available for this story. SGI doesn’t yet have customers ready to talk with the press, but Reed reports that at least three VN200 systems are being beta tested by customers.

So, nearly a decade after SGI lost or sold much of its critical IP (google for the NVIDIA patent bargain and the Microsoft IP sale), the company is trying to get back into what it calls the visual supercomputing business. Right now its offering appears to be incremental — commodity-based graphics capabilities in clusters, and commodity-based graphics in workstations, with some software to glue it all together. Because of who they are, they are likely to attract some customer interest on the basis of their heritage. In fact, the press release mentions SGI’s graphics legacy numerous times.

If they want to recapture the visual supercomputing business this is a reasonable first step. But it’s not enough to do more than get people’s attention.

The hard problem is that visual supercomputing really doesn’t exist as a distinct market anymore, thanks largely to the success of the commodity video card business. In order to precipitate this market back out of the commodity graphics solution it has dissolved into, the company needs to focus hard on leveraging the features that make its computational gear unique while layering on strong, value added visualization-specific features for petascale datasets. They need to create a graphics offering that is unique, and gives users something they cannot get by simply stitching together free software and gear they can buy anywhere.

SGI is still positioned to do this. A lot of valuable IP has remained with the company, in products like VizServer and others, that it can bring to the challenges we are facing today in the analysis of large data. Tom Reed is passionate when he talks about SGI’s commitment to long term leadership in visualization, describing this first step as a platform for SGI to innovate on with new advances in data management and fundamental approaches to data visualization.

This is SGI, and this is an industry that they once dominated. I believe that if anyone can do it, they can.
