April 11, 2008
In the 1980s and early 1990s, if you were doing anything serious in computer graphics you were doing it with SGI gear. Then a series of strategic missteps and the emergence of incredibly powerful, cheap graphics cards for PCs made the company's graphics lines irrelevant, and for nearly a decade they've survived as a server company. The SGI Virtu line announced this week steers SGI back into graphics for what it says is the long haul. But is there a market left to capture?
I spoke to Tom Reed, SGI's Director of Visualization, about the new offering. Tom's take on the Virtu announcement is that the HPC market is finally ready to once again make serious investments in visualization. "For a long time there has been a lot of focus [in HPC] on getting clusters right. Now we've reached a spot where we're on good footing with clusters and we can start addressing other issues in the high performance ecosystem." He added, "Visualization is the long pole in the performance computing tent now."
SGI has wobbled in and out of the graphics market since the end of its market dominance around the start of this decade, but those efforts have never borne fruit. Soon after Bo Ewald returned to the CEO post at SGI, he started making public comments about SGI's return to high-end graphics. On September 27 last year he remarked to a packed room at IDC's HPC User Forum, "We will be back in the visual supercomputing business," and then he added this quotable quote: "It was really stupid for the company to stop doing visualization types of things."
This week the company confirmed that Bo wasn't just talking off script at public events. Although the company's press release on Tuesday focused on the Virtu VN200 graphics server, the offering also includes a graphics workstation -- the Virtu VS series -- and the Wide Area Visual Environment, or WAVE, to support the remote visualization needs of many customer sites.
First, the VN200. Built around two quad-core Xeons, the VN200 node is the server side of the SGI visual supercomputing equation. These nodes can run SLES, Red Hat, or Windows (as can the VS), and up to five VN200 nodes can be integrated in a single 4U enclosure. Multiple enclosures can be racked together, and the whole thing can be clustered right into your Altix big iron. Graphics are provided by NVIDIA Quadro FX graphics cards, one to a node.
This integration of compute and visualization gear is a key driver in SGI's Virtu strategy. Again according to Reed, "We are ready to re-attach visualization back to computing... to bring visualization back as an integral part of the computing experience." Despite the goal, this is still version 1.0 of the offering. The Virtu nodes don't have NUMAlink capability, so they don't share memory with each other or with the processors in the Altix side of the system. The VN200 is really a clustered graphics solution, all the way down to the node, and data for any cooperative rendering that's done beyond the cores available in a node has to be managed explicitly using distributed memory semantics. Reed did indicate that they are looking to extend the shared memory model to Virtu nodes in the future.
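To make that distinction concrete, here is a minimal, hypothetical sketch (plain Python; all names are invented for illustration and are not SGI's API) of sort-last compositing under distributed-memory semantics. Because the VN200 nodes lack NUMAlink, no node can simply read another node's framebuffer: each node renders its partition of the scene locally, and the final image has to be assembled from fragments moved by explicit messages, which is the data management burden a shared-memory model would remove.

```python
# Hypothetical sketch: sort-last compositing with explicit data movement,
# as required when render nodes do not share memory (no NUMAlink).
# The message passing is simulated with a Python list; on a real cluster
# each entry would be an explicit network send to the compositor.

def render_partition(node_id, width):
    """Each node renders its share of the scene, producing a
    (depth, color) fragment per pixel. Here: a trivial synthetic scene."""
    return [((node_id * 3 + x) % 7, f"n{node_id}px{x}") for x in range(width)]

def composite(fragments_per_node):
    """The compositor receives every node's fragments as explicit
    messages and keeps the nearest (smallest-depth) fragment per pixel."""
    width = len(fragments_per_node[0])
    image = []
    for x in range(width):
        depth, color = min(frags[x] for frags in fragments_per_node)
        image.append(color)
    return image

# A "cluster" of 3 render nodes; every fragment list is an explicit copy
# shipped to the compositor, not a pointer into shared memory.
messages = [render_partition(node_id, 4) for node_id in range(3)]
final = composite(messages)
print(final)  # one winning fragment per pixel
```

If the Virtu nodes gain the shared-memory model Reed alludes to, the explicit `messages` step disappears: the compositor could walk every node's framebuffer directly through the shared address space.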
This will be an important differentiator in a space where SGI is competing for graphics cluster business with companies like GraphStream, Verari, HP, and others, all of whom are essentially constructing solutions out of the same parts. Right now SGI says that a big part of their value add in the VN is that they have created a fully integrated solution, where drivers for the graphics cards and the IB ports don't interfere with each other and everything "just works," and VN200 nodes can be integrated with the HPC nodes generating the data.
While SGI hopes its customers start buying VN nodes with all of their Altix gear, the VN customer doesn't have to integrate it with an Altix setup, or even have an Altix at all. VN nodes work just fine as the standalone centerpiece of an enterprise visualization solution.
Another area where improvements will be critical for product differentiation is in the rendering pipeline itself. According to Reed, SGI is going to add in some of VizServer's collaboration technologies, strengthening the remote and collaborative aspects of the offering.
Though not mentioned in the information released for the launch, if you look at the Virtu offering you'll notice that SGI has added a visual workstation to its lineup, the Virtu VS line of machines. Reed describes the VS line as important only for a few niche customers in some fairly specialized situations, and not a product they expect to focus on for most customers. According to him, "SGI is really not back in the [general purpose] workstation business. These systems are used almost exclusively as platforms for building visualization solutions for customers who need four or more graphics pipes in a single system (driving 4K projection systems, collaborative team rooms, etc)."
I was interested to know whether SGI is actually making this new Virtu gear. According to Reed, both the VS and the VN are manufactured outside of SGI, though he declined to disclose who those partners were. A source close to the HPC industry in Austin, Texas, identified one of the VS manufacturers as BOXX Technologies, an Austin-based company that specializes in making, according to the company's web site, "high-performance computing platforms for Visual Effects (VFX) professionals."
According to the press release, VN nodes start at $10,575; pricing for the VS units wasn't available for this story. SGI doesn't yet have customers ready to talk with the press, but Reed reports that at least three VN200 systems are being beta tested by customers.
So, nearly a decade after SGI lost or sold much of its critical IP (google the NVIDIA patent bargain and the Microsoft IP sale), the company is trying to get back into what it calls the visual supercomputing business. Right now the offering appears to be incremental -- commodity-based graphics capabilities in clusters, and commodity-based graphics in workstations, with some software to glue it all together. Because of who they are, they are likely to attract some customer interest on the basis of their heritage. In fact, the press release mentions SGI's graphics legacy numerous times.
If they want to recapture the visual supercomputing business, this is a reasonable first step. But by itself it won't do much more than get people's attention.
The hard problem is that visual supercomputing really doesn't exist as a distinct market anymore, thanks largely to the success of the commodity video card business. In order to precipitate this market back out of the commodity graphics solution it has dissolved into, the company needs to focus hard on leveraging the features that make its computational gear unique while layering on strong, value added visualization-specific features for petascale datasets. They need to create a graphics offering that is unique, and gives users something they cannot get by simply stitching together free software and gear they can buy anywhere.
SGI is still positioned to do this. A lot of valuable IP has remained with the company, in products like VizServer and others, that it can bring to the challenges we face today in the analysis of large data. Tom Reed is passionate when he talks about SGI's commitment to long-term leadership in visualization, with this first step creating a platform for SGI to innovate on with new advances in data management and fundamental approaches to data visualization.
This is SGI, and this is an industry that they once dominated. I believe that if anyone can do it, they can.