Advancing the Power of Visualization

By Nicole Hemsoth

December 2, 2005

Finding “affordable visualization” with enough scalable horsepower to solve HPC's most demanding problems is often impossible for the scientists and engineers who need visualization capabilities to analyze large data sets. Working to improve the accessibility and affordability of visualization solutions, HP introduced the HP Scalable Visualization Array (SVA) on November 15, 2005. The SVA is a high-end scalable visualization solution that completes the company's Unified Cluster Portfolio, integrating computation, data management and visualization in a single cluster environment.

“Visualization,” explains Steve Briggs, HPCD's SVA product marketing manager, “is a critical capability enhancing the productivity and performance of HPC environments. To be of significant value to the HPC customer, visualization must be sharable, scalable, accessible, and affordable — attributes that were missing from the market until now.” 

HPCwire: What business problems are you solving with this product?

Briggs: Traditional large-scale visualization solutions are too expensive to maintain and upgrade and are built on proprietary technology. Small-scale visualization solutions are limited to single workstations that feature large memory but bounded rendering speeds. We believe high-performance computing customers – who use visualization for oil and gas, scientific research, simulation, and data mining applications – want Linux cluster capacity combined with the capability to handle huge data sets at an affordable price. The SVA does that.

HPCwire: Hasn't visualization always been around for clusters? What's new about this?

Briggs: Clustered visualization is relatively new. In fact, it wasn't until the late 1990s that WireGL addressed the rendering portion of the problem. And the seminal paper on compositing, “Parallel Volume Rendering Using Binary-Swap Image Composition” [Ma et al. 1994], was published just 11 years ago.
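(Editor's note: For readers unfamiliar with the technique, here is a minimal, sequential Python sketch of the binary-swap idea from the Ma et al. paper. It simulates the pairwise exchange rounds of a power-of-two set of renderers in a single process rather than over a real interconnect, so it illustrates the published algorithm, not HP's implementation.)

```python
import numpy as np

def composite(color_a, depth_a, color_b, depth_b):
    """Depth-composite two pixel sets: keep the sample nearer to the viewer."""
    nearer = depth_a <= depth_b
    return (np.where(nearer[:, None], color_a, color_b),
            np.where(nearer, depth_a, depth_b))

def binary_swap(colors, depths):
    """Simulate binary-swap compositing across n = 2**k renderers.

    Each renderer starts with a full-width image; after log2(n) pairwise
    exchange rounds, renderer i holds the fully composited pixels for a
    1/n-wide span, so compositing work is spread evenly across nodes."""
    n, width = len(colors), colors[0].shape[0]
    assert n & (n - 1) == 0, "binary-swap needs a power-of-two node count"
    spans, step = [(0, width)] * n, 1
    while step < n:
        next_colors = [c.copy() for c in colors]
        next_depths = [d.copy() for d in depths]
        next_spans = list(spans)
        for i in range(n):
            p = i ^ step                                 # partner this round
            lo, hi = spans[i]
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if i < p else (mid, hi)   # half we keep
            s = slice(lo, hi)
            next_colors[i][s], next_depths[i][s] = composite(
                colors[i][s], depths[i][s], colors[p][s], depths[p][s])
            next_spans[i] = (lo, hi)
        colors, depths, spans = next_colors, next_depths, next_spans
        step *= 2
    # Final gather: stitch each renderer's owned span into one image
    final_c, final_d = np.empty_like(colors[0]), np.empty_like(depths[0])
    for i, (lo, hi) in enumerate(spans):
        final_c[lo:hi], final_d[lo:hi] = colors[i][lo:hi], depths[i][lo:hi]
    return final_c, final_d

# Demo: four renderers, eight pixels, three color channels
rng = np.random.default_rng(0)
colors = [rng.random((8, 3)) for _ in range(4)]
depths = [rng.random(8) for _ in range(4)]
bs_color, _ = binary_swap(colors, depths)

# Reference: composite all images directly, one after another
ref_c, ref_d = colors[0], depths[0]
for c, d in zip(colors[1:], depths[1:]):
    ref_c, ref_d = composite(ref_c, ref_d, c, d)
assert np.allclose(bs_color, ref_c)
```

The point of the exchange pattern is that compositing work and final image ownership are divided evenly across all renderers, rather than funneling every pixel through one node.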

What's new is that HP's SVA is designed to do for high-performance visualization what clusters have done for supercomputing, which is make it affordable and accessible. Our solution distributes the rendering and provides for parallel compositing that eliminates bottlenecks that impede visualization and under-utilize the rendering engines. The use of industry-standard components drives affordability. And, as part of the Unified Cluster Portfolio, we complement the visualization technology with tools, applications and support to ensure successful production deployment.

Equally important to customers is the availability of applications that can take advantage of the visualization cluster technology. Applications such as Wolfram Research's gridMathematica, Infiscape's VRJuggler, CEI's EnSight, Visenso's COVISE, the open source Visualization Toolkit (VTK), the open source ParaView and others offer users real comfort in working with well-known, trusted applications, while obtaining performance that, just a few months ago, was either impossible to achieve or extremely expensive.

HPCwire: What products make up the SVA?

Briggs: The SVA consists of a cluster of HP workstations running Linux, commercially available, industry-standard graphics cards and network adaptors, and an integrated software system. Each node in the cluster is a high-performance HP workstation configured as either a render node or a display node, with one or more PCI Express x16 graphics cards. System software includes XC System Software, Scalable Visualization Array Software for configuration and job management, optional HP StorageWorks Scalable File Share (HP SFS) software for scalable storage, and optional HP Remote Graphics Software.

One of SVA's strongest attributes is flexibility. The SVA scales to support diverse visualization workloads, including multi-user, multi-tasking, and multi-session environments. It supports various visualization styles, models, and display systems, including single screens, caves and walls. In fact, the SVA technology can drive a vast, high-resolution display wall of 100 million pixels and more. The SVA works in three basic modes: as a cluster of independent workstations, as a cluster of synchronized workstations, or as a sort-last compositing cluster; it can also run as a combination of all three. Since the system offers job and resource management capability, customers aren't forced to choose a rigid configuration and can dynamically change their capabilities as their requirements change.
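(Editor's note: To put the 100-million-pixel figure in perspective, here is a back-of-the-envelope Python sizing sketch. The panel resolution and grid layout are hypothetical examples, not HP specifications.)

```python
# Rough sizing for a tiled display wall
panel_w, panel_h = 1920, 1200        # pixels per panel/projector (assumed)
cols, rows = 8, 6                    # one plausible tile layout (assumed)
wall_pixels = cols * panel_w * rows * panel_h
print(f"{cols}x{rows} wall = {wall_pixels / 1e6:.1f} Mpixels")  # 110.6 Mpixels
```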

HPCwire: How is this solution different from competitive products?

Briggs: First, there is the flexibility that we just talked about. Second, the HP SVA is affordable because it is built on state-of-the-art industry standards and open source technologies. This makes it the only off-the-shelf, high-performance visualization solution on the market. Third, the HP SVA is a true Linux cluster that integrates visualization, computation, and data management to solve the toughest HPC challenges, including remote visualization and collaborative visualization.

HPCwire: HP has a number of high performance visualization solutions, such as the SV7. What's new and different about SVA?

Briggs: The SVA builds on HP's decades of graphics expertise. (Editor's Note: HP acquired Apollo Computer, one of the original graphics workstation vendors, in May 1989.) Visualization is not a case of one-solution-fits-all, and HP is fortunate to have a broad portfolio of solutions. Our new xw9300 workstation is the first commercially available workstation to support two high-end 3D cards simultaneously. The HP Visualization Center, sv7, allows for accurate, real-time visualization of complete digital prototypes, permitting designers to visualize models with life-like 3D realism. So, while the sv7 has features suitable for CAD/CAM/workstation visualization, the SVA is suited to multi-user environments, with graphics features appropriate for scientific visualization, modeling and simulation, as well as geophysical exploration.

HPCwire: What are the scalability challenges with clusters and visualization? What's so different about SVA?

Briggs: From interconnects to pixel networks, visualization clusters must not only scale but have the horsepower to scale quickly. The HP SVA scales up easily, simply by increasing the number of nodes in the cluster. As an extension of a standard Linux cluster, the SVA behaves like any other cluster, and its integrated clustering capability simplifies administration and improves the distribution of resources to multiple users.

Scientists running visualization and computation applications generate huge datasets, which require significant rendering power to visualize. To handle those challenges, the HP SVA supports open source and commercial visualization software packages that drive high-resolution multi-tile displays and immersive environments, permit a mix of compute, render, and display nodes, and allow computational steering, so users can visualize results while they are being computed.
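(Editor's note: Computational steering means adjusting a running simulation based on what the visualization shows, rather than waiting for the run to finish. The minimal Python sketch below pairs a toy heat solver with a stand-in viewer thread; it illustrates the concept only and assumes nothing about the SVA's actual software interfaces.)

```python
import threading, queue, time
import numpy as np

# Shared steering state: the "display side" adjusts this while the solver runs
steer = {"heat_source": 1.0}
frames = queue.Queue(maxsize=4)   # backpressure between compute and display

def simulate(steps=50, n=64):
    """Toy 1-D heat solver: publishes a frame every step and reads the
    current steering parameter before each update."""
    field = np.zeros(n)
    for step in range(steps):
        field[n // 2] += steer["heat_source"]   # steered source term
        field += 0.25 * (np.roll(field, 1) - 2 * field + np.roll(field, -1))
        frames.put((step, field.copy()))        # hand the frame to the viewer
        time.sleep(0.01)                        # stand-in for real compute time
    frames.put(None)                            # end-of-run marker

def view():
    """Stand-in for a display node: consume frames as they arrive and
    'steer' the running computation partway through."""
    while (item := frames.get()) is not None:
        step, field = item
        if step % 10 == 0:
            print(f"step {step:2d}  peak={field.max():.3f}")
        if step == 25:
            steer["heat_source"] = 0.0          # steering decision mid-run

threading.Thread(target=simulate).start()
view()
```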

HPCwire: Is this only a solution for Linux clusters?

Briggs: At this time, yes. As Windows HPC becomes available, it'll make sense to support that, too.

HPCwire: How affordable is the SVA?

Briggs: The technology takes advantage of COTS components, open standards and open-source Linux – leveraging the tremendous advances made in readily available processors, graphics adaptors, interconnects, networks, clustering and middleware. The HP SVA costs about half as much as competitive products – and that includes installation. The architecture is modular, scalable and flexible, which gives not only significant technical benefits in solving grand-challenge problems but also pragmatic benefits in terms of manageability, reliability, upgradeability and affordability.

HPCwire: What about performance?

Briggs: As you know, gaming is driving graphics card volumes. NVIDIA is using the physics demands of gaming to introduce improved floating-point computation in its graphics cards. Graphics cards are doubling or tripling in performance every nine months, which makes for very powerful performance in the HP SVA today and guarantees improving performance in the future.
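(Editor's note: A quick arithmetic check of what a nine-month doubling period implies, assuming the trend Briggs cites holds:)

```python
# Compound growth if graphics performance doubles every nine months
for months in (9, 18, 36):
    print(f"after {months:2d} months: {2 ** (months / 9):.0f}x")
# after  9 months: 2x;  after 18 months: 4x;  after 36 months: 16x
```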

HPCwire: What's the downside? It can't be perfect.

Briggs: There are some rendering algorithms that are more suited to SMPs than clusters. HP, of course, offers a broad choice of SMPs as well as clusters.

HPCwire: Didn't you already announce this product a couple of years ago? What happened? Is this a different version?

Briggs: You are probably thinking of the Sepia project, and of the work of HP's Collaboration and Competency Network (HP CCN). HP CCN is an ongoing forum that facilitates wide-ranging collaboration, innovation, discovery, and competency-sharing between HP and high performance technical computing customers and partners. One of our topic areas is scalable visualization. There are opportunities at a variety of levels for interested parties to participate. For more information, visit http://www.hp.com/techservers/hpccn/index.html.

HPCwire: HP really promoted Sepia. What does Sepia add to this product? 

Briggs: Let me explain about Sepia. Sepia was a research program for the TriLabs (Los Alamos, Lawrence Livermore, and Sandia) under the ASC (Advanced Simulation and Computing) VIEWS (Visual Interactive Environment for Weapons Simulation) program. It challenged us to develop technology based on industry-standard components that would solve visualization problems which, at the time, required massive proprietary SMP machines. As our research on Sepia progressed, so did the performance of graphics chips, graphics cards, processors, and networks. Consequently, HP's development focus shifted to compositing with those commodity technologies and to developing industry-standard APIs for advanced compositing functions. CPU/GPU compositing handles spatial tiling, depth compositing, and alpha blending, meeting many of our customers' needs.
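(Editor's note: Of the operations Briggs names, depth compositing keeps the nearest sample per pixel, as in the binary-swap sketch earlier; alpha blending is the Porter-Duff “over” operator. Below is a minimal Python sketch, assuming premultiplied RGBA values in [0, 1].)

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' for premultiplied RGBA arrays in [0, 1]:
    result = front + (1 - alpha_front) * back, applied to all channels."""
    return front + (1.0 - front[..., 3:4]) * back

# Depth-sorted translucent layers, composited front to back
layers = [np.array([[0.2, 0.0, 0.0, 0.5]]),   # nearest: half-transparent red
          np.array([[0.0, 0.3, 0.0, 0.6]]),   # middle: green
          np.array([[0.0, 0.0, 0.4, 1.0]])]   # opaque blue background
image = layers[0]
for layer in layers[1:]:
    image = over(image, layer)
print(image)   # final premultiplied RGBA; alpha ends at 1.0
```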

Eliminating the dedicated Sepia hardware card resulted in significant cost savings for customers. That said, our research on Sepia produced software and algorithms that help increase SVA performance.

HPCwire: What are the HPC trends in visualization? What can we count on?

Briggs: What is true for HPC is true for visualization – that is “if more is better, too much is just right.” You can count on increased data sets, increased computational capacity, and an increased need to visually interpret terabytes to petabytes of data. That's why it is important to boost the value of visualization through increasing real-time interactivity, scalability, accessibility, and affordability.
