Optical Network Is Key to Next-Generation Research Cyberinfrastructure

By Tiffany Trader

June 11, 2008

At TeraGrid ’08 Conference, UC San Diego’s Smarr urges university campuses to remove network bottlenecks to supercomputer users

The director of the California Institute for Telecommunications and Information Technology (Calit2), a partnership of UC San Diego and UC Irvine, said today that all the pieces are in place for a revolution in the usability of remote high performance computers to advance science across many disciplines. He urged early-adopter application scientists to drive the creation of end-to-end dedicated lightpaths connecting remote supercomputers to their labs, greatly enhancing their local capability to visually analyze the massive datasets generated by TeraGrid's terascale-to-petascale computers.

In a featured keynote today at the TeraGrid '08 Conference, being held in Las Vegas this week, Calit2 Director Larry Smarr said "the last ten years have established the state, regional, national, and global optical networks needed for this revolution, but the bottleneck is on the user's campus." However, as a result of research funded by the National Science Foundation (NSF), there is now a clear path forward to removing this last bottleneck.

This opens the possibility for end users of the NSF’s TeraGrid to begin to adopt these optical network technologies. The TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities from the eleven partner sites around the country.

“The NSF-funded OptIPuter project [www.optiputer.net] has been exploring for six years how user-controlled, wide-area, high-bandwidth lightpaths — termed lambdas — on fiber optics can provide direct uncongested access to global data repositories, scientific instruments and high performance computational resources from the researchers’ Linux clusters in their campus laboratories,” said Smarr. “This research is now being rapidly adopted because universities are beginning to acquire lambda access through state or regional optical networks interconnected with the National LambdaRail, the Internet2 Dynamic Circuit Network, and the Global Lambda Integrated Facility.”

The OptIPuter project, led by Smarr, is not designed to scale to millions of sites like the normal shared Internet, but to create private networks with much higher levels of data volume, accuracy, and timeliness for a few data-intensive research and education sites. Led by Calit2, the San Diego Supercomputer Center (SDSC), and the University of Illinois at Chicago’s Electronic Visualization Laboratory (EVL), OptIPuter ties together the efforts of researchers from over a dozen campuses.

The OptIPuter uses dedicated lightpaths to form end-to-end uncongested 1- or 10-Gbps Internet protocol (IP) networks. The OptIPuter’s dedicated network infrastructure – and supporting software – has a number of significant advantages over shared Internet connections, including high bandwidth, controlled performance (no jitter), lower cost per unit bandwidth, and security. “The OptIPuter essentially completes the Grid program,” said Smarr. “In addition to allowing the end user to discover, reserve, and integrate remote computers, storage, and instruments, the OptIPuter enables the user to do the same for dedicated lambdas, creating a high-performance LambdaGrid.”

In his talk, Smarr described how the user-configurable OptIPuter global platform is already being used for research in collaborative work environments, digital cinema, biomedical instrumentation, and marine microbial metagenomics. He issued a challenge to the TeraGrid users to begin to adopt this technology to support remote use of the TeraGrid resources.

“OptIPuter technologies can enhance the ability of scientists to use remote high-performance computing resources from their local labs, particularly applications with persistent large data flows, real-time visualization and collaboration, and remote steering,” Smarr said.

A key OptIPuter technology, the OptIPortal, was prototyped by EVL and developed by Calit2 under the NSF-funded OptIPuter partnership. The OptIPortal is a networked, scalable, high-resolution LCD tiled display system driven by a PC graphics cluster. Designed for the user's laboratory, each OptIPortal can be constructed with commodity commercial displays and processors. While most of the PC clusters run Linux, some run on Mac clusters (Calit2@UC Irvine and UCSD's Scripps Institution of Oceanography) or on Windows clusters (UCSD's National Center for Microscopy and Imaging Research).

“OptIPortals are the appropriate termination device for 10Gbps lambdas, allowing the end user to choose the right amount of local storage, compute, and graphics capacity needed for their application,” said Smarr. “In addition, the tiled walls provide the scalable pixel real estate necessary to analyze visually the complexity of supercomputing runs.”

The OptIPuter project prefers OptIPortal clusters to run on SDSC's Rocks, an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls. Rocks is developed under an NSF-funded SDCI project led by SDSC's Philip Papadopoulos, who is also a co-principal investigator on the OptIPuter project. There are currently over 1,300 registered clusters running Rocks, providing a global and vibrant open-source software community. The Rocks "Rolls" provide a convenient method of distribution of software innovations coming from community members.

OptIPortals range in size from four to 60 tiles, offering screen resolutions from 8 million pixels to the nearly quarter-billion-pixel HIPerSpace wall, the highest-resolution display system in the world, located in the Calit2 building on the UCSD campus. OptIPortals need not be restricted to planar tiled walls, Smarr said, showing pictures of Calit2's StarCAVE immersive environment, driven by 34 high-definition projectors, and a 60-LCD semi-cylindrical autostereo Varrier tiled wall. Both provide three-dimensional virtual reality, are driven by the same type of Linux clusters that drive the HIPerSpace wall, and are connected at multiples of 10Gbps to the OptIPuter.

To handle multi-gigabit video streams, OptIPuter researchers at EVL developed the Scalable Adaptive Graphics Environment (SAGE), specialized graphics middleware that supports collaborative scientific visualization environments with potentially hundreds of megapixels of contiguous display resolution. In collaborative scientific visualization, it is crucial to share high-resolution imagery as well as high-definition video among groups of collaborators at local or remote sites.

SAGE enables the real-time streaming of extremely high-resolution content — such as ultra-high-resolution 2D and 3D computer graphics from remote rendering and compute clusters and storage devices, as well as high-definition video camera output — to scalable tiled display walls over high-speed networks. SAGE serves as a window manager, allowing users to move, resize, and overlap windows as easily as on standard desktop computers. SAGE also provides standard desktop collaboration tools, such as an image viewer, a video player, and desktop sharing, enabling participants to resize, pan, zoom, and move through the data.
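The core tiling idea behind this kind of streaming can be illustrated with a short sketch. This is not the actual SAGE API — the function and layout below are hypothetical — but it shows how a single large frame is partitioned into rectangles, each routed to the display node that owns that region of the wall:

```python
# Illustrative sketch of routing regions of one large frame to the
# per-tile display nodes of a tiled wall. NOT the SAGE API; names
# and geometry are hypothetical.

def tile_regions(frame_w, frame_h, cols, rows):
    """Partition a frame_w x frame_h frame into cols x rows tile
    rectangles, one per display node, as {(col, row): (x, y, w, h)}."""
    tile_w, tile_h = frame_w // cols, frame_h // rows
    regions = {}
    for r in range(rows):
        for c in range(cols):
            x, y = c * tile_w, r * tile_h
            # The last column/row absorbs any remainder pixels.
            w = frame_w - x if c == cols - 1 else tile_w
            h = frame_h - y if r == rows - 1 else tile_h
            regions[(c, r)] = (x, y, w, h)
    return regions

# A hypothetical 4x2 wall of 2560x1600 panels showing one ~33-megapixel frame:
regions = tile_regions(10240, 3200, cols=4, rows=2)
print(regions[(0, 0)])  # region streamed to the top-left node: (0, 0, 2560, 1600)
```

A real streaming middleware would then send only each node's rectangle of every frame, so no single machine has to decode or push the full wall's pixels.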

In addition to SAGE, other windowing software environments have been developed by research groups that were not part of the original NSF proposal, including the Calit2 lab of UCSD Professor Falko Kuester, developer of CGLX, which allows OpenGL applications to be displayed across a visualization cluster driving a tiled display.

Although scalable visualization displays have been under development for over a decade, first as arrays of projectors, the use of commodity hardware and open-source software in the OptIPortal makes this visualization power affordable to individual researchers: a typical N-tile wall costs about the same as N/2 deskside PCs. As a result, adoption of OptIPortals has been rapid over the past two years. Besides the United States, OptIPortals are installed in Australia, Taiwan, China, Japan, Korea, Canada, the UK, the Netherlands, Switzerland, the Czech Republic, and Russia, as well as at a number of corporations.
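The cost rule of thumb above — an N-tile wall for roughly the price of N/2 deskside PCs — is easy to make concrete. The per-PC dollar figure below is a hypothetical placeholder, not from the article:

```python
# Back-of-the-envelope estimate for the article's rule of thumb:
# an N-tile OptIPortal costs about as much as N/2 deskside PCs.
# The $1,500 per-PC price is an assumed placeholder value.

def optiportal_cost_estimate(n_tiles, pc_cost=1500):
    """Estimated wall cost under the 'N tiles ~ N/2 PCs' rule."""
    return (n_tiles / 2) * pc_cost

print(optiportal_cost_estimate(20))  # a 20-tile wall ~ 10 PCs -> 15000.0
```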

However, there has been a critical "missing link" blocking widespread adoption of the OptIPuter/OptIPortal metacomputer: few campuses have installed the optical fiber paths needed to connect from the regional optical network campus gateway to the end user. Smarr quoted NSF Director Arden Bement, who three years ago said prophetically: "Those massive conduits [e.g., NLR lambdas] are reduced to two-lane roads at most college and university campuses. Improving cyberinfrastructure will transform the capabilities of campus-based scientists."

To make effective use of the 10Gbps lightpaths from the TeraGrid resources to the campus gateways, Smarr said, “the user’s campus must invest in the equivalent of city ‘data freeway’ systems of switched optical fibers connecting the campus gateway to specific buildings and inside the buildings to the user’s lab.”

A full-scale experiment of this vision is underway at UCSD with funds provided by the campus and by an NSF-funded Major Research Instrumentation grant called Quartzite, with SDSC's Papadopoulos as PI and Calit2's Smarr as one of the co-PIs. The Quartzite optical infrastructure is a hybrid packet- and circuit-switched environment interconnecting over 45 installed 10Gbps channels crisscrossing the UC San Diego campus, with 15 more planned by the end of this year. More than 400 endpoints are connected to Quartzite through access switches or direct connection to the core switch. Geographically, these endpoints span seven buildings and 17 laboratories within them. Large projects such as CAMERA and CineGrid use Quartzite directly.

The Quartzite switching complex can switch packets, wavelengths, or entire fiber paths, allowing fast configuration, under software control, of the different types of network layouts and capabilities required by the end user. This year the optical complex will provide an aggregate bandwidth of roughly half a terabit per second from dedicated lightpaths coming into a central, reconfigurable switching complex and from there connecting to UCSD researchers. The testbed also enables a broad set of "Green Cyberinfrastructure" research projects to be conducted at campus scale. As a result, UCSD can experiment with one model of the "campus of the future," from which robust solutions can be provided to other interested campuses.

“Quartzite provides the ‘golden spike’ which allows completion of end-to-end 10Gbps lightpaths running from TeraGrid sites to the remote user’s lab,” said Smarr, adding: “Like the OptIPortal, Quartzite was designed using commercial technologies that can be easily installed on any campus.”

With this complete end-to-end OptIPuter now in hand, the stage is set for a wide variety of applications to be developed over this global high performance cyberinfrastructure. “When we were conceptualizing the OptIPuter seven years ago, I always thought that remote supercomputer users would provide the killer applications,” said Smarr, the founding director in 1985 of the National Center for Supercomputing Applications (NCSA). “TeraGrid users are located in research campuses across the nation, but they all share the characteristic that they need to carry out interactive visual analysis of massive datasets generated by a remote supercomputer.”

Smarr showed a number of DoE, NASA, and NSF supercomputer centers that have large tiled projector walls on site for visual analysis of such massive datasets. "The time has come to take that capability out to end users in their labs with local OptIPortals connected to the supercomputer center using the OptIPuter," said Smarr. "I believe that we will see early adopters step forward in the next year to set up prototypes of this cyberarchitecture."

Smarr described the work of one such early adopter, Michael Norman, UCSD Professor of Physics, recently named SDSC’s Chief Scientific Officer. Norman is designing an OptIPortal in the new SDSC building, to be dedicated in October 2008, for use by his Laboratory for Computational Astrophysics. It will be connected over the UCSD optical complex described above to the TeraGrid 10Gbps backbone and National LambdaRail and used to visualize results from his cosmology simulations on the NSF’s Petascale Track II machines at the Texas Advanced Computing Center and at the University of Tennessee/Oak Ridge National Laboratory’s National Institute for Computational Sciences. Norman plans to stage and analyze the terabytes of data generated at SDSC, using the campus optical fiber network to move the data into specialized OptIPortals at Calit2, such as the StarCAVE and HIPerSpace wall.

To make this OptIPuter distributed analysis more efficient, EVL has developed LambdaRAM, which can prefetch data from disk storage and temporarily store it in the cluster’s Random Access Memory (RAM), masking the substantial disk I/O latency, and then move the data from this “staging” computer to the computer running the simulation. Smarr showed how NASA Goddard Space Flight Center in Maryland uses the OptIPuter and LambdaRAM to optimize the use of NLR for severe storm and hurricane forecasts carried out at the Project Columbia supercomputer at NASA Ames in Mountain View, California, and to zoom and pan interactively through ultra-high-resolution images on local OptIPortals at Goddard. EVL modified LambdaRAM so that it would work seamlessly with legacy applications to locally access large data files generated by the remote supercomputer.

Finally, Smarr described how, with the integration of high definition and digital cinema video streams, which easily fit inside a 10Gbps lightpath, the OptIPuter architecture is rapidly creating an OptIPlanet Collaboratory in which multiple scientists can analyze a complex dataset while seeing and talking to each other as if they were physically in the same room. Smarr showed photos of “telepresence” sessions in January and May 2008 where this was demonstrated on a global basis between Calit2 at UC San Diego and the 100-Megapixel ‘OzIPortal,’ constructed earlier this year at the University of Melbourne in Australia, connected over a transpacific gigabit lightpath on Australia’s Academic and Research Network (AARNet). “Petascale problems will require geographically distributed multidisciplinary teams analyzing enormous data sets — a perfect application of the OptIPlanet Collaboratory,” said Smarr.

In conclusion, Smarr said, “After a decade of research carried out at dozens of institutions, we are seeing the OptIPuter take off on a global basis. I look forward to working with many of the TeraGrid ’08 participants as they become early adopters of this innovative, high performance cyberinfrastructure — rebalancing the local analysis and network connectivity with the awesome growth NSF has made possible in the emerging petascale computers.”

-----

Source: Calit2 and SDSC
