The OptIPuter Gets Real

By Michael Feldman

January 27, 2006

Last week, the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2) and the J. Craig Venter Institute announced that they would collaborate to decipher the genetic code of the world's marine microbiological communities. This project, the Community Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA), will use the OptIPuter model developed at Calit2 as the architecture for its computational resources. According to Larry Smarr, Calit2 director and principal investigator for the CAMERA project, this work will represent “the first persistent application of the OptIPuter.”

[Photo: Larry Smarr, Calit2 director and principal investigator for the CAMERA project.]

Named for its use of Optical networking, Internet Protocol, computer storage, processing and visualization technologies, the OptIPuter is an infrastructure that links computational resources over optical networks using the IP communication mechanism. The OptIPuter's central architectural element is the optical network, not the computer. The goal of this architecture is to enable researchers who are generating large volumes of data to interactively visualize, analyze, and correlate their data from distributed sites.

Bringing it Together

“The OptIPuter from the beginning was driven by scientific applications — in particular biomedical imaging, earth and ocean science,” explained Smarr. “But it was really more for individuals who had some large datasets that we could play with — the scientific research project itself was the OptIPuter as computer science.”

The OptIPuter project, funded by the NSF, was in the fourth year of its five-year term. Smarr knew that he would have to look for an opportunity in the scientific community to take the project to the next level. Coincidentally, the Gordon and Betty Moore Foundation was looking for just the type of computational resource provided by the OptIPuter.

“It just fortuitously happened that the Moore foundation asked me to become the principal investigator for [the CAMERA project] and for Calit2 to put this together at just about the time we would have been looking for such a project in the scientific community to go to the next step — proof-of-principle,” said Smarr. “So this is really good timing, from our point of view.”

It wasn't all just luck, however. Larry Smarr, as one of the luminaries in the field, is well known for his contributions to the information technology community, from his early involvement in the original Mosaic web browser at NCSA to his current work as the founding director of Calit2. David Kingsbury, the science program officer at the Moore foundation, was well aware of Smarr's work.

“David has been driving computational biology for 20 years – first at the NSF, then with Chiron Corporation and then with the DOE for a while,” said Smarr. “I've known about him since my NCSA days and we'd interacted in the past. It turns out he was also selected by the UC office of the president as one of the team of reviewers for the Calit2 proposal back in 2000. So he was aware of my track record both at NCSA and then at Calit2.

“David was looking for a place that wanted to live in the future — beyond the leading edge of technology, but was driven by science. He knew that one of the four major application areas for Calit2 was digitally enabled genomic medicine. As a result we had collected a number of leaders at Calit2 in computational biology and bioinformatics. We had an in-house capability in both leading edge information technology and computational biology. So this seemed like the right group for him.”

Prior to Calit2's involvement, the Moore foundation had been funding the Venter Institute for several years to collect marine microorganisms and sequence their DNA. The Institute's marine expeditions collect water samples from a wide variety of locations around the world. While at sea, the microbes are filtered from the seawater and then frozen for transport back to the Venter labs in Rockville, Maryland. After the samples are brought back to the Institute, the real fun begins.

[Photo: Bacteria slide from Venter Institute’s Sorcerer II Expedition.]

“They shotgun sequence the whole lot, so you end up with this very complex genomic map of a whole community of microorganisms that have adapted to this local environment,” said Smarr. “Each sample can contain thousands of species.”

But that's the easy part. They also need to correlate what the environment was like when the sample was collected — the temperature, pH, salinity, as well as the local ocean context (obtained from NASA satellite images).

“So from a computer science point of view, you need to have a broad set of data types, large volumes of data, and a whole set of software tools that have to be applied to that data to get the science out,” explained Smarr. “There really wasn't any existing science complex that was set up to do this. So they literally wanted us to architect a new kind of science data server that would not just satisfy this particular metagenomics project but would be a 21st century architecture that would have five to ten years of legs on it.”
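To make the data-integration problem concrete, here is a minimal, hypothetical sketch of the kind of mixed record a single sample might produce: shotgun sequence reads bundled with the environmental context (temperature, pH, salinity, a satellite imagery reference) that has to travel with them. The field names and example values are illustrative assumptions, not part of the actual CAMERA data model.

```python
# Hypothetical record structure for one marine metagenomic sample.
# Field names and example values are illustrative only; they are not
# drawn from the real CAMERA schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EnvironmentalContext:
    latitude: float           # sampling location
    longitude: float
    temperature_c: float      # water temperature at collection time
    ph: float
    salinity_psu: float       # salinity in practical salinity units
    satellite_scene_id: str   # reference to NASA ocean-context imagery

@dataclass
class MetagenomicSample:
    sample_id: str
    collection_date: str
    context: EnvironmentalContext
    reads: List[str] = field(default_factory=list)  # shotgun sequence fragments

# One sample: a few short reads plus the metadata needed to correlate them
# with the conditions under which they were collected.
sample = MetagenomicSample(
    sample_id="demo-sample-001",
    collection_date="2004-08-08",
    context=EnvironmentalContext(32.8, -117.3, 19.5, 8.1, 33.5, "scene-0042"),
    reads=["ACGTTGCAGGTA", "TTGACCGTAACG"],
)
print(sample.sample_id, "contains", len(sample.reads), "reads")
```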

Smarr envisions the expansion of this technology in two directions. First, he believes this project will help to accelerate metagenomics research, not just for the marine environment, but for other microbial ecosystems as well. Second, as they prove their ability to support distributed teams in these virtual “collaboratories,” Smarr expects to see the technology translated into many other scientific disciplines, such as astronomy and chemistry.

“What is exciting about this is that it's taking both frontier science and combining it with frontier cyberinfrastructure,” said Smarr.

Metagenomics Unleashed

Besides basic scientific discovery, there are several potential applications for metagenomic research. According to Smarr, a number of companies are already looking at marine microorganisms for new drugs, the way they have with soil-based microorganisms. Exciting biofuel applications are also being considered, for example the production of hydrogen and ethanol from microbial metabolism.

“Craig [Venter] is particularly interested in hydrogen fuel. One of the things some of these marine microbes do is to produce hydrogen as a waste product,” said Smarr. “So there's this notion of creating synthetic bacteria that have a specific engineering or health application in which you would insert the gene sequence for a certain kind of activity into a worker microbe. That's a whole new industrial revolution.”

Smarr also projects how the technology can be applied directly to other microbial ecosystems. For example, the microorganisms inside the large intestine were recently shotgun sequenced by Stanford researchers. Soil microorganisms, the source of many drugs such as penicillin, are another likely target for metagenomics. Even airborne dust particles can be biologically active; they are currently being studied in relation to the mold problems that followed Hurricane Katrina.

“What we expect to do is reach out to these other scientific projects that are developing microbial metagenomics and see if they're interested in working with us on this new architecture,” said Smarr. “I think the biological community has been calling for us in the computer science community to help them with their exploding data problem. We want to become more visible in that community simply because we think this answers what they've been searching for.”

The OptIPuter Paradigm

The OptIPuter model is based on the ability of optical networks to move data around at speeds of tens of gigabits per second over dedicated lambdas. Significantly, the increases in optical network bandwidth and storage capacity are outstripping the increases in CPU performance. As a result, “Moore's Law” is not driving information technology the way it used to (ironic when you consider that Gordon Moore, the originator of “Moore's Law,” is now funding this project through his Foundation).

The OptIPuter exploits the enormous bandwidth of fiber optic networks to link distributed computer and storage resources. With the recent expansion of National LambdaRail as the optical backbone for cross-country connectivity, Smarr believes we're entering a critical stage for technological change.

“This is a one-in-twenty-year transition point,” said Smarr, “going back to 1985, when the NSF built the first backbone for the shared Internet. Now National LambdaRail has built the first backbone for the unshared Internet. At present, there are about two dozen state and regional optical networks that are interconnecting to National LambdaRail. The campuses are beginning to put fiber optics into their actual laboratories, and connecting these to the state and regional optical networks which are then connected to National LambdaRail.”

The adoption of computer clusters as a standard tool for high performance scientific computing is another factor that is driving the transition to faster interconnects.

“What's the natural I/O speed for a Linux cluster?” asks Smarr. “The average-sized cluster at the University of California is about 32 nodes. The typical network card is Gigabit Ethernet. So your Linux cluster wants to talk to the rest of the world at 32 gigabits per second. But even using Internet2, you're lucky to get more than 50 Mbps. You're off by two or three orders of magnitude. We think we're connected to the data, our colleagues and remote instruments by the Internet. In fact, we're incredibly cut off from them. We live in little data islands and compute islands.

“So that was the fundamental insight that led us to work on these optical networks. It wasn't that optical networks were cool and we were looking for something to do with them. It was that the scientific community had decided on Linux clusters as their standard, and their natural need for a wide area network was clearly in the gigabits and tens of gigabits per second range. So we looked around for a technology that could provide this and found that the telecom industry had evolved to the point where the natural data flow on their individual lambdas was 10 gigabits per second.”
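Smarr's back-of-the-envelope comparison can be checked directly. The short sketch below simply redoes the arithmetic from the quote above, using the figures he cites (32 Gigabit Ethernet nodes versus roughly 50 Mbps achieved over the shared Internet).

```python
# Reproduce the bandwidth-mismatch arithmetic from Smarr's example.
# All numbers are the ones quoted in the article, rounded for illustration.
nodes = 32
nic_gbps = 1.0                        # Gigabit Ethernet per node
cluster_io_gbps = nodes * nic_gbps    # aggregate "natural" I/O: 32 Gb/s

wan_mbps = 50.0                       # typical achieved rate over the shared Internet
wan_gbps = wan_mbps / 1000.0

shortfall = cluster_io_gbps / wan_gbps
print(f"Cluster aggregate I/O: {cluster_io_gbps:.0f} Gb/s")
print(f"Wide-area throughput:  {wan_gbps:.2f} Gb/s")
print(f"Mismatch: ~{shortfall:.0f}x, i.e. between two and three orders of magnitude")
```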

The Science Server

As part of the CAMERA project, Calit2 will partner with UCSD's San Diego Supercomputer Center (SDSC) to develop the science data server complex, which couples the Calit2 and SDSC middleware, compute, and storage capabilities with the TeraGrid computing facility in a Service Oriented Architecture. This will allow computing resources to be applied, through a range of tools, to the computationally intense questions arising from the metagenomic data collection.
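As a rough illustration of the service-oriented idea, and not of the actual CAMERA or SDSC interfaces, the sketch below shows the pattern of wrapping a storage tier and an analysis tool behind service objects, so that a client works entirely through service calls rather than touching the data directly. All class and method names here are hypothetical.

```python
# Minimal, hypothetical sketch of "tools exposed as services" over shared data.
# None of these names come from the real CAMERA/SDSC design.
from typing import Dict, List

class SequenceStore:
    """Stands in for the replicated storage tier."""
    def __init__(self) -> None:
        self._samples: Dict[str, List[str]] = {}

    def put(self, sample_id: str, reads: List[str]) -> None:
        self._samples[sample_id] = reads

    def get(self, sample_id: str) -> List[str]:
        return self._samples.get(sample_id, [])

class AnalysisService:
    """Stands in for a compute-side tool offered as a service over the store."""
    def __init__(self, store: SequenceStore) -> None:
        self.store = store

    def count_matches(self, sample_id: str, motif: str) -> int:
        # Trivial placeholder analysis: count reads containing a motif.
        return sum(motif in read for read in self.store.get(sample_id))

# Usage: the client never handles the data itself; it goes through services.
store = SequenceStore()
store.put("demo", ["ACGTTGCA", "TTGACCGT", "ACGTACGT"])
svc = AnalysisService(store)
print(svc.count_matches("demo", "ACGT"))  # -> 2
```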

SDSC's Philip Papadopoulos is the co-PI for the CAMERA project and the architect for the data computation and storage server. According to Smarr, this is going to be a very advanced server. Calit2 is working closely with Dell and Sun Microsystems to build one of the most advanced compute and storage systems ever put together. At completion, it should have at least 1,000 processors and contain several hundred terabytes of replicated data storage.

“What's most exciting is that the core of this compute and storage complex is not a computer; it's a 10-gigabit optical fabric,” said Smarr. “Everything is built as peripherals around that. We're working with the vendors to get extremely high I/O storage.”

When the CAMERA science server is developed, it will appear as a network appliance, albeit a very powerful one. This could be one of the most important milestones for the TeraGrid. Here's how Smarr explains it:

“This is the first science data server that has been architected to direct-connect to your local cluster through the National LambdaRail. What we've done with this server is make it the first TeraGrid appliance. In other words, we're linking directly into the TeraGrid lambdas from our science server. So as a user, when you connect to the science server, it now appears to be just an extension of your local cluster. Over the next few years the TeraGrid will expand to tens of thousands of processors, so you'll get orders of magnitude increases in power by plugging into the TeraGrid. It should all appear as if it's in your laboratory. And that's the vision!”

To learn about another OptIPuter application, read Optical Race, the next feature article in this week's issue.
