Roughly three months into early operations, the Bridges computing resource being deployed at the Pittsburgh Supercomputing Center (PSC) is bearing fruit. Designed to accommodate both traditional HPC and big data analytics, Bridges had supported 245 projects as of May 26. This ramp-up of the NSF-funded ($9.6M) Bridges project is an important step in delivering practical convergence.
Bridges is being launched in two phases through 2016, with the first phase – comprising the computational, web server, database and data transfer nodes (details below) – completed this spring. When complete, Bridges will provide “1.3 Pf/s, 274 TB RAM, not including database, web server, or other utility nodes, 10PB of shared storage in the Pylon file system and more than 6PB of node-local storage.” Hewlett Packard Enterprise, Intel, and NVIDIA are the primary hardware vendors, with software developed by PSC.
A key distinction among Bridges’ computational nodes is the amount of RAM. There are three tiers: Regular Shared Memory (RSM) nodes, with 128GB each; Large Shared Memory (LSM) nodes, with 3TB each; and Extreme Shared Memory (ESM) nodes, with 12TB each. The idea is to provide richly connected, interacting systems that “offer exceptional flexibility for data analytics, simulation, workflows and gateways, leveraging interactivity, parallel computing, Spark and Hadoop.”
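For readers curious what the analytics side looks like in practice, here is a minimal PySpark sketch of the kind of Spark job Bridges is built to host alongside traditional HPC work. The input path and application name are hypothetical placeholders, and the actual Spark configuration on Bridges may differ.

```python
# Minimal PySpark sketch of a data-analytics job of the sort Bridges supports.
# The input path "reads.txt" is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bridges-analytics-sketch").getOrCreate()

# Load a (hypothetical) text dataset and count token frequencies in parallel.
lines = spark.read.text("reads.txt")
counts = (lines.rdd
          .flatMap(lambda row: row.value.split())
          .map(lambda token: (token, 1))
          .reduceByKey(lambda a, b: a + b))

for token, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(token, n)

spark.stop()
```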
Convergence, of course, is getting a lot of attention, and Bridges is expected to prove its value in practice. Much of the early work has focused on life sciences research, as these examples show:
- Infectious Disease Tracking. Bridges’ first users were the infectious disease experts of the National Institutes of Health-funded MIDAS network. In a Public Health Hackathon at PSC, twelve teams from across the U.S. and India were tasked with using Bridges to visualize data in a way that transformed understanding of an issue in public health. A team from Carnegie Mellon University’s Department of Statistics took first place with their SPEW VIEW tool, which maps the historical spread of diseases in the U.S.
- Metagenomics. University of Georgia researchers used Bridges to assemble 378 billion base pairs of bacterial DNA from the intestines of healthy patients and those with diabetes. Such “metagenome assembly” doesn’t even try to chemically separate the DNA from many microbial species in a sample. Instead, the scientists sequence short DNA fragments of all the species at once, using computation to sort out the different microbes’ sequences as they assemble them (a toy sketch of this sorting idea appears after this list). This massive task leveraged Bridges’ Intel Omni-Path internal connections—the first such installation in the world—linking 20 computational nodes to finish the calculation in 16 hours. The team is now using Bridges to test a new statistical method on the sequence data to identify critical differences in gut microbes associated with diabetes.
- Vaccine Effect Modeling. The PSC Public Health Application Group used Bridges to model the possible benefits of flu vaccine choice in Washington, D.C., Allegheny County, Pa., and Salt Lake City. Researchers used “agent-based modeling,” in which every person in an area is represented by a realistic virtual human in the simulation (a miniature sketch of the approach appears after this list). Initial results suggest that offering a choice of vaccine would be more cost-effective than alternatives such as no choice of vaccine, choice offered to children only, and choice offered to adults only.
- De Novo Sequencing. A Marshall University (West Virginia) group assembled the genetic sequences of two species, the Narcissus flycatcher and the critically endangered Sumatran rhinoceros. They used a de novo assembly method, which relies on brute computational force to piece the DNA fragments together in order (a toy greedy version appears after this list). Using Bridges’ 3TB large-memory nodes, the researchers assembled the 1-billion-base flycatcher genome in 6.6 hours—almost five times faster than was possible with other available resources. The rhino genome assembly (3 billion bases) took 11 hours.
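To make the metagenomics item concrete, here is a toy Python sketch of k-mer profiling, one computational idea behind sorting mixed-species reads. It is purely illustrative and is not the Georgia team’s actual pipeline, which the article does not name; the example reads are invented.

```python
# Toy k-mer profiling: reads with similar k-mer content are candidates for
# co-assembly, a simplified view of how mixed-species reads get sorted.
from collections import Counter

def kmer_profile(read, k=4):
    """Count the k-length substrings (k-mers) in a DNA read."""
    return Counter(read[i:i + k] for i in range(len(read) - k + 1))

def shared_kmers(read_a, read_b, k=4):
    """Number of k-mers two reads share; high counts hint at one organism."""
    a, b = kmer_profile(read_a, k), kmer_profile(read_b, k)
    return sum((a & b).values())

# Invented reads: the first two overlap heavily, the third does not.
r1 = "ACGTACGTGGAACCTT"
r2 = "TACGTGGAACCTTAGC"
r3 = "TTTTGGGGCCCCAAAA"
print(shared_kmers(r1, r2))  # relatively high -> likely same organism
print(shared_kmers(r1, r3))  # near zero -> likely different organisms
```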
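The vaccine study’s agent-based modeling can likewise be sketched in miniature: every person is an explicit object whose infection state changes through contact. The population size, contact count and probabilities below are arbitrary illustrations, not the PSC group’s parameters.

```python
# Toy agent-based epidemic sketch: each person is an explicit agent.
# All parameters here are arbitrary, for illustration only.
import random

random.seed(1)

class Person:
    def __init__(self):
        self.state = "S"  # S=susceptible, I=infected, R=recovered

people = [Person() for _ in range(1000)]
people[0].state = "I"  # seed one infection

for day in range(60):
    infected = [p for p in people if p.state == "I"]
    for p in infected:
        # Each infected agent meets a few random others per day.
        for other in random.sample(people, 5):
            if other.state == "S" and random.random() < 0.05:
                other.state = "I"
        if random.random() < 0.1:  # chance of recovery each day
            p.state = "R"
    print(day, sum(p.state == "I" for p in people))
```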
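And for the de novo item, here is a toy greedy overlap-merge in the spirit of that “brute computational force”: repeatedly merge the pair of fragments with the longest suffix/prefix overlap. Production assemblers (and the memory appetite that drives them onto 3TB nodes) use far more sophisticated graph methods, and the fragments below are invented.

```python
# Toy greedy de novo assembly over invented fragments.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def greedy_assemble(frags):
    frags = list(frags)
    while len(frags) > 1:
        # Find the pair with the best overlap and merge it.
        n, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(frags)
                       for j, b in enumerate(frags) if i != j),
                      key=lambda t: t[0])
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

print(greedy_assemble(["ACGTAC", "GTACGG", "CGGTTA"]))  # -> ACGTACGGTTA
```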
XSEDE researchers should take note that “Bridges Regular” allocations run on RSM nodes, while “Bridges Large” allocations run on LSM and ESM nodes. Guidelines for assessing your project’s suitability for Bridges are available online. Not surprisingly, researchers should explain how their project will benefit from Bridges’ unique flexibility and converged capabilities. Here’s a snapshot of the Bridges deployment plan:
Phase 1 (completed, supplies 0.8946 Pf/s and 144 TB RAM):
- 752 RSM nodes: HPE Apollo 2000s, with 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU), 128GB RAM and 4TB on-node storage
- 16 RSM GPU nodes: HPE Apollo 2000s, each with 2 NVIDIA K80 GPUs, 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU) and 128GB RAM
- 8 LSM nodes: HPE ProLiant DL580s, each with 4 Intel Xeon E7-8860 v3 CPUs (16 cores per CPU) and 3TB RAM
- 2 ESM nodes: HPE Integrity Superdome Xs, each with 16 Intel Xeon E7-8880 v3 CPUs (18 cores per CPU) and 12TB RAM
- Database, web server, data transfer, and login nodes: HPE ProLiant DL360s and HPE ProLiant DL380s, each with 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU) and 128GB RAM. Database nodes have SSDs or additional HDDs.
Phase 2 (expected late summer 2016, additional 0.4072 Pf/s and 130 TB RAM):
- 32 additional RSM GPU nodes: HPE Apollo 2000s, each with 2 Intel Xeon v4 CPUs, 2 NVIDIA next-generation GPUs and 128GB RAM
- 34 additional LSM nodes: HPE ProLiant DL580s, each with 4 Intel Xeon v4 CPUs and 3TB RAM
- 2 additional ESM nodes: HPE Integrity Superdome Xs, each with 16 Intel Xeon v4 CPUs and 12TB RAM
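As a quick sanity check, the published per-phase figures do sum to the headline numbers quoted earlier:

```python
# The per-phase figures above sum to the quoted system totals.
phase1_pf, phase1_ram_tb = 0.8946, 144
phase2_pf, phase2_ram_tb = 0.4072, 130

print(phase1_pf + phase2_pf)          # 1.3018 Pf/s, the quoted "1.3 Pf/s"
print(phase1_ram_tb + phase2_ram_tb)  # 274 TB RAM, as quoted
```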