TGAC Installs largest SGI UV 300 Supercomputer for Life Sciences

By John Russell

May 11, 2016

Two weeks ago, The Genome Analysis Centre (TGAC), based in the U.K., turned on the first of two new SGI UV 300 computers. Next week, or thereabouts, TGAC will bring a second, identical system online. Combined with its existing SGI UV 2000, that will give TGAC the largest SGI system dedicated to life sciences in the world. The upgrade will allow TGAC to significantly shorten the time required to assemble wheat genomes, a core activity in TGAC's efforts to enhance worldwide food security.

The upgrade is part of TGAC’s central mission to use advanced HPC and bioinformatics to seek solutions to the world food productivity challenge. TGAC’s specialty is wheat, which is a major base component of the world’s food supply.

It turns out the wheat genome is notoriously difficult to work with. For starters, it contains roughly 17 gigabases (nucleotide pairs), roughly five times the size of the human genome. About 80 percent of the wheat genome consists of ‘repeats’ – sections of DNA sequence that are especially difficult to assemble and confound most sequencing algorithms. Lastly, the wheat genome is hexaploid, meaning it has six sets of chromosomes versus two for the human genome – the thinking here is that modern wheat is a kind of combination of three ancestral strains.
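To see why repeats trip up assemblers, consider a toy sketch (the sequences here are invented for illustration, not real wheat DNA): any k-mer that falls entirely inside a repeated element occurs in multiple places in the genome, so an assembler cannot tell which copy a short read came from.

```python
from collections import Counter

def kmers(seq, k):
    """Slide a window of length k across the sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# A toy genome with a repeated element flanking unique sequence.
repeat = "ACGTACGT"
genome = "TTGCA" + repeat + "GGCCT" + repeat + "AATTC"

counts = Counter(kmers(genome, 5))
# Every 5-mer drawn entirely from the repeat occurs more than once,
# so its placement in the assembly is ambiguous.
ambiguous = {km: n for km, n in counts.items() if n > 1}
print(sorted(ambiguous))
```

With 80 percent of a 17-gigabase genome made of such repeats, the fraction of ambiguous k-mers is enormous, which is one reason wheat assembly demands so much memory and compute.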

All boiled down, wheat is tough to deal with from a sequence assembly perspective, and when TGAC helped produce the first draft of the complete wheat genome a year or so ago, it was heralded as a major achievement.

Unfortunately, the world’s wheat yields have been declining for a variety of reasons. “Our work – through genome assembly, alignment, and variant calling – is to help work out what the [gene] functions are and to get that data back to the research community and breeders who hopefully can breed new types of wheat that are less susceptible to heat and pathogens, etc.,” said Tim Stitt, Head of Scientific Computing at TGAC.

Not surprisingly, high performance computing is critical to TGAC’s effort. “Because of the work that we do and its size and scale, we need cutting-edge technologies to be able to handle the work quickly and effectively.” TGAC was, for example, one of the first major genomics centers to deploy the specialized FPGA-based DRAGEN processor to accelerate alignment and variant calling. “Alignment used to take 3-4 days; now it takes 3-4 hours using the FPGA,” said Stitt.

By comparison, genome assembly is more difficult than alignment, especially so-called de novo assembly, which doesn’t use a reference genome as a guide. On TGAC’s earlier systems, it was taking four weeks to assemble a wheat genome. The new UV 300s, which replace a pair of aging UV100s, have been specially configured for assembly work (memory, processor speed) and are expected to shorten the time required to assemble wheat genomes to less than three weeks.
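As a rough illustration of why de novo assembly is so much heavier than alignment, here is a minimal greedy overlap assembler (a toy sketch, not TGAC's actual pipeline): with no reference to anchor reads against, the assembler must compare every read against every other, which scales quadratically and is easily misled by repeats.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left; remaining contigs stay separate
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

contigs = greedy_assemble(["GATTACA", "TACAGGT", "AGGTCCA"])
```

Real assemblers use far more sophisticated graph-based methods, but the all-against-all flavor of the problem is the same, which is why a repeat-heavy 17-gigabase genome demands terabytes of shared memory.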

Here’s a brief overview of the new machines:

  • This new TGAC platform comprises two SGI UV 300 systems totaling 24 terabytes (TB) of shared memory, 512 Intel Xeon Processor E7 v3 cores, and 64TB of Intel P3700 SSDs with NVMe storage technology. Each SGI UV 300 flash memory solution features 12TB of shared memory with 7th-generation SGI NUMAlink ASIC technology, scaling up to 64TB of globally addressable memory as a single system.
  • Paired with flash storage, the combined 24TB SGI UV 300 supercomputers can increase processing speeds of heavy workloads in scientific research by 80 percent. This combination of leading-edge technology allows TGAC researchers to benefit from the faster processing capabilities of the SGI UV 300, providing an extraordinarily powerful platform for genomics analysis.

“Having a shared memory server is an important element,” said Stitt. “A single assembly typically requires 4-6TB of RAM. What’s somewhat unique about this platform compared to the previous ones are the 32 TB of solid state drives (per machine) with NVMe. That should give us a significant boost on the IO side. Our wheat files can be close to 1TB in size and must be read into memory.”

SGI UV300

Besides memory enhancement, the jump to E7 v3 processors was a major step up from the Sandy Bridge processors in the UV100. “We’ve essentially skipped a generation – Ivy Bridge – and gone straight to Haswell. That alone would give us a boost in performance. Really it’s the whole package – memory, processors, storage, etc. The UV100s were purchased five or six years ago and that’s a lifetime in HPC.”

TGAC runs multiple jobs on its SGI computers and is in the process of switching schedulers. Altair’s PBS is used on the old system, but Stitt is transitioning to Slurm, which is already in use on the new UV300 that is online. Both work well, said Stitt. “We’ve evaluated Slurm over the past 6-8 months. It worked very well for what we want to do and it’s free. Really it was a cost decision; it may free up revenue we’d normally spend on licenses and allow us to put it towards more hardware.”
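The article doesn't describe TGAC's job scripts, but as a hedged illustration, a large-memory assembly job on a UV 300-class shared-memory node might be submitted to Slurm with a batch script along these lines (the partition name, memory request, and assembler invocation are all hypothetical):

```bash
#!/bin/bash
#SBATCH --job-name=wheat-asm        # hypothetical job name
#SBATCH --partition=largemem        # assumed partition for the UV 300
#SBATCH --nodes=1                   # the UV 300 appears as one big SMP node
#SBATCH --cpus-per-task=64          # cores for a threaded assembler
#SBATCH --mem=5000G                 # a single assembly needs 4-6TB of RAM
#SBATCH --time=21-00:00:00          # under three weeks, per the article

# Hypothetical assembler invocation; the article does not name the tool.
./assemble --threads "$SLURM_CPUS_PER_TASK" \
    --in /ssd/wheat_reads.fastq --out contigs.fa
```

One appeal of a shared-memory system here is that the whole 4-6TB assembly fits in a single node's address space, so the scheduler only has to place one fat job rather than coordinate a distributed one.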

Stitt notes the new UV300 solutions are considerably denser than the older machines. “The UV300 comes in 5U of rack space; the UV100, with effectively less memory and fewer cores, probably took over a rack of space.” He’s expecting greater energy efficiency as a result.

Researchers are still in the early stages of using the first UV300, said Stitt, who like HPC managers throughout life sciences must serve a diverse research constituency, many of whom aren’t comfortable with command-line tools. “You need to know a little bit about Linux to log into our HPC systems. A lot of our users, particularly our external users, don’t have backgrounds in programming and Linux and command lines and things,” Stitt said.

To make things easier, TGAC also allows users to use tools like Galaxy as a front end to the systems. “These researchers can access our systems through the Galaxy interface where they can set up workflows and Galaxy will launch them on the back end. Actually, we have a whole research team that works on data integration and the equivalent of scientific portals to help here.”

Along the lines of reaching the maximum number of researchers, TGAC is in the midst of a project to forge closer ties with iPlant, a U.S.-based effort also tackling worldwide food production and agriculture. A few key iPlant organization and mission points are bulleted here:

  • Established by the U.S. National Science Foundation (NSF) in 2008 to develop cyberinfrastructure for life sciences research and democratize access to U.S. supercomputing capabilities.
  • A virtual organization led by The University of Arizona, the Texas Advanced Computing Center, Cold Spring Harbor Laboratory, and the University of North Carolina at Wilmington.
  • Developing the national cyberinfrastructure for data-intensive biology driven by high-throughput sequencing, phenotypic and environmental datasets.
  • Providing powerful extensible platforms for data storage, bioinformatics, image analyses, cloud services, APIs, and more.
  • Making broadly applicable cyberinfrastructure resources available across the life science disciplines (e.g., plants, animals, and microbes).

“We won an award recently to build an iPlant U.K. here at TGAC. We’re working with iPlant folks to put together an iPlant infrastructure and at some point hopefully federate the two sites together. It’s a big project that we are halfway through,” said Stitt. The goal is to facilitate and speed dissemination of TGAC results by having an open system for sharing data.

Stitt is also working to make better use of the DRAGEN FPGA system. “It’s working brilliantly and we certainly haven’t exceeded our limits on it. We are expecting to generate more data coming from new lines of wheat, and our interest lies in streamlining the two technologies – the DRAGEN chip with the SGI system.” That’s part of TGAC’s IO challenge generally. “We have raw data coming off the sequencing machines that we need to get onto the SGI platform, particularly the SSD drives. That data is used to generate an assembly, which we’ll store on our file system, and we need to pipe that into our DRAGEN FPGA [which sits on another system].”
