Keeping Big Data Cool at SDSC

June 29, 2016

June 29 — When most people think of a supercomputer center, they may picture one massive computer performing a single task. Inside the data center at the San Diego Supercomputer Center (SDSC) at the University of California San Diego, however, several large supercomputer systems each perform multiple tasks simultaneously across a wide range of science domains, from genome sequencing that helps pave the way to personalized medical treatment, to new drug designs for conditions such as Parkinson’s and Alzheimer’s disease, to detailed fluid dynamics simulations for hypersonic aircraft.

Keeping SDSC’s main data center cool enough so that its Comet and Gordon supercomputers, along with several smaller clusters, don’t overheat is a complex yet mission-critical task, according to Todor Milkov, SDSC’s senior project engineer. A computing architecture such as the one found in Comet, SDSC’s newest supercomputer, requires one megawatt of power to operate. Using that much electricity generates a tremendous amount of heat, so SDSC, with the help of outside experts, developed three cooling system prototypes and conducted research to determine the most efficient approach.
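
To put that figure in perspective, essentially all of the electrical power drawn by a system like Comet ends up as heat that the air handlers must remove. The following back-of-the-envelope calculation is only a sketch of the scale of airflow involved; the supply-to-return temperature rise and air properties are assumed values, not figures from the SDSC design.

    # Back-of-the-envelope cooling airflow for a 1 MW IT load.
    # The temperature rise and air properties are assumed values,
    # not figures from the SDSC design.

    P_WATTS = 1_000_000   # IT load, roughly the 1 MW cited for Comet
    RHO_AIR = 1.2         # air density, kg/m^3 (assumed)
    CP_AIR = 1005.0       # specific heat of air, J/(kg*K)
    DELTA_T = 12.0        # assumed cold-aisle to hot-aisle temperature rise, K

    # Sensible heat balance: P = rho * cp * Q * dT  ->  Q = P / (rho * cp * dT)
    q_m3_per_s = P_WATTS / (RHO_AIR * CP_AIR * DELTA_T)
    q_cfm = q_m3_per_s * 2118.88   # 1 m^3/s is about 2118.88 CFM

    print(f"Required airflow: {q_m3_per_s:.0f} m^3/s (~{q_cfm:,.0f} CFM)")

Under these assumptions the air handlers must move on the order of 70 cubic meters of air per second, which is why the air distribution design matters so much.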

Each prototype system was designed using vendor-specific technology to control five air handlers, providing a common baseline for evaluating performance. One of the prototypes used wireless temperature sensors that read the temperature of the hot and cold aisles every three minutes, an interval chosen to extend battery life.

Many data centers use a standard hot aisle/cold aisle design. This design involves lining up server racks in alternating rows, with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows that the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.

Containment systems can help isolate hot aisles and cold aisles from each other and prevent hot and cold air from mixing. Such systems started out as physical barriers that simply separated the hot and cold aisles with vinyl plastic sheeting or Plexiglas covers. Modern containment systems offer plenums and other commercial options that combine containment with variable frequency drives (VFDs) to keep the hot and cold air streams from mixing.

At SDSC, however, the entire area under the raised floor serves as the supply plenum, and the entire area above the ceiling serves as the return plenum. Cold aisles use perforated floor tiles with specifically designed hole sizes to control the air flow volume from the space below the floor, while the hot aisles use ceiling grates that allow heated air to enter the space above the ceiling.
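
The airflow a perforated tile delivers is commonly approximated with an orifice-flow relation tying the tile’s open area and the underfloor static pressure to the volume of air that passes through. The sketch below uses illustrative numbers only; the tile size, open fraction, discharge coefficient, and pressure are assumptions, not SDSC’s actual tile specifications.

    import math

    # Rough orifice-style estimate of airflow through one perforated tile.
    # All numbers are illustrative assumptions, not SDSC's tile specifications.

    TILE_AREA = 0.61 * 0.61      # standard 2 ft x 2 ft tile, m^2
    OPEN_FRACTION = 0.25         # assumed 25% perforated open area
    DISCHARGE_COEFF = 0.65       # typical orifice discharge coefficient (assumed)
    DELTA_P = 12.0               # assumed underfloor static pressure, Pa
    RHO_AIR = 1.2                # air density, kg/m^3

    open_area = TILE_AREA * OPEN_FRACTION
    velocity = math.sqrt(2.0 * DELTA_P / RHO_AIR)      # air speed through the holes, m/s
    q_m3_per_s = DISCHARGE_COEFF * open_area * velocity
    print(f"~{q_m3_per_s * 2118.88:.0f} CFM per tile at {DELTA_P:.0f} Pa underfloor pressure")

The practical point is that tile open area and underfloor pressure together set how much cold air each aisle receives, which is why SDSC specifies the hole sizes deliberately.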

Controlling the air flow from all air handlers discharging into one common plenum presents a difficult problem, especially since these spaces also contain obstructions such as pipes and conduits. Moreover, not all of the compute clusters run at full capacity at any given time, and system loads also change regularly as research projects start up or stop. These constantly changing factors cause the amount of heat dissipated from the supercomputer systems to fluctuate from minute to minute. The data center cooling system has to adjust quickly to accommodate these fluctuations in temperature.
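
One common way to close such a loop is to drive the air handlers’ fan speeds from the aisle temperatures at a short, fixed sampling interval. The sketch below shows a simple proportional-integral (PI) control step of that kind; the setpoint, gains, base speed, and sampling interval are assumptions for illustration, not the control logic actually deployed at SDSC.

    # Minimal sketch of a temperature-driven fan-speed loop, assuming a
    # proportional-integral (PI) strategy.  Illustrative only; not the
    # control logic implemented on the SDSC system.

    COLD_AISLE_SETPOINT = 22.0   # degrees C (assumed)
    KP, KI = 8.0, 0.5            # assumed controller gains
    SAMPLE_SECONDS = 15          # assumed sampling interval, seconds

    def next_fan_speed(measured_temp_c, integral, speed_min=30.0, speed_max=100.0):
        """Return (fan speed %, updated error integral) for one control step."""
        error = measured_temp_c - COLD_AISLE_SETPOINT
        integral += error * SAMPLE_SECONDS
        speed = 50.0 + KP * error + KI * integral       # 50% as an assumed base speed
        speed = max(speed_min, min(speed_max, speed))   # clamp to the VFD's range
        return speed, integral

    # Example: the cold aisle warming as a cluster's load ramps up.
    integral = 0.0
    for temp in (22.0, 22.4, 23.1, 23.6):
        speed, integral = next_fan_speed(temp, integral)
        print(f"{temp:.1f} C -> fan speed {speed:.0f}%")

The key design choice in any such loop is how often fresh temperature readings arrive, which is exactly the issue SDSC ran into during prototyping.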

“We learned a lot during the prototype and research phase of the cooling system design,” said Milkov. “We started by collecting a lot of data on how air flowed through the data center. We found that three minutes between temperature readings was too long an interval to keep the data center within the desired temperature ranges. Because of the longer interval, we used more electricity bringing the data center back to its temperature set points than we would have if we had taken readings at shorter intervals and adjusted the cooling system sooner.”
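
A toy model illustrates the point: if the controller only learns the aisle temperature every few minutes, the room drifts well past the setpoint before anything can respond, and is then over-cooled on the way back. The model and numbers below are assumptions chosen to show the effect, not measurements from the SDSC data center.

    # Toy bang-bang model of why a long sensor interval is too coarse: the
    # controller can only react when a new reading arrives, so the room
    # swings farther past the setpoint (and is over-cooled below it) as the
    # interval grows.  Model and numbers are assumptions, not SDSC data.

    SETPOINT = 22.0      # C
    HEAT_RATE = 0.02     # C/s temperature rise from the IT load (assumed)
    COOL_RATE = 0.05     # C/s removed while cooling runs (assumed)

    def temperature_swing(sensor_interval_s, sim_seconds=3600):
        temp, cooling_on = SETPOINT, False
        lo, hi = temp, temp
        for t in range(sim_seconds):
            if t % sensor_interval_s == 0:           # a new reading arrives
                cooling_on = temp > SETPOINT
            temp += HEAT_RATE - (COOL_RATE if cooling_on else 0.0)
            lo, hi = min(lo, temp), max(hi, temp)
        return lo, hi

    for interval in (180, 60, 15):
        lo, hi = temperature_swing(interval)
        print(f"{interval:>3}s readings: room swings between {lo:.1f} C and {hi:.1f} C")

Under these assumptions the three-minute interval lets the room wander several degrees around the setpoint, while shorter intervals hold it in a much tighter band.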

Realizing that a different approach was needed, Milkov put together a vendor evaluation process for an updated data center management system with the objective of reducing energy use while increasing the level of control capability available to the SDSC operations staff.

After extensive research, Milkov selected three companies for prototype installations. At the conclusion of a detailed evaluation, systems integration company Earth Base One (EBO) Corporation and a SNAP PAC-based control system were chosen for their extensive control capabilities and energy savings.

Milkov and Michael Hyde, EBO’s president, approached the project with the same vision. “Rather than adapting an off-the-shelf data center management system to SDSC, we designed a system tailor-made for SDSC’s unique challenges,” said Hyde.

Opto 22, which develops and manufactures hardware and software products for applications in industrial automation, remote monitoring, and data acquisition, was chosen as the primary controls manufacturer. “The Opto 22 hardware and software not only won the competition for control and energy savings, but was also the least expensive vendor solution,” said Hyde. “The software’s excellent historical data collection and trending abilities allowed SDSC engineers to continue improving the system based on real data.”
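
As a rough illustration of the kind of historical data collection and trending described here, the sketch below appends timestamped aisle temperatures to a CSV file and computes a rolling average to expose drift. The file name, fields, and window size are hypothetical; this is not Opto 22’s actual data-logging format.

    import csv
    from collections import deque
    from datetime import datetime, timezone

    LOG_FILE = "aisle_temps.csv"     # hypothetical log file
    WINDOW = 20                      # rolling-average window (assumed)

    def log_reading(aisle, temp_c):
        """Append one timestamped reading to the historical log."""
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), aisle, temp_c])

    def rolling_trend(readings, window=WINDOW):
        """Yield (reading, rolling mean) pairs for trend review."""
        recent = deque(maxlen=window)
        for value in readings:
            recent.append(value)
            yield value, sum(recent) / len(recent)

    # Example: trend over a handful of readings.
    for value, avg in rolling_trend([22.1, 22.3, 22.2, 22.8, 23.1]):
        print(f"reading {value:.1f} C, rolling mean {avg:.2f} C")

Whatever the tooling, the underlying idea is the same: keep a long history of readings so that engineers can see slow drift and tune the system from real data rather than snapshots.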

“We appreciated the outstanding technical support SDSC received from Opto 22 during our design and prototype phase,” said Milkov. “When you’re trying to protect millions of dollars’ worth of research, you need a control system you can rely on.”

The full case study is available here.

About SDSC

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s Comet joins the Center’s data-intensive Gordon cluster; both are part of the National Science Foundation’s XSEDE (eXtreme Science and Engineering Discovery Environment) program, the most advanced collection of integrated digital resources and services in the world.

About Opto 22

Opto 22 develops and manufactures hardware and software products for applications in industrial automation, remote monitoring, and data acquisition. Using standard, commercially available Internet, networking, and computer technologies, Opto 22’s input/output and control systems allow customers to monitor, control, and acquire data from all of the mechanical, electrical, and electronic assets that are key to their business operations. More information is at www.opto22.com.


Source: SDSC
