Nine UW Projects Awarded Summer Use of Supercomputer in Cheyenne

June 29, 2016

June 29 — Nine projects, a number of which have applications to atmospheric science issues, were recently chosen to receive computational time and storage space on the supercomputer in Cheyenne.

University of Wyoming faculty members and, in one case, a graduate student, will head projects that will use the NCAR-Wyoming Supercomputing Center (NWSC). Each project was critically reviewed by an external panel of experts and evaluated on experimental design, computational effectiveness, efficiency of resource use, and broader impacts, such as whether the project involves both UW and NCAR researchers, strengthens UW’s research capacity, enhances UW’s computational programs, or involves research in a new or emerging field.

“The Wyoming-NCAR Allocations Panel evaluated a record-high nine requests,” says Bryan Shader, UW’s special assistant to the vice president for research and economic development, and professor of mathematics. “The projects were granted allocations totaling 42.6 million core hours of computing time on Yellowstone and will enable some incredible science on issues of importance to Wyoming, the U.S. and the world. Given that Wyoming’s share of the NWSC is 75 million core hours, these allocations and the more than 40 million (core hours) allocated in February show more than full utilization of the resource.”

Twenty-five UW-led projects used Yellowstone (the name of the current supercomputer) in 2015, making Wyoming the top university in total allocations, users and usage among the more than 150 universities that use the NWSC.

Since the supercomputer came online in October 2012, allocations have been made to 65 UW research projects, including these latest nine, which commence July 1.

The newest projects, with a brief description and principal investigators, are as follows:

Maohong Fan, a UW professor of petroleum engineering, heads a project titled “Application of Density Functional Theory in CO2 Capture and Conversion Research.” The project, partially funded by the Department of Energy, seeks to design promising catalysts for capturing and converting carbon dioxide. Collaborators include Wenyong Wang, a UW professor of physics and astronomy; Ted Russell, the Howard T. Tellepsen Chair and Regents’ Professor in the School of Civil and Environmental Engineering at Georgia Tech; and Hongtao Yu, professor and chair of the Department of Chemistry and Biochemistry at Jackson State University.

Bart Geerts, a UW professor of atmospheric science, heads a project titled “Regional Climate Change Assessment in the Interior Western USA Using a Dynamical Downscaling Method with CCSM Bias Corrections: Focus on Precipitation and Snowpack.” The project focuses on better understanding how the distribution of precipitation, snowpack and streamflow in the headwaters region of Wyoming is expected to change over the next 30-40 years. A better understanding of long-term changes in Wyoming watersheds is of great interest for the state’s water obligations and water development opportunities, as well as to agricultural and forestry interests in the state and to downstream stakeholders.

Collaborators include UW postdoctoral researcher Yonggang Wang, UW doctoral student Xiaoqing Jing and Changhai Liu, a scientist at NCAR’s Research Applications Laboratory. The project is partially supported by the Wyoming Water Development Commission.

Zachary Lebo, a UW assistant professor of atmospheric science, leads a project titled “Investigating Forecast Performance in Wyoming Using a High-Resolution Numerical Weather Prediction Model.” Lebo is interested in better understanding the factors that cause weather forecast errors across Wyoming, and in using that understanding to create better prediction tools for ground blizzards. His project will lay the groundwork for a real-time Wyoming forecasting operation, and aspects of the project and modeling will be incorporated into UW’s “Introduction to Atmospheric Science” undergraduate course.

Xiaohong Liu, a UW professor of atmospheric science and the Wyoming Excellence Chair in Climate, will lead two projects. The first, titled “Quantifying the Impacts of Absorbing Aerosols on Rocky Mountain Regional Climate,” seeks to better understand the regional climate impacts of light-absorbing aerosols, such as dust and particles from fires or pollution deposited on top of snow.

Collaborators include Louisa Emmons, Simone Tilmes, Andrew Gettelman and Mary Barth of the National Center for Atmospheric Research (NCAR); and Chun Zhao, Yun Qian and Ruby Leung of the Pacific Northwest National Laboratory. The project is partially funded by the College of Engineering and Applied Science’s Tier-1 Engineering Initiative.

The second project, titled “Modeling the Impacts of Biomass Burning Aerosols on Marine Stratocumulus Clouds Using a Hierarchical Modeling System,” will study the effects of particulates from wildfires on cloud formation.

Collaborators include NCAR’s Emmons, Tilmes, Barth and Gettelman; Yuhang Wang, a professor in the Department of Earth and Atmospheric Sciences at the Georgia Institute of Technology; and Yun Qian of the Pacific Northwest National Laboratory. The project is partially supported by a National Science Foundation (NSF)/Department of Energy grant.

Subhashis Mallick, a UW professor of geology and geophysics, will lead a project titled “Anisotropic Reverse-Time Migration and Full-Waveform Inversion of Single- and Multicomponent Seismic Data, and Joint Inversion of Single-Component Seismic and Electromagnetic Data.” The project will develop the key analytic tools needed to use seismic studies to assess subsurface reservoirs as carbon dioxide storage and sequestration sites, including their storage capacity and optimum resource recovery.

Scott Miller, a UW professor in the Department of Ecosystem Science and Management, heads a project titled “Integrating Dynamically Downscaled Climate Data with Hydrologic Models.” The project will couple atmospheric and hydrologic models to study the impacts of different climate scenarios on water resources and flow regimes in the Crow Creek watershed in southeast Wyoming, one of the main watersheds supplying water to Cheyenne. The project is supported by an NSF grant called Water in a Changing West.

Fred Ogden, a UW professor in the Department of Civil and Architectural Engineering, leads a project titled “ADHydro Model Development,” which will further develop and test a large-scale hydrologic model that incorporates the groundwater-surface water interactions important in managing reservoirs, diversions and other infrastructure. Both the National Oceanic and Atmospheric Administration and the U.S. National Water Center are considering incorporating Ogden’s ADHydro model into their own models.

Wei Wang, a UW graduate student majoring in geology and geophysics, will undertake a project titled “Near-Surface Adjoint Tomography Based on the Discontinuous Galerkin Method.” The goal is to image and characterize a portion of the Earth’s critical zone, the layer between bedrock and treetops. In particular, Wang will use near-surface seismic data to understand how rocks and soil weather. The research will focus on a site near Blair-Wallis in southeastern Wyoming. The project is partially supported by the NSF grant Water in a Changing West.

By the numbers

The most recent recommended allocations total 42.6 million core hours, 270 terabytes of archival storage, and 47,000 hours on data analysis and visualization systems, Shader says. To put these numbers in perspective: in simplest terms, Yellowstone can be thought of as 72,576 personal computers cleverly interconnected to perform as one computer. The computational time allocated is equivalent to using the entire supercomputer 24 hours a day for 24.5 days. The 270 terabytes of storage would be enough to hold the entire printed collection of the U.S. Library of Congress more than 20 times over.

Yellowstone consists of about 70,000 processors, also known as cores. An allocation of one core hour allows a project to run one of these processors for one hour, or 1,000 of them for 1/1,000th of an hour.
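For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted in this article, that recovers the 24.5-day figure from the allocation size and core count:

```python
# Back-of-the-envelope check using only figures quoted in the article.
CORES = 72_576                      # Yellowstone viewed as interconnected "personal computers"
ALLOCATED_CORE_HOURS = 42_600_000   # total core hours awarded in this round

# One core hour runs one core for one hour, so the whole machine
# consumes CORES core hours per wall-clock hour of operation.
machine_hours = ALLOCATED_CORE_HOURS / CORES
machine_days = machine_hours / 24

print(f"{machine_hours:,.0f} hours of the full machine")  # ~587 hours
print(f"{machine_days:.1f} days, 24 hours a day")         # ~24.5 days, matching the article
```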

The successor to the Yellowstone cluster, to be called Cheyenne, is scheduled to come online in early 2017, and Yellowstone is expected to be retired in late 2017. In fall 2017, Wyoming researchers will be able to apply for early use of Cheyenne for ambitious projects that take advantage of its increased capabilities.

In late 2016, Wyoming researchers will be able to apply for regular allocations on Cheyenne. Wyoming’s share of Cheyenne will be around 160 million core hours per year. The new high-performance computer will be a 5.34-petaflop system, meaning it can carry out 5.34 quadrillion calculations per second. It will be capable of more than 2.5 times the amount of scientific computing performed by Yellowstone.
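The petaflop figure can be unpacked the same way. The short sketch below, again using only the numbers quoted here, converts 5.34 petaflops to raw calculations per second and shows what the "more than 2.5 times Yellowstone" claim implies; the Yellowstone figure is an inference from the article's numbers, not a published specification:

```python
PETA = 10 ** 15          # "peta" = one quadrillion
CHEYENNE_PFLOPS = 5.34   # quoted rate for the new system

# 5.34 petaflops expressed as calculations per second.
print(f"{CHEYENNE_PFLOPS * PETA:.3g} calculations per second")  # 5.34e+15

# If Cheyenne performs >2.5x Yellowstone's scientific computing, the
# article's figures imply Yellowstone's equivalent rate is below
# 5.34 / 2.5 ~= 2.14 petaflops (an inference, not a spec-sheet value).
print(f"Implied Yellowstone ceiling: {CHEYENNE_PFLOPS / 2.5:.2f} petaflops")
```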

The NWSC is the result of a partnership among the University Corporation for Atmospheric Research (UCAR), the operating entity for NCAR; UW; the state of Wyoming; Cheyenne LEADS; the Wyoming Business Council; and Black Hills Energy. The NWSC is operated by NCAR under sponsorship of the NSF.

The NWSC contains one of the world’s most powerful supercomputers dedicated to improving scientific understanding of climate change, severe weather, air quality and other vital atmospheric science and geoscience topics. The center also houses a premier data storage and archival facility that holds historical climate records and other information.


Source: University of Wyoming
