Jan. 16, 2019 — The Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) has issued a call for proposals.
ALCC Mission
The mission of the ASCR Leadership Computing Challenge (ALCC) is to provide an allocation program for projects of interest to the Department of Energy (DOE), with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission, and to broaden the community of researchers capable of using leadership computing resources.
The DOE mission is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions. See https://www.energy.gov/mission for more details on the Department’s mission objectives.
How to Apply
The call for proposals for the 2019-2020 ALCC allocation process is now open. The call will close at 11:59 p.m. (Eastern Time) on Wednesday, Feb. 13, 2019. Applicants submit proposals through an online system at https://apps.orau.gov/ALCC/Account/Login. Application requirements are described below; a PDF of these instructions (189 KB) is also available for download.
For any questions, please contact the ALCC program manager, Betsy Riley, or the assistant program manager, Christopher Miller, via [email protected].
Allocation Performance Period
The 2019-2020 ALCC performance period is July 1, 2019 to June 30, 2020.
System Descriptions
Allocations are provided in units of node-hours × 10^6. Because the node-hour is no longer a standard unit across architectures, the descriptions below define what is considered a node for this allocation period, and a table below gives an indication of the relative processing power of the “nodes.” The following high-performance computing resources are available during the ALCC 2019-2020 performance period:
- OLCF Summit: OLCF will have 6.0M node-hours available on Summit, an IBM Power System AC922 system. Each of the 4,608 nodes contains two 22-core IBM POWER9 processors and six NVIDIA Tesla V100 graphics processing unit (GPU) accelerators. Each POWER9 core supports up to four hardware threads. Each POWER9 processor is connected to its GPUs via dual NVLink bricks, each capable of a 25 GB/s transfer rate in each direction. The memory per node is 512 GB of DDR4, combined with 96 GB of High Bandwidth Memory (HBM2) for use by the accelerators. The system uses a dual-rail Mellanox EDR 100 Gb/s InfiniBand interconnect.
For applications requesting time on Summit, projects capable of using GPUs, or of developing GPU capabilities, are strongly encouraged.
For more details about Summit see:
https://www.olcf.ornl.gov/for-users/system-user-guides/summit
https://www.olcf.ornl.gov/for-users/system-user-guides/summit/system-overview/
- ALCF Theta: ALCF will have 5.9M node-hours available on Theta, a Cray XC40 system with Intel Xeon Phi (a.k.a. Knights Landing, or KNL) processors. Each of the 4,392 nodes has an Intel Xeon Phi Processor 7250 (KNL) with 64 cores. Each of the 64 cores supports four hardware threads, for a total of 256 threads per node. The memory per node is 192 GB of DDR4, combined with 16 GB of high-bandwidth multi-channel DRAM (MCDRAM). The system uses the Cray Aries high-speed interconnect with a “dragonfly” topology.
For more details about Theta see:
https://www.alcf.anl.gov/user-guides/computational-systems#theta-(xc40)
- NERSC Cori: NERSC will have 4.5M node-hours available on the KNL partition of Cori, a Cray XC40 system with Intel Xeon Phi (a.k.a. Knights Landing, or KNL) processors. Each of the 9,668 nodes has an Intel Xeon Phi Processor 7250 (KNL) with 68 cores. Each of the 68 cores supports four hardware threads, for a total of 272 threads per node. The memory per node is 96 GB of DDR4, combined with 16 GB of high-bandwidth MCDRAM. The system uses the Cray Aries high-speed interconnect with a “dragonfly” topology.
For more details on the Cori KNL nodes see:
http://www.nersc.gov/users/computational-systems/cori/configuration/cori-intel-xeon-phi-nodes/
Please note that due to fundamental differences between the system architectures, the FLOPs achievable per node-hour are NOT equivalent across the three systems. The following table, which is based on the Top500 ranking of the systems, provides a rough comparison of the computational power per node:
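As a quick illustration of these units, the sketch below (ours, not part of the call) tabulates the per-node figures quoted above and expands each facility’s pool from node-hours × 10^6 into raw node-hours. The KNL thread counts are stated in the call text; the four-threads-per-core figure for Summit’s POWER9 is our assumption based on that processor’s SMT4 mode.

```python
# Per-node figures from the system descriptions above, plus the size of
# each facility's 2019-2020 ALCC pool expanded into raw node-hours.
SYSTEMS = {
    # name:         (cores/node, threads/core, pool in node-hours * 10^6)
    "OLCF Summit": (2 * 22, 4, 6.0),  # 4 threads/core (SMT4) is our assumption
    "ALCF Theta":  (64,     4, 5.9),  # KNL thread counts per the call text
    "NERSC Cori":  (68,     4, 4.5),
}

for name, (cores, smt, pool_m) in SYSTEMS.items():
    print(f"{name:12s}: {cores:3d} cores/node, {cores * smt:3d} threads/node, "
          f"{pool_m * 1e6:,.0f} node-hours available")
```

As the note above stresses, these node-hours are not interchangeable: a Summit node, with its six V100 GPUs, delivers far more FLOPs per hour than a single KNL node.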
Proposers should also use the respective webpages below to familiarize themselves with the data management resources available to users of the ASCR facilities:
Oak Ridge Leadership Computing Facility (OLCF)
https://www.olcf.ornl.gov/for-users/system-user-guides/summit/file-systems/
Argonne Leadership Computing Facility (ALCF)
https://www.alcf.anl.gov/user-guides/xc40-file-systems
https://www.alcf.anl.gov/user-guides/data-storage-file-systems
National Energy Research Scientific Computing Center (NERSC)
http://www.nersc.gov/users/computational-systems/cori/file-storage-and-i-o/
Eligibility
- The proposed research should be in areas related to the DOE mission.
- The proposed research results must be open and cannot contain proprietary information unless the project meets the criteria for the Industrial Partnership User Agreement. (Interested parties from industry should contact the facilities’ “Contacts for Industry” for more information.)
Review Process
ALCC proposals undergo scientific merit review through a peer review process. Proposals are evaluated against the following criteria, listed in descending order of importance, as codified in the Code of Federal Regulations (10 CFR 605.10):
- Scientific and/or technical merit of the project (for ALCC, this includes an evaluation of the project’s relevance and importance to the DOE mission)
- Appropriateness of the proposed method or approach
- Competency of applicant’s personnel and adequacy of proposed resources
- Reasonableness and appropriateness of the proposed allocation request
For full information on the application process, see the ALCC program website.
Source: U.S. Department of Energy