NSF has mounted an outreach effort seeking input from the HPC community on a wide range of issues relating to exascale computing and the objectives of the National Strategic Computing Initiative (NSCI). This is a critical activity, particularly given that the NSCI Executive Council is mandated to develop a formal implementation plan within 90 days of the Executive Order issued on July 29 of this year.
Excerpted here are portions of the NSF request as well as a few details from the actual RFI.
“The NSF research community is encouraged to respond to the joint Request for Information (RFI) on Science Drivers Requiring Capable Exascale High Performance Computing. This RFI is from NSF, the Department of Energy (DOE), and the National Institutes of Health (NIH) as part of the National Strategic Computing Initiative (NSCI).
The RFI seeks community input on scientific needs for High Performance Computing (HPC) capabilities that extend 100 times beyond today’s performance on scientific applications, in areas such as large-scale numerically intensive analysis, and deriving fundamental understanding from large-scale data science via image analysis, data assimilation, visualization, and data analytics.
“Please follow the instructions in the RFI to submit a response. The agencies will use the information submitted in response to this RFI at their discretion and will not provide comments to any responder’s submission. The information provided will be analyzed, may appear in reports, and may be shared publicly on agency websites.”
Excerpted here is a portion of the instructions from the RFI:
“Currently, computational modeling, simulation, as well as data assimilation and data analytics are used by an increasing number of researchers to answer more complex multispatial, multiphysics scientific questions with more realism. As the scientific discovery horizon expands and as advances in high performance computing become central to scientific workflows, sustained petascale application performance will be insufficient to meet these needs. In addition, HPC is expanding from traditional numerically oriented computation to also include large-scale analytics (e.g., for Bayesian approaches in model refinement, large-scale image analysis, machine learning, decision support, and quantifying uncertainty in multimodal and multispatial analyses). Architectures and technologies used for modeling and simulation currently differ from those used for data integration and analytics, but are increasingly converging. The extreme computing ecosystem must therefore accommodate this broad spectrum of growing data science activities.
Information Requested
With respect to your field of expertise in traditional and non-traditional research areas in applications of HPC, the agencies request your input/feedback. Your comments can include but are not limited to the following areas of concern:
- The specific scientific and research challenges that would need the projected 100-fold increase in application performance over what is possible today.
- The potential impact of the research to the scientific community, national economy, and society.
- The specific limitations/barriers of existing HPC systems that must be overcome to perform studies in this area. Your comments can also include the level of performance on current architectures and the projected increase in performance that is needed from future architectures.
- Any related research areas you foresee that would benefit from this level of augmented computational capability. Identification of any barriers in addition to computational capability that impact the proposed research can also be considered.
- Important computational and technical parameters of the problem as you expect them to be in 10 years (2025), in addition to any specialized or unique computational capabilities that are required and/or need to be scaled up to address this scientific problem, e.g., in the areas of computing architectures, systems software and hardware, software applications, algorithm development, communications, and networking.
- Alternative models of deployment and resource accessibility arising out of exascale computing, including improvements in scientific workflows as well as particular requirements of specific domains.
- Capabilities needed by the end-to-end system, including data requirements such as data analytics and visualization tools, shared data capabilities, and data services such as databases, portals, and data transfer tools/nodes.
- Foundational issues that need to be addressed, such as training, workforce development, or collaborative environments.
- Other areas of relevance for the Agencies to consider.”