Sometimes lost in the discussion around big data is the fact that big science has long generated huge data sets. “In fact, large-scale simulations that run on leadership-class supercomputers work at such high speeds and resolution that they generate unprecedented amounts of data. The size of these datasets—ranging from a few gigabytes to hundreds of terabytes—makes managing and analyzing the resulting information a challenge in its own right,” notes a recent article posted on the Oak Ridge National Laboratory site.
Now, a group at ORNL’s Oak Ridge Leadership Computing Facility (OLCF) has developed an extension of R – Programming with Big Data in R (pbdR) – that is helping OLCF users cut large data sets down to size.
According to the article, OLCF’s Advanced Data and Workflow (ADW) Group and the Computer Science and Mathematics Division’s Scientific Data Group (SDG) have worked together “to scale R—the most commonly used data analytics software in academia and a rising programming language in high-performance computing (HPC)—to the OLCF’s Rhea, Eos, and Titan systems. Though R users have typically used the software to analyze smaller datasets on regular workstations, this development will allow them to deploy the tool for big data analysis that scales to thousands of processors and speeds up analysis by at least an order of magnitude.”
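pbdR keeps the familiar R syntax while running in the single-program, multiple-data (SPMD) style common in HPC: every MPI rank runs the same script on its own chunk of the data. The following is a minimal sketch of that pattern using the pbdMPI package; the data, sizes, and file name are illustrative, and it assumes pbdMPI and an MPI library are installed.

```r
# Minimal SPMD sketch with pbdMPI (illustrative only).
# Launch with, e.g.: mpirun -np 4 Rscript sums.R
library(pbdMPI)

init()                                 # start the MPI communicator

# Each rank computes a sum over its own locally generated chunk...
local.sum <- sum(rnorm(1e6))

# ...and allreduce() combines the local sums across all ranks.
global.sum <- allreduce(local.sum, op = "sum")

comm.print(global.sum)                 # print once, from rank 0
finalize()                             # shut down MPI cleanly
```

Because every rank executes the same code, an R user's serial script often needs only modest changes to scale out across nodes.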
Drew Schmidt, a graduate research assistant at the University of Tennessee, Knoxville, and codeveloper of the pbdR package, said, “Simulation researchers are generating all of this data, but they’re not doing much analysis with it. With our infrastructure, there’s no reason not to.”
The open source pbdR packages emerged from two data analysis projects led by George Ostrouchov, a senior scientist in SDG, both of which used R: a DOE-funded project for the analysis of extreme-scale climate data at ORNL, where Wei-Chen Chen was a postdoc, and a National Science Foundation (NSF)–funded project for a Remote Data Analysis and Visualization Center at the National Institute for Computational Sciences, where Pragneshkumar Patel and Schmidt were computational scientists. The current pbdR project is funded by the NSF Division of Mathematical Sciences. The original pbdR team included Chen, Ostrouchov, Patel, and Schmidt.
At the 2016 OLCF User Meeting (June 23-25), Ostrouchov and Matheson presented a tutorial on using pbdR on OLCF systems, where R is now available as a module. Sreenivas Rangan Sukumar, the ADW group leader, also presented a use case at the meeting that exemplified the pbdR project’s significance to the emerging area of high-performance data analytics: a principal component analysis of a huge matrix that takes several hours on popular cloud computing frameworks, such as Apache Spark, was completed in less than a minute using R on OLCF high-performance hardware.
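A distributed PCA of this kind can be expressed in a few lines with the pbdDMAT package, which stores a matrix as a block-cyclically distributed ddmatrix and dispatches standard R functions such as prcomp() to ScaLAPACK-backed routines. The sketch below is illustrative, not the exact benchmark code: the matrix is randomly generated, and the sizes and process count are assumptions.

```r
# Hypothetical distributed PCA sketch with pbdDMAT (illustrative sizes).
# Launch with, e.g.: mpirun -np 64 Rscript pca.R
library(pbdDMAT)

init.grid()                            # set up the 2d processor grid

# Generate a random distributed matrix; a real analysis would instead
# read simulation output into a ddmatrix.
x <- ddmatrix("rnorm", nrow = 100000, ncol = 1000)

# prcomp() dispatches to the distributed method for ddmatrix objects.
pca <- prcomp(x)

comm.print(pca$sdev[1:5])              # leading standard deviations, once
finalize()
```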
“Our tests are revealing the trade-offs and benefits between performance and productivity for data analytics,” Sukumar said. “If a user wants to explore 10 different analysis methods but has limited access to HPC resources, Apache Spark–like frameworks are great. However, for situations where one needs interactive near-real-time analysis, the pbdR approach is much better.”
Source: Oak Ridge National Laboratory