May 4 — Some scientists dream about big data. The dream bridges two divided realms. One realm holds lofty peaks of number-crunching scientific computation. Endless waves of big data analysis line the other realm. A deep chasm separates the two. Discoveries await those who cross these estranged lands.
Unfortunately, data cannot move seamlessly between the Hadoop Distributed File System (HDFS) and the parallel file systems (PFS) used in high performance computing. Scientists who want to take advantage of Hadoop’s big data analytics must first copy their data from the parallel file system into HDFS. That copy can slow workflows to a crawl, especially when terabytes of data are involved.
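To make the bottleneck concrete, the sketch below shows the kind of staging copy a conventional workflow needs before any MapReduce job can run. The file paths are hypothetical, and the standard hdfs dfs -put command stands in for whatever transfer tool a site actually uses.

```python
# A minimal sketch of the conventional staging step that PortHadoop removes.
# The paths below are hypothetical; "hdfs dfs -put" is the standard Hadoop
# shell command for copying a file into HDFS.
import subprocess

PFS_FILE = "/lustre/project/climate/run042/output.nc"  # hypothetical PFS path
HDFS_DIR = "/user/scientist/staging/"                  # hypothetical HDFS target

# Disk-to-disk copy: the data is read off the parallel file system, written
# into HDFS, and then replicated block by block, all before analysis starts.
subprocess.run(["hdfs", "dfs", "-put", PFS_FILE, HDFS_DIR], check=True)
```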
Computer scientists working in Xian-He Sun’s group are bridging the file system gap with a cross-platform Hadoop reader called PortHadoop, short for portable Hadoop. “PortHadoop, the system we developed, moves the data directly from the parallel file system to Hadoop’s memory instead of copying from disk to disk,” said Xian-He Sun, Distinguished Professor of Computer Science at the Illinois Institute of Technology. Sun’s PortHadoop research was funded by the National Science Foundation and the NASA Advanced Information Systems Technology (AIST) program.
The concept of ‘virtual blocks’ bridges the two systems by mapping data from the parallel file system directly into Hadoop memory, creating a virtual HDFS environment. These ‘virtual blocks’ reside in the centralized namespace of the HDFS NameNode. The MapReduce application cannot tell them apart from ordinary HDFS blocks; when a map task reads one, it triggers an MPI file read that fetches the data from the remote PFS before the Mapper function processes it. In other words, a dexterous sleight of hand from PortHadoop tricks HDFS into skipping the costly I/O operations and data replication it usually performs.
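The mechanism can be pictured with a short, purely illustrative Python sketch (using mpi4py rather than PortHadoop’s own code): a ‘virtual block’ carries only a PFS path and a byte range, and reading it on the map side triggers an MPI-IO fetch from the remote parallel file system straight into memory. The class and function names here are hypothetical, not PortHadoop’s actual API.

```python
# Conceptual sketch of the 'virtual block' idea; names are illustrative.
from dataclasses import dataclass
from mpi4py import MPI


@dataclass
class VirtualBlock:
    """Namespace metadata: points at a byte range of a PFS file rather than
    at physical HDFS block replicas."""
    pfs_path: str
    offset: int
    length: int


def read_virtual_block(block: VirtualBlock) -> bytes:
    """Fetch the block's bytes directly from the remote PFS via MPI-IO."""
    buf = bytearray(block.length)
    fh = MPI.File.Open(MPI.COMM_SELF, block.pfs_path, MPI.MODE_RDONLY)
    try:
        fh.Read_at(block.offset, buf)  # PFS to memory, no HDFS disk I/O or replication
    finally:
        fh.Close()
    return bytes(buf)


def run_map_task(block: VirtualBlock, mapper) -> None:
    """To the MapReduce side this looks like an ordinary block read; the MPI
    fetch happens transparently before the Mapper function sees the records."""
    data = read_virtual_block(block)
    for record in data.splitlines():   # assumes newline-delimited records
        mapper(record)
```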
Sun said he sees PortHadoop as the outcome of scientists’ strong desire to merge high performance computing with cloud computing, which companies such as Facebook and Amazon use to ‘divide and conquer’ data-intensive MapReduce tasks across their seas of servers. “Traditional scientific computing is merging with big data analytics,” Sun said. “It creates a bigger class of scientific computing that is badly needed to solve today’s problems.”
PortHadoop was extended to PortHadoop-R to seamlessly link cross-platform data transfer with data analysis and visualization. Sun and colleagues developed PortHadoop-R specifically with the needs of NASA’s high-resolution cloud and regional-scale modeling applications in mind. High performance computing has served NASA well for its simulations, which crunch data through various climate models. But Sun said the data generated by the models, combined with observational data, are unmanageably huge and must be analyzed and visualized in a timely fashion to more fully understand chaotic phenomena such as hurricanes and hailstorms.
PortHadoop faced a major hurdle in preparing to work with NASA applications: NASA’s production environment doesn’t allow any testing and development on its live data.
PortHadoop’s developers overcame the problem with the Chameleon cloud testbed, funded by the National Science Foundation (NSF). Chameleon is a large-scale, reconfigurable environment for cloud computing research, co-located at the Texas Advanced Computing Center of The University of Texas at Austin and the Computation Institute of the University of Chicago. Chameleon gives researchers bare-metal access, allowing them to fully reconfigure the environment on its nodes, including customizing the operating system kernel and getting console access.
What’s more, the Chameleon system, with roughly 15,000 cores, an InfiniBand interconnect, and 5 petabytes of storage, blends in a variety of heterogeneous architectures, such as low-power processors, graphics processing units, and field-programmable gate arrays.