In the third installment of a four-part series, Jay Etchings, director of operations for research computing and senior HPC architect at Arizona State University, looks at why the traditional HPC community hasn’t moved more quickly to adopt big data alternatives to bare metal clusters + MPI + InfiniBand – and why it should.
Combining Open Big Data Architectures and HPC
Open Big Data Architectures (OBDA) can act as a foundational component of a university’s big data research platform. OBDA methodologies, which many research colleagues in both the private and public sectors are only tentatively adopting, offer equal measures of puzzlement and promise. Combined with a slow curricular response, the result is a public research space outpaced by the private sector. This is nothing new, and it is not the sounding of an alarm; rather, embracing ‘open big data’ gives the public research community the opportunity to bridge that gap.
Continued expansion of Internet2 connectivity addresses deficiencies in available bandwidth, supporting cooperation and collaboration in the research sciences both within and between universities. With sustained emphasis on personalized, precision medicine producing renewed funding streams, and with access to increasingly diverse data types and cohorts, opportunities in this new paradigm will continue to emerge.
Hadoop and the MapReduce programming paradigm have a substantial base in the bioinformatics community, and the field of next-generation sequencing analysis has added substantially to that adoption. The cost-effectiveness of Hadoop-based analysis on commodity Linux clusters (still x86), together with cloud integration points from public cloud vendors offering Hadoop and easy-to-consume MapReduce parallelization for many popular data analysis algorithms, has been a strong advocate. Distributed, server-based file systems rather than expensive, traditional parallel storage shift the economics toward something more palatable to research organizations bound by the budgetary whims of state and local government. It seems like a no-brainer for higher education.
Why then has the traditional HPC community not fully embraced OBDA? There are all sorts of nuanced answers to this question, but they all amount to the same thing. Conventional high performance computing has not been tasked with finding new ways to solve computational problems beyond the traditional linear scale model in which nodes + cores + I/O = performance has always delivered. With the breakdown of Moore’s law on the horizon and Dennard scaling having already delivered a decisive cut, an alternate interpretation of Amdahl’s law can be applied. In brief, Amdahl’s law gives the maximum expected improvement to an overall system when only part of the system is improved, and it is often cited in parallel computing discussions to predict the theoretical maximum speedup from using multiple processors. Applying the same formula to workloads within a cloud-oriented architecture extends the speedup variable by decoupling workloads from physical processors.
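For reference, a minimal statement of Amdahl’s law, where p is the fraction of a workload that can be parallelized and n is the number of processors (or decoupled workers, in the cloud-oriented reading):

\[
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
\]

As n grows, the achievable speedup is bounded by 1/(1 − p), which is why raising the parallel fraction by decoupling workloads from physical processors can matter more than simply adding raw nodes.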
Components native to the Hadoop ecosystem, and of course the base file system itself, already perform workload parallelization without the need for message passing via the Message Passing Interface (MPI). MPI has long been the standardized, portable message-passing system created by researchers from academia and integrated with a wide variety of parallel computers. At times, parallelizing a legacy application using MPI/Open MPI, C/C++, PVM, and/or Fortran can be daunting, to say the least. The performance gains of C++ over Java typically do not account for the wall-clock time absorbed in developing and debugging code. In one of our cases, for example, parallelizing the datasets cost a hidden 38 hours of wall-clock time just to chunk the source into jobs for traditional HPC to process. Hadoop offers this parallelization in both the workload and data storage layers.
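To make the contrast concrete, below is a minimal sketch of a Hadoop Streaming mapper and reducer in Python, using word counting as a stand-in workload (the task, script names, and paths are illustrative assumptions, not from the original article). The point is that the developer writes only the per-record map and per-key reduce logic; splitting the input, distributing it across nodes, and shuffling intermediate keys are handled by the framework rather than by hand-written MPI code.

```python
#!/usr/bin/env python
# mapper.py -- Hadoop Streaming hands each mapper a split of the input;
# this script only emits "word<TAB>1" for every word it sees on stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop sorts mapper output by key, so identical words
# arrive contiguously; this script just sums the counts per word.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

A job like this would typically be submitted with the Hadoop Streaming jar (the exact jar path and HDFS input/output paths vary by distribution), with the framework taking care of launching one mapper per input split.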
Opponents often find the batch nature of MapReduce suboptimal, as jobs can take as much as two hours to spool: Hadoop runs a number of operations related to node management and slot availability at job launch, and the delay is tied to the size of the cluster and the available slot count. The Apache Spark framework addresses this deficiency by handling targeted workloads with in-memory processing while still providing the distributed fault tolerance of the Hadoop Distributed File System (HDFS). Apache Spark is part of the wider big data ecosystem and enjoys a huge advantage in that traction among Internet-scale companies has drawn hundreds of developers and contributors to its community. The popular scalable machine learning library Mahout moved to Spark when it said “Goodbye to MapReduce” on April 25, 2014. With strong language bindings in Scala, Java, Python and R (SparkR), the barrier to adoption for research computing is increasingly low. Shortening the time to produce usable data and results, by minimizing the wall-clock time spent on boilerplate code, leaves more time for solving complex science problems.
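A minimal PySpark sketch of the in-memory pattern described above: a working dataset is cached in cluster memory so that repeated passes avoid re-reading from HDFS. The HDFS path, record format, and filtering logic are illustrative assumptions.

```python
from pyspark import SparkContext

sc = SparkContext(appName="obda-inmemory-sketch")

# Load a (hypothetical) dataset from HDFS; the path is illustrative.
reads = sc.textFile("hdfs:///research/sequencing/sample_reads.txt")

# Parse once and cache the RDD in memory so that subsequent actions
# reuse it instead of re-reading and re-parsing from disk.
parsed = reads.map(lambda line: line.strip().split("\t")).cache()

# Multiple passes over the same in-memory data -- the iterative style
# that launch-heavy batch MapReduce handles poorly.
total = parsed.count()
long_reads = parsed.filter(lambda fields: len(fields[-1]) > 100).count()

print("total records: %d, long reads: %d" % (total, long_reads))
sc.stop()
```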
Performance will only improve for Spark and for its integration with the surrounding ecosystem of tools, which together address the greatest challenge in research computing. For the remaining pillars we employ Mesos for flexible grid computing; Kafka for high-throughput, fault-tolerant, distributed message queuing; ZooKeeper for centralized services maintaining configuration information, naming, distributed synchronization, and group services; and of course HDFS as our distributed, fault-tolerant file system. It is understood that specialized use cases remain where traditional bare metal HPC, MPI and InfiniBand still apply, for example high performance computing problems like the US Department of Energy’s massive nuclear decay simulations that have been optimized to run on traditional supercomputers. These and other specialized HPC problems will persist into the future, but they will be complemented by OBDA.
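As a small sketch of the messaging pillar, the snippet below uses the kafka-python client to publish pipeline events to a topic that downstream Spark or analysis jobs can consume. The broker address, topic name, and payload are hypothetical assumptions for illustration.

```python
from kafka import KafkaProducer, KafkaConsumer
import json

# Producer side: a pipeline stage publishes a completion event.
# Broker address and topic name are illustrative assumptions.
producer = KafkaProducer(
    bootstrap_servers="broker.example.edu:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)
producer.send("sequencing-events", {"sample": "S001", "status": "aligned"})
producer.flush()

# Consumer side: a downstream analysis service reads the same topic.
# This loop runs until interrupted, as Kafka consumers normally do.
consumer = KafkaConsumer(
    "sequencing-events",
    bootstrap_servers="broker.example.edu:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print("received:", message.value)
```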
Parts one and two of this series are available here and here. Stay tuned for part four in the coming weeks.
Director of Operations, Research Computing, and Senior HPC Architect at Arizona State University, Jay Etchings is a well-known industry professional with 20 years of progressively versatile, cross-platform experience in the management of open systems architecture. With the bulk of a 10-year technical consulting career spent in gaming and connected lotteries, data relationship analysis has been a longtime passion for Etchings. He is well versed in all phases of cutting-edge analytics and research computing. His experience as a former recovery audit contractor for the Centers for Medicare & Medicaid Services (CMS-RAC) positions him in alignment with the new ‘precision medicine’ healthcare field that is currently emerging.
Additional contribution provided by…
Dr. Kenneth Buetow also contributed to this article series. Buetow serves as director of Computational Sciences and Informatics program for Complex Adaptive Systems at Arizona State University (CAS@ASU) and is a professor in the School of Life Sciences in ASU’s College of Liberal Arts and Sciences. CAS@ASU is creating a Next Generation Cyber Capability (NGCC) to address the challenges and opportunities afforded by “Big Data” and the emergence of 4th Paradigm Data Science. This capability brings state-of-the-art computational approaches to CAS@ASU’s trans-disciplinary, use-inspired research efforts.