In a recent blog entry, Mike Boros, Hadoop Product Marketing Manager at Cray, Inc., writes about the company’s positioning of Hadoop for scientific big data. Invoking the old adage that “when the only tool you have is a hammer, every problem begins to resemble a nail,” Boros suggests that the Law of the Instrument may hold true for technical computing professionals assessing the best-known of the big data tools: Hadoop.
Boros writes: “When used inappropriately, and incorporating technologies not suited for scientific Big Data, using Hadoop may indeed feel like wielding a cumbersome hammer. But when used appropriately, and with a technology stack that’s specifically suited to the realities of scientific Big Data, Hadoop can feel like a Swiss Army knife — a multipurpose tool capable of doing a wide range of things.”
“Of course, whether Hadoop feels like a Swiss Army knife or not depends not only on the experience level of the user, but also on whether it’s designed and implemented for scientific Big Data. And scientific Big Data is different from the Big Data much of the rest of the world is dealing with,” he adds.
It all boils down to suitability for the job at hand. Hadoop was developed to handle bite-sized pieces of data that are aggregated into larger files and then analyzed in their entirety. It’s an ideal approach for assessing user sentiment in social media feeds, and it can also be applied to big data science applications that incorporate large volumes of sensor data. But when it comes to analyzing a seismic or weather model file as part of a big data application, Hadoop’s usefulness starts to break down.
“Hadoop can indeed feel like a blunt and inappropriate hammer when this inefficient process of analyzing unnecessary blocks of data is repeated dozens or hundreds of times a day,” writes Boros. “This is a case where random access to files is necessitated, and frankly, that’s not in HDFS’s (the Hadoop Distributed File System) wheelhouse.”
But because of its Swiss Army knife-like design, Hadoop lends itself to some interesting workarounds and modifications. By leveraging MapReduce and wrapping HDFS with a POSIX-compliant file system such as Lustre, users can simply skip the extraneous or uninteresting data blocks in order to devote more resources toward analyzing large hierarchical files. To those who would point out that this isn’t really Hadoop but MapReduce on a POSIX-compliant file system, Boros explains that the approach is done in a way that doesn’t affect Hadoop’s other operations, meaning the MapReduce ecosystem is still intact. “In other words,” writes Boros, “the storage has to be presented to MapReduce and its constituents as if it was HDFS, even though another file system lies underneath it. And yes, that’s one of the ways we are looking at for designing our Hadoop solutions here at Cray.”
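The advantage of a POSIX-compliant file system in this scenario is random access: a job can seek directly to a region of interest inside a large model file instead of streaming every block the way an HDFS scan would. The sketch below illustrates that idea in minimal form; it is not Cray’s implementation, and the file name, offsets, and region size are hypothetical stand-ins for a real seismic or weather model file.

```python
import os

def read_region(path, offset, length):
    """Read only the bytes of interest from a large file.

    POSIX semantics (seek + read) let an analysis job jump straight
    to the region it needs, leaving the rest of the file untouched.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Build a throwaway "model file" so the sketch is self-contained:
# byte at offset k is simply k % 256, about 1 MB in total.
path = "model.bin"
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 4096)

# Jump directly to a 64-byte region of interest at the 512 KB mark.
region = read_region(path, offset=512 * 1024, length=64)
print(len(region))  # 64 bytes read; no full-file scan required
os.remove(path)
```

Repeated dozens or hundreds of times a day across multi-gigabyte files, the difference between this targeted read and a whole-file pass is exactly the inefficiency Boros describes.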
Boros understands that standard Hadoop implementations aren’t the best fit for traditional HPC applications, but he believes that the tool has the inherent flexibility to bridge the big data / big compute divide. While scientific environments work well with standard file systems built around POSIX file access, Boros suggests that mounting the POSIX-compliant volume with MapReduce would be an ideal situation. There are still some kinks to iron out, but Cray is working on these, and according to Boros, “you won’t be obligated to take on a 200 percent availability overhead tax as the file system you’re using will likely require 15-20 percent RAID parity which most organizations find appropriate for even their most mission-critical data.”
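The arithmetic behind that “200 percent availability overhead tax” is straightforward: HDFS defaults to three-way replication, so every block carries two extra copies, while RAID-style parity adds only a fraction of the raw capacity. A quick worked comparison follows; the 10+2 parity layout is an assumption chosen for illustration, not a figure from the article.

```python
# HDFS default: every block is stored 3 times -> 2 extra copies.
replication_factor = 3
replication_overhead = (replication_factor - 1) * 100  # percent
print(f"Triple-replication overhead: {replication_overhead}%")

# RAID-style parity, e.g. a hypothetical 10+2 layout (an assumption
# for illustration): 2 parity drives per 10 data drives.
data_drives, parity_drives = 10, 2
parity_overhead = parity_drives / data_drives * 100  # percent
print(f"Parity overhead: {parity_overhead:.0f}%")
```

Run as written, this shows 200 percent overhead for triple replication against 20 percent for the assumed parity layout, which lands within the 15-20 percent range Boros cites.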
The message here is that one environment can excel at both analytical and compute-intensive workloads. For Cray, the flexibility of the Hadoop stack and supporting infrastructure plays a major role in this vision. The company is taking steps to modify Hadoop to be more efficient and perhaps even more cost-effective than using an ad-hoc distributed infrastructure.
As for why organizations would want a system to do double-duty in this way, Boros emphasizes the benefits of a flexible infrastructure and workflow. Being able to use the same infrastructure for multiple job types will allow users to focus on different parts of a project at different points in time. They also have the option to work on grand challenge problems that don’t easily fall into standard application buckets. And depending on how the approach is implemented, they will be able to manage disparate workloads and workflows in parallel by employing resource management and job scheduling techniques. Such a machine will lend itself to sharing across departments, helping to achieve an equitable division of budgetary and staffing resources.
Boros expects to get some pushback for the Swiss Army knife analogy, as HPC has historically opted for the best possible tool for a given job even if that meant developing it in-house. But that paradigm is changing. Commoditization is entrenched in HPC – and the maker of the iconic Cray-1 is leading the charge on this aspect of the convergence.
Here’s Boros, describing the evolution of Hadoop: “It was initially conceived in the service provider world where huge staffs maintain thousands upon thousands of cheap white boxes by spending their days applying the DevOps equivalent of duct tape and baling wire with Perl scripts and Ruby,” he writes. “It’s somewhat green, and still requires a great deal of fiddling to get right. And its out-of-the-box design seems to be a full 180 degrees off of anything HPC stands for. But I believe it’s going to have a prominent role in your datacenter in the not too distant future.”