Cray Launches Hadoop into HPC Airspace

By Nicole Hemsoth

October 15, 2014

There is little doubt that the convergence of traditional high performance computing with advanced analytics is steadily underway, fed in part by a rush of new tools, frameworks, and platforms targeting the big data deluge.

And when one thinks about the old guard of supercomputing, Cray is one of the first vendors to come to mind, although they have been steadily ramping up efforts to mesh with the broader world of enterprise systems through their own slant on the big data phenomenon. First came the company’s Urika graph analytics appliance just over two years ago, which has powered everything from large-scale life sciences applications to the big leagues. As of this morning, Cray is adding a second machine to its lineup of big data-geared systems with the Urika-XA platform, a Cloudera-based Hadoop appliance that happens to sport some of the best hardware the HPC world has to offer.

Hadoop appliances are nothing new, with a stream of vendors from well outside the entrenched HPC vendor list (and plenty within it) offering their own hardware wrapped around the framework. What Cray has done, however, is put the emphasis on actual performance, with additional benefits in integration and manageability. While a 1,500-plus-core Haswell variant will be made available in the first part of 2015, the 48-node Urika-XA appliance available now sports Ivy Bridge processors, 36 TB of memory, 38 TB of SSD (200 TB of total storage including HDD and Lustre via Cray's Sonexion 900), InfiniBand, and the Lustre file system, which complements access to the native HDFS that still underpins many MapReduce and Hadoop applications.

Before we touch on the specific hardware choices, one of the more fascinating features of the appliance is its reliance on the large-scale data processing engine Apache Spark, which, while still only a tentative topic in HPC, offers a great deal of promise for emerging Hadoop use cases. Early production workloads around Spark center on interactive querying, iterative data mining, and streaming data. Spark has attracted a great deal of attention, fed in part by a push from Hadoop distribution giant Cloudera, and it has gained significant momentum and development over the last 12 to 18 months, according to Cray's VP of Analytics Products, Ramesh Menon.
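To make that contrast concrete, here is a minimal PySpark sketch (our own illustration, not Cray's code) of the iterative, in-memory pattern Spark targets: the dataset is cached once and scanned repeatedly, rather than being re-read from disk on every pass as a chained MapReduce job would require. The HDFS path and the toy refinement loop are hypothetical.

```python
# Minimal sketch of Spark's iterative, in-memory style (illustrative only).
# The input path is hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="iterative-sketch")

# Parse one numeric column and pin it in memory for reuse across passes.
values = (sc.textFile("hdfs:///data/measurements.csv")
            .map(lambda line: float(line.split(",")[0]))
            .cache())

# A toy iterative refinement: each pass scans the cached RDD, not the disk.
estimate = 0.0
for _ in range(10):
    delta = values.map(lambda x: x - estimate).mean()
    estimate += 0.5 * delta

print("converged estimate:", estimate)
sc.stop()
```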

It was difficult not to open with the toughest question for Menon: why throw so much high performance hardware (and software) at Hadoop, which was never designed for anything but large-scale batch jobs? The fact is, he says, Hadoop and the use cases around it have evolved, especially as more enterprise and HPC shops move workloads from experimentation to production and look for more integration than most appliances allow and fewer failures than commodity approaches suffer. These shifts, along with Cray's own tailoring to ensure existing MapReduce jobs are supported and accelerated (the SSD layer and the Cray Adaptive Runtime software handle the intermediate steps of that code), let the appliance boost the old while making performance room for the new.

Aside from the performance angle, the real benefit for production Hadoop users is that the appliance replaces the long chain of systems in the data analysis pipeline. While this is the goal of any appliance, Menon says that when Cray talked to existing users of advanced analytics in enterprise HPC, those users were doing a great deal of work across multiple clusters, which carried its own costs; yet when they did take the appliance route, they found themselves locked in and unable to innovate with new ecosystem tools. To highlight this TCO chain, Menon provided the following:

[Slide: consolidation of the multi-cluster analytics pipeline onto the Urika-XA]

There has been some exciting research on InfiniBand for Hadoop, but just as Lustre tends to lag behind native HDFS in Hadoop deployments, Cray is betting that customers who are seeing real-world value from the ecosystem around Hadoop are also experiencing the performance lag that makes vanilla Hadoop unsuitable for anything beyond what it was designed for. We understand that Lustre is growing in prominence in the enterprise, but where does it fit when most jobs are built to run on HDFS or other ecosystem-spurred alternatives (via Cassandra/DataStax, Ceph, Tachyon, etc.)? According to Menon, HDFS and Lustre are both supported so that the old and the new can work together. For existing MapReduce code, HDFS is present and sped by the SSD layer, while Lustre serves as the file system for the new breed of streaming, in-memory applications. Both are backed by the Cray Adaptive Runtime for Hadoop, which integrates the two generations of applications.
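As a rough illustration of how the two file systems can coexist in one job, consider this hedged PySpark sketch. It is our own example rather than Cray's stack, and the paths are hypothetical: legacy data is read from HDFS, where existing MapReduce jobs put it, while results land on a POSIX-mounted Lustre path reachable by non-Hadoop tools through an ordinary file:// URI.

```python
# Sketch of HDFS and Lustre side by side in one Spark job (paths hypothetical).
from pyspark import SparkContext

sc = SparkContext(appName="hdfs-plus-lustre-sketch")

# Old world: click logs live in HDFS alongside the existing MapReduce jobs.
legacy = sc.textFile("hdfs:///user/etl/clicks/part-*")

# New world: aggregate in memory, then write to the shared Lustre mount,
# where any POSIX tool can pick the results up directly.
counts = (legacy.map(lambda line: (line.split("\t")[0], 1))
                .reduceByKey(lambda a, b: a + b))
counts.saveAsTextFile("file:///lustre/project/click_counts")

sc.stop()
```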

While some might have questions around the addition of Lustre and InfiniBand, there has already been a great deal of work around the use of SSDs for Hadoop and MapReduce workloads. For users of the appliance who wish to keep running existing MapReduce code, this is especially beneficial because it can significantly speed the intermediate shuffle stages, where much of the congestion happens.
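For a sense of what steering shuffle traffic onto flash looks like in practice, here is a small sketch using Spark's scratch-directory setting. The SSD mount points are hypothetical, and in classic MapReduce the analogous cluster-level knob is mapreduce.cluster.local.dir in mapred-site.xml.

```python
# Sketch: pointing Spark's shuffle/spill scratch space at SSD mounts.
# /mnt/ssd0 and /mnt/ssd1 are hypothetical mount points.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("shuffle-on-ssd-sketch")
        # Comma-separated list; Spark spreads spill files across the dirs.
        .set("spark.local.dir", "/mnt/ssd0,/mnt/ssd1"))
sc = SparkContext(conf=conf)

# groupByKey forces a full shuffle, so its intermediate files hit the SSDs.
pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)], numSlices=4)
print(pairs.groupByKey().mapValues(list).collect())

sc.stop()
```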

As Menon stressed, “InfiniBand is only in the platform because part of the challenge is to make sure we have all the benefits of an appliance, where we can optimize to a known stack. What we hear from current Hadoop appliance users is that they’re locked in on the software side; if there’s a new project in the Hadoop ecosystem, it’s hard if not impossible to run it on the appliance. With the appliance approach, we provide a pre-integrated package so they know the foundational elements of that high performance stack work with Lustre, the SSDs, and so forth, and we can let them scale all of that as they need.”

Cray will be releasing benchmarks that demonstrate the value of high performance technologies like InfiniBand and Lustre in Hadoop environments. Menon says the typical microbenchmarks that serve the Hadoop community well aren’t representative of what Cray wants to demonstrate, but we’ll certainly stay tuned. As with the first Urika machines, pricing details are shrouded in mystery, but with InfiniBand, SSDs, Haswell, Lustre, and Cray’s own software integrating the whole package, it’s safe to assume this is not for the Hadoop hobbyist.

“We see use cases in a lot of the traditional HPC areas, of course,” said Menon, “but enterprise adoption of Hadoop is creating a new set of requirements that this targets specifically and very well.”
