Cracking the Silos of Custom Workflows

By Nicole Hemsoth

February 27, 2014

In high performance computing, the time-honored concept of creating tailored workflows to address complex requirements is nothing new. However, with the advent of new tools to analyze and process data, not to mention store, sort and manage it, traditional ways of thinking about HPC workflows are falling by the wayside in favor of new approaches that promise to balance and stabilize operations while breaking down siloed environments.

Moving beyond HPC specifically, there are certainly plenty of options for managing large-scale, diverse workflows designed for cloud environments and, increasingly, for “big data” workflows that require orchestration between custom and commercial analytics stacks, involving hops from private or public clouds into Hadoop and over to other analytics engines. The issue is that while there are dedicated tools for addressing the workflow demands of HPC environments specifically (GridEngine, Platform, Adaptive, etc.) or cloud environments in particular (OpenStack, etc.), some, including Adaptive Computing, argue that there are no tools that tackle HPC, cloud and the new range of big data opportunities all together, in a way that’s primed for the custom workflow models so often found in some of the most complex enterprise and research datacenters.

In its experience with large organizations including NOAA, the Department of Defense and others, Adaptive Computing has had the opportunity to look under the hoods of some complicated engines for doing everything from oil exploration to addressing national security concerns. These users, along with around 60% of those the company recently surveyed across the public and private sectors (beyond HPC exclusively), tended to have custom, homegrown workflows, which often lead to a host of problems: a lack of flexibility to adopt new tools, time lost to manually handling the complexity, and of course, overall inefficiency across the datacenter.

That 60% is a striking figure when one considers that each new tool brought in to address the growing bevy of “big data” problems means more custom scripting and more management of an already top-heavy stack. According to Adaptive Computing’s Jill King, this means the addition of more silos, which is exactly the opposite of what’s needed for mission-critical environments. Adaptive’s answer to this complexity is called Big Workflow, which for now means addressing these homegrown environments with a different type of glue than the company has used over the last ten years to bind the many HPC centers it has worked with through Moab.

King says that for many datacenter environments across the HPC and big data spectrum, the logjam happens at the critical processing stage for complex data. This stage is currently very manual, time-consuming and laden with dependencies, and according to conversations the company has had across multiple organizations, they are finding a lot of “both over and under-utilized silos with long, complicated queues that simply aren’t efficient.” “There’s a great need to unify, optimize and guarantee these environments,” King said.

Adaptive Computing senior architect Daniel Hardman offered detail on Big Workflow, which is both an approach that requires custom tuning for homegrown environments via dedicated work on customer needs and a set of new hooks for big data analytics tooling.

As you can see below, there are several separate silos, all governed at the top by what is very often either a sophisticated homegrown or off-the-shelf system. Generally, says Hardman, there are no efficient ways of connecting that top level with the many silos below. Further, the top-level framework can span many parts of the datacenter and spectrum of needs; the same layer might be governing general business operations, a Hadoop cluster, an interface to a cloud pulling data off storage, and an HPC environment on top of all of that. It is quite possible to do all of this, but it is hardly efficient or manageable, and it leads to inflexibility given the processes that must be worked in manually for custom workflows whenever new hooks are needed or something changes.

[Figure: BigWorkflow1]

“There’s a big need for an engine that’s capable of implementing policy-based decisions across these silos,” said Hardman. “We already sell Moab Cloud Suite, and a comparable product in HPC, and offer an integration with the Intel Hadoop side, and although we’ve not done a lot on public cloud, it’s also possible. What’s needed then is something that makes it so a user’s custom glue can tap into some efficiency and automation, a new kind of coordination so that our Big Workflow coordinator can contact these silos and make things happen across those many silo boundaries.”
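To make the idea concrete, here is a minimal sketch of what a policy-based coordinator spanning several silos might look like. Every class, function and field name below is an illustrative assumption for the sake of the example, not Adaptive Computing’s actual Moab or Big Workflow API.

```python
# Hypothetical sketch of a policy-based coordinator spanning silos.
# Names and interfaces are illustrative, not Adaptive Computing's actual API.

class SiloAdapter:
    """Wraps one silo (HPC scheduler, Hadoop cluster, cloud) behind a common interface."""
    def __init__(self, name, submit_fn):
        self.name = name
        self.submit = submit_fn          # silo-specific submission call

class Coordinator:
    """Applies a single policy engine across all registered silos."""
    def __init__(self, policies):
        self.policies = policies         # list of callables: (step) -> bool
        self.silos = {}

    def register(self, adapter):
        self.silos[adapter.name] = adapter

    def run(self, workflow):
        for step in workflow:            # each step names its target silo
            if not all(policy(step) for policy in self.policies):
                raise RuntimeError(f"Policy rejected step {step['id']}")
            self.silos[step["silo"]].submit(step)

# Usage: one workflow hopping cloud -> Hadoop -> HPC without per-silo glue scripts.
coord = Coordinator(policies=[lambda s: s.get("owner") is not None])
coord.register(SiloAdapter("cloud",  lambda s: print("stage data:", s["id"])))
coord.register(SiloAdapter("hadoop", lambda s: print("map/reduce:", s["id"])))
coord.register(SiloAdapter("hpc",    lambda s: print("batch submit:", s["id"])))
coord.run([
    {"id": "ingest",   "silo": "cloud",  "owner": "analyst"},
    {"id": "massage",  "silo": "hadoop", "owner": "analyst"},
    {"id": "simulate", "silo": "hpc",    "owner": "analyst"},
])
```

The point of the sketch is simply that the per-silo glue lives behind a shared interface, so the policy logic is written once rather than duplicated in each silo’s scripts.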

The goal of Big Workflow (the coordinator is not a product so much as an approach, rooted in some new hooks the company has provided to big data sources via the Intel Hadoop distro and more) is to supply the logic that currently lives in hard-coded scripts, which Hardman says can eliminate a lot of the duct tape involved in managing across these silos. The silos are still there, but the distinction between them becomes a lot less painful.

[Figure: BigWorkflow2]

“Most people in IT think about equilibrium: keep things humming, and if things get broken, they get fixed. The problem is that big data is not friendly to that; it has an interesting relationship to storage in that it may not be convenient to think about those silo boundaries anymore. For example, I might have the same dataset, which begins its life in a public cloud, then I need to process and massage it in Hadoop, then perform some HPC computation on it after that, but that data may have all sorts of issues (privacy, regulatory, etc.). I can’t just move it around or pretend that it’s local when it’s not. It has to be managed with policies that understand data movement, staging, management and more. Big data makes this Big Workflow coordination mandatory.”
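As a rough illustration of the kind of policy-aware data movement Hardman describes, the sketch below gates a staging operation on privacy and size rules. The zone labels, dataset fields and functions are assumptions made for the example, not part of any real product.

```python
# Hypothetical illustration of policy-aware data movement between silos.
# Zone labels and dataset fields are assumptions for the sketch, not a real product API.

PRIVACY_SAFE_ZONES = {"on_prem_hpc", "private_cloud"}   # assumed zone labels

def can_move(dataset, destination):
    """Return True only if every policy attached to the dataset permits the move."""
    if dataset.get("regulated") and destination not in PRIVACY_SAFE_ZONES:
        return False                      # regulated data may not leave approved zones
    if dataset["size_tb"] > dataset.get("max_transfer_tb", float("inf")):
        return False                      # too large to restage casually
    return True

def stage(dataset, destination):
    if not can_move(dataset, destination):
        raise PermissionError(f"Policy blocks moving {dataset['name']} to {destination}")
    print(f"staging {dataset['name']} ({dataset['size_tb']} TB) -> {destination}")

survey = {"name": "survey-2014", "size_tb": 40, "regulated": True}
stage(survey, "on_prem_hpc")      # allowed: destination is an approved zone
# stage(survey, "public_cloud")   # would raise: regulated data cannot leave safe zones
```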

For those familiar with Adaptive or using it already, there is the addition of the “data expert” concept, the smart part of the engine that has to understand everything about the data: its lifecycle, the forms it takes through that lifecycle, its movement, its size, who owns it, and where it can or cannot be copied. This is coupled with some of the new automation for custom workflows found in the coordinator. As King explained, “we’re integrating with these custom workflows for now, then we’ll branch out and make these standards, but with so many people having custom workflows, we need to be able to provide something flexible.” In other words, Adaptive is using lessons learned with customers like Digital Globe (see a detailed writeup of how this works in action over at EnterpriseTech) to propel its work for custom environments and implement those lessons as standards that help broaden its APIs and reach into more areas, including tools like Tivoli, for instance.
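A hedged sketch of the sort of metadata such a “data expert” might track appears below; the record fields and method are illustrative assumptions rather than Adaptive’s actual schema.

```python
# Hypothetical sketch of the metadata a "data expert" component might track per dataset.
# Field names are illustrative assumptions, not Adaptive's actual schema.

from dataclasses import dataclass, field

@dataclass
class DataRecord:
    name: str
    owner: str
    size_tb: float
    lifecycle_stage: str                       # e.g. "raw", "processed", "archived"
    locations: list = field(default_factory=list)
    copy_allowed_to: set = field(default_factory=set)

    def may_copy_to(self, silo: str) -> bool:
        """Answer the scheduler's question: may this data land on that silo?"""
        return silo in self.copy_allowed_to

imagery = DataRecord(
    name="satellite-imagery-q1",
    owner="geo-team",
    size_tb=120.0,
    lifecycle_stage="raw",
    locations=["private_cloud"],
    copy_allowed_to={"on_prem_hpc", "hadoop_cluster"},
)
print(imagery.may_copy_to("hadoop_cluster"))   # True
print(imagery.may_copy_to("public_cloud"))     # False
```

The design intent is that the coordinator consults records like this before every cross-silo hop, rather than leaving ownership and copy rules buried in site-specific scripts.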

This approach should resonate with folks in HPC and enterprise circles, according to IDC. “Our 2013 study revealed that a surprising two thirds of HPC sites are now performing big data analysis as part of their HPC workloads, as well as an uptick in combined uses of cloud computing and supercomputing,” said Chirag Dekate, Ph.D., research manager, High-Performance Systems at IDC. “As there is no shortage of big data to analyze and no sign of it slowing down, combined uses of cloud and HPC will occur with greater frequency, creating market opportunities for solutions such as Adaptive’s Big Workflow.”
