Cluster Resources All About Empowerment

By Derrick Harris, Editor

March 6, 2006

GRIDtoday spoke with Cluster Resources CTO David Jackson about the unique capabilities of the company's Moab family of solutions, which includes cluster, Grid and utility computing suites. Said Jackson: “We do what we do well, which is empower [companies] to deliver their skills seamlessly, efficiently and reliably.”



GRIDtoday:
First, I'd like to ask how things are going at Cluster Resources. Is everything running smoothly and going according to plan?

DAVID JACKSON: Thank you for this opportunity; it is an honor to be here. Cluster Resources continues to experience rapid growth, and we look forward to the opportunities that continue to come our way. Over the years, we've enjoyed working with industry visionaries and many of the world's largest HPC organizations, helping them realize their objectives. In the process, we gained a lot of expertise that has helped propel us into a leadership position in this rapidly evolving industry. Now, many of the technologies we pioneered years ago are moving into the mainstream, and from a business perspective this transition has been excellent for us.

Gt: Can you tell me about the Moab Grid Suite? What unique benefits does it offer over other grid management products?

JACKSON: Moab Grid Suite is designed to bring together resources from diverse HPC cluster environments. It is currently used across grids that span single machine rooms and others that span nations. Moab's approach to managing these resources helps overcome some long-standing hurdles to Grid adoption by providing simplicity, sovereignty, efficiency and flexibility.

Moab provides an integrated cluster and grid management solution within a single tool, eliminating an entire layer of the standard grid software stack. With Moab, if you know how to manage a cluster, then you are ready to manage a grid. In fact, with some customers it has taken less than a minute to expand a working cluster into a full-featured grid. Moab's resource transparency allows users to take advantage of the new Grid resources with next to no change in the end-user experience. For them, the grid is seamlessly connected to the local cluster: they submit the same jobs and run the same commands, and under the covers Moab manages, translates and migrates workload and data as needed to utilize both local and remote resources.
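To make the idea concrete, here is a minimal, hypothetical Python sketch of that resource transparency: the user-facing submit call never changes, and the decision to run locally or migrate to a remote cluster happens under the covers. The names and interfaces are illustrative assumptions, not Moab's actual API.

    # Hypothetical sketch: transparent local/remote placement.
    # Names (Cluster, submit_job) are illustrative, not Moab's API.
    from dataclasses import dataclass, field

    @dataclass
    class Cluster:
        name: str
        free_nodes: int
        jobs: list = field(default_factory=list)

        def run(self, job):
            self.free_nodes -= job["nodes"]
            self.jobs.append(job)
            return f"{job['id']} running on {self.name}"

    def submit_job(job, local, remotes):
        """Same submit command the user always ran; placement is hidden."""
        if local.free_nodes >= job["nodes"]:
            return local.run(job)              # stay on the local cluster
        for remote in remotes:                 # otherwise migrate to a peer
            if remote.free_nodes >= job["nodes"]:
                return remote.run(job)         # workload/data staged under the covers
        return f"{job['id']} queued until resources free up"

    local = Cluster("local", free_nodes=4)
    remotes = [Cluster("partner-site", free_nodes=64)]
    print(submit_job({"id": "job.17", "nodes": 16}, local, remotes))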

Another hurdle to Grid adoption has always been the protection of cluster-level sovereignty. People are hesitant to lose control over their resources in spite of the benefits grids offer. With Moab, each participant is able to fully control its involvement in the grid, managing both job and information flow. Participants can specify ownership and QoS policies and control exactly when, where and how resources will be made available to external requestors.
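A minimal sketch of what such sovereignty controls might look like in practice, assuming a simple per-site policy of sharing windows, partner lists and node caps (the names below are illustrative assumptions, not Moab's configuration syntax):

    # Hypothetical sketch of per-site sovereignty policies (illustrative only).
    from datetime import datetime

    SITE_POLICY = {
        "share_hours": range(18, 24),      # external jobs only in the evening
        "max_external_nodes": 32,          # cap on resources exposed to the grid
        "allowed_partners": {"site-a", "site-b"},
    }

    def admit_external(job, now=None):
        """Decide whether a remote request may use local resources."""
        now = now or datetime.now()
        if job["origin"] not in SITE_POLICY["allowed_partners"]:
            return False
        if now.hour not in SITE_POLICY["share_hours"]:
            return False
        return job["nodes"] <= SITE_POLICY["max_external_nodes"]

    print(admit_external({"origin": "site-a", "nodes": 16},
                         now=datetime(2006, 3, 6, 20, 0)))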

As you probably already know, Moab is widely recognized for its industry-leading levels of optimization resulting in outstanding cluster performance in terms of both utilization and targeted response time. We have extended these same technologies to Grid, allowing very effective Grid solutions, even in environments with complicated political constraints, heterogeneous resources and legacy infrastructure.

A further major hurdle to Grid adoption is managing widely diverse resources. Moab is unique in that it is already running on virtually every OS and architecture and on most major professional and open batch systems, including TORQUE, LSF, PBSPro, LoadLeveler, SLURM and others. It can operate with or without Globus, and it supports multiple security paradigms as well as multiple job and data migration protocols. When a customer approaches us, we do not mandate a replacement of their existing infrastructure, but rather help them use Moab's flexibility to orchestrate their existing environment.
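The usual way to orchestrate such heterogeneous batch systems is a thin adapter layer that presents each resource manager through a common interface. The sketch below illustrates that pattern; the class names and stubbed commands are assumptions for illustration, not Moab's internal interface.

    # Hypothetical adapter pattern for heterogeneous batch systems
    # (illustrative only, not Moab's internal interface).
    from abc import ABC, abstractmethod

    class ResourceManager(ABC):
        @abstractmethod
        def query_nodes(self): ...
        @abstractmethod
        def start_job(self, job): ...

    class TorqueAdapter(ResourceManager):
        def query_nodes(self):
            return ["node01", "node02"]       # a real adapter would query TORQUE
        def start_job(self, job):
            return f"started {job} via TORQUE"

    class SlurmAdapter(ResourceManager):
        def query_nodes(self):
            return ["c1", "c2", "c3"]         # a real adapter would query SLURM
        def start_job(self, job):
            return f"started {job} via SLURM"

    def schedule(job, managers):
        """One scheduler, many underlying batch systems."""
        for rm in managers:
            if rm.query_nodes():
                return rm.start_job(job)

    print(schedule("job.42", [TorqueAdapter(), SlurmAdapter()]))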

These concepts brought together offer a flexible solution that requires surprisingly little training, is very intuitive for the end user, and can effectively deliver on each of the major benefits of Grid computing.

Gt: How many customers do you have for the Grid suite? In what industries are most of the customers involved?

JACKSON: Use of Moab Grid technology is widespread and continues to grow rapidly, but giving exact values is difficult because our products intentionally blur the line between clusters and grids. Moab offers a full spectrum of Grid technologies, providing multi-cluster scheduling, enterprise-level monitoring and management, information services, Grid portals, job translation, centralized identity and allocation management, job staging, data staging, credential mapping, etc. Consequently, many sites are using Moab's Grid tools and technologies as a natural extension of their clusters and, without even knowing it, have enabled a grid across their systems. I think this is the way it should be. In the early days of Grid, many sites were afraid to take the “big leap” into Grid because they feared breaking what they had; they feared the unknown. With Moab, there really isn't a leap. You flip a bit and you are sharing jobs; flip a bit and you are coordinating Grid accounting. It's just a natural extension of the familiar cluster.

In fact, as part of this blurring of lines, our Cluster Suite includes the ability to connect up a local-area grid. Only when you begin to need more complex data staging and credential mapping is the Moab Grid Suite even required.

Regarding industries, I recently looked at a report showing our customer breakdown and it was all over the place. We are in financial, oil and gas, research, manufacturing, academic and everything in between. Because of our roots in interoperating with all major batch systems, we've had to develop a superset capability. We have found that this has opened many doors for us, and our customers are drawn by cost-effectiveness, simplicity, scalability and flexibility, not by industry.

Gt: Cluster Resources' Moab Utility/Hosting Suite offers an interesting approach to utility computing by letting users host their own resources, much like several of the large IT vendors (e.g., Sun Grid, IBM Deep Computing On Demand, etc.). Has there been a lot of interest in this service thus far?

JACKSON: We are very excited about utility computing, as we see this being the next natural step in the evolution of grids. The technology adoption time frame is long, but interest continues to grow and the benefits we've provided to clients have been both significant and pervasive. For example, one Fortune 500 customer increased the amount of services they were able to provide by 300 percent in the first year, and a different Fortune 500 customer was able to increase their customer base by over 50 times with Moab effectively exposing their services to customers via utility computing.

In a nutshell, what we offer with the Moab Utility/Hosting Suite is the ability to intelligently provision, customize, allocate and tightly integrate remote resources. This technology applies to both batch and non-batch environments, and to many, many usage scenarios. Imagine a cluster where a user submits jobs and eventually the cluster fills up and responsiveness slows. Suddenly, the cluster gets bigger, all the jobs run to completion, and then the cluster shrinks back down again. Imagine a cluster where you submit a job requesting a compute architecture that does not exist. Moments later, that resource exists and your job runs. Imagine losing 16 nodes due to a hard drive failure and, by the time you get back from lunch, Moab has notified you of the failure, created a reservation over the failed nodes, sent a replacement request off to your hardware provider, and replaced every failed node with an equivalent hosted computing node. Your boss says, “Nice job, perfect uptime again this month!”
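The scenarios above boil down to a reconciliation loop: grow when queued demand exceeds capacity, heal when nodes fail, shrink when the queue drains. Here is a minimal, hypothetical sketch of such a loop; it is illustrative only, not Moab's actual logic or interfaces.

    # Hypothetical control loop for elastic growth and failure replacement
    # (illustrative only; not Moab's actual logic or API).

    def reconcile(cluster, queue, hosting_center):
        actions = []
        # Grow: queued demand exceeds what the cluster can absorb.
        demand = sum(j["nodes"] for j in queue)
        if demand > cluster["free_nodes"]:
            extra = demand - cluster["free_nodes"]
            actions.append(f"request {extra} hosted nodes from {hosting_center}")
        # Heal: reserve failed nodes and ask for equivalent replacements.
        for node in cluster["failed_nodes"]:
            actions.append(f"reserve {node}; notify admin; request replacement for {node}")
        # Shrink: release hosted nodes once the queue drains.
        if not queue and cluster["hosted_nodes"]:
            actions.append(f"release {cluster['hosted_nodes']} hosted nodes")
        return actions

    state = {"free_nodes": 8, "hosted_nodes": 0, "failed_nodes": ["node13"]}
    print(reconcile(state, [{"id": "j1", "nodes": 32}], "hosting-center"))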

Imagine setting up a business relationship with a utility computing hosting center that absolutely guarantees resource availability on fixed days and times, or guarantees a fixed number of cycles per week, or guarantees a one-hour response time for unplanned resource consumption. Imagine being able to host not just compute resources, but a fully customized service on demand. Offer data mining of a massive data set, offer regression testing services across a wide array of architectures and environments, offer not just software, but the full environment required to use that software.

Moab can provide this right now and, when you think about it, it seems quite natural that this is the way things should have been done all along. How do you say no to this type of solution? Organizations can use Moab to tap into IT vendor resources or can set up their own hosting solution for internal and external customers.

It is important to understand that utility computing is not just about making raw compute resources available on demand. It is about making them custom, secure, guaranteed, tightly integrated and seamless. Our patented software allows IT vendors to ship a product that enables fully automated or “touch of a button” connectivity to the hosting center or service, with dynamic security, service level guarantees and automated billing, all with one button.

Gt: What led to this approach versus the company trying to sell its resources to users?

JACKSON: We are an enablement company. We create technology and software that allows other organizations to really capitalize on their offerings. Google made a smart move when it chose not to make content. Google is exceptional at what it does, but it does not compete with “subject experts.” Remember that utility computing is more than delivering raw cycles; it is about delivering a full compute environment ready to accomplish a specific task. An organization that works with oil and gas companies will already have relationships with them; it will know what network, storage and compute solutions work best, and it will know what security constraints must be satisfied. Our software allows such an organization to automatically customize and deliver this environment in minutes, on demand and on-the-fly. This company probably knows more about its customers than we will ever know, and it makes sense that it offers this service.

We worked with Amazon to enable their recently announced online Internet data mining service. We didn't know much about mining the entire Internet, and we did not need to. Amazon knew their data, their services and their customers. We helped them set up a system where a user presses a button and, on-the-fly, a new cluster is built from scratch with secure network, compute and storage facilities. The source data is automatically pre-processed, the compute nodes are customized, the needed applications are automatically started, and an entire data mining environment is created in minutes. With our system, Amazon was able to take their expertise and scale it, allowing them to focus on what they do best while delivering the benefits to a far larger customer base.
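Conceptually, that is an ordered provisioning pipeline: allocate secure resources, stage and pre-process data, customize nodes, then start the applications. Here is a minimal sketch of such a pipeline; the step names are assumed for illustration and are not taken from the actual Amazon deployment.

    # Hypothetical sketch of an on-demand environment build pipeline,
    # mirroring the sequence described above (illustrative step names).

    PIPELINE = [
        "allocate secure network, compute and storage",
        "pre-process source data",
        "customize compute nodes (OS image, libraries)",
        "start data mining applications",
        "hand environment to the user",
    ]

    def provision(step, request):
        print(f"[{request['user']}] {step}")
        return True   # a real system would drive provisioning tools here

    def build_environment(request):
        """Run each provisioning step in order; stop if any step fails."""
        for step in PIPELINE:
            if not provision(step, request):
                raise RuntimeError(f"environment build failed at: {step}")
        return "environment ready"

    print(build_environment({"user": "analyst-1"}))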

Another space we are currently working in is providing large, security-focused government organizations with instant access to vast quantities of additional HPC resources in the event of a national disaster. Moab Utility/Hosting Suite is being used by these government organizations to instantly overflow national emergency workload onto participating government, academic and corporate sites. At first this sounds like a grid but, in reality, each of these sites is a separate environment prepared only for its local workload; when needed, Moab adapts these environments, making the many changes required to create a cohesive environment able to respond to the national disaster.

Tier 1 and tier 2 hardware vendors already have a relationship with their customers. It would make sense for them to provide cluster overflow and emergency failover-based utility computing services. They know the customers and the technology being shipped. Our job is to empower these vendors to provide this service more effectively and efficiently than they could ever do on their own.

“Boutique” utility computing allows any software or IT service company to deliver complete custom “solutions” to its customers using insight and relationships we can never hope to have. We do what we do well, which is empower them to deliver their skills seamlessly, efficiently and reliably.

I think this is a pure win-win situation. We win, the customers win and the vendors win.

Gt: Do you know whether more users of the utility/hosting suite are using the solution for internal or external purposes?

JACKSON: It's a mix. Right now, we see more “soft” utility computing for internal purposes and more “hard” utility computing for external purposes. Soft utility computing is being used to enable condominium clusters, dynamically reconfigurable grids, automated failure recovery and other services. Hard utility computing is driving the big allocations of raw resources with provisioning of fully customized service environments.

Gt: There is sometimes confusion about cluster computing vs. Grid computing vs. utility computing. Seeing as how your company sells solutions for all three, can you do your best to clarify these terms?

JACKSON: This is an industry with fluid terms, so any definition we give will be subject to debate. And, again, we are a workload and resource management company, so our focus is based on what tasks are required to fully optimize these systems. With these caveats, we see cluster computing as focusing on maximizing the delivered science of one or more clusters under a single administrative domain. Grid computing focuses on bringing together resources under administrative domains with diverse mission objectives but a common goal of extracting maximal performance across all systems. Proper Grid computing allows each organization complete independence and creates a consistent, easy-to-use global view for management and, optionally, end users. Grids and Grid relationships are generally worked out ahead of time and are generally static.

Utility computing is the next frontier. It takes everything that is good about clusters and grids and adds the ability to, first, dynamically establish relationships and, second, build complete compute environments. These relationships are completely flexible, but encompass new service guarantees, charging and workload management protocols. The compute environments can be built on-the-fly and are holistic, incorporating network, storage, compute and software resources together with supporting services. The key to utility computing is perfect transparency and tight integration. When the customer needs it, his cluster just gets bigger or changes to become what is needed for the workload. When a node or a network goes down, it gets replaced. Yes, there are a lot of things going on behind the scenes to create this magic but, to the end user, it's all magic.

Building on the shoulders of Grid, utility computing allows the next generation of high performance computing and data centers, the true on demand vision.

Gt: Can you tell me a little about your background in HPC? It looks like you've done quite a bit before coming to Cluster Resources.

JACKSON: I've had the good fortune of working with many leaders in the Grid effort, as my career has taken me to IBM, NCSA, SDSC, LLNL, MHPCC, PNNL and a few other locations before starting with Cluster Resources. Those early days also involved consulting and volunteer work directly helping over 1,000 sites manage their clusters. These experiences were invaluable and helped shape not only our cluster, Grid and utility computing products, but our whole approach to delivering them. I found that organizations that were highly competent and highly agile were also a joy to work with, and that we could jointly enable new technologies to overcome any obstacle in amazingly short amounts of time. Other organizations did not seem to get this paradigm and, though very big, were unable to detect the pulse of the industry.

We have tried very hard to keep that agility alive at Cluster Resources, with dozens of joint research projects throughout the world, solid relationships with many of the industry visionaries and a support team that generally resolves all issues in under two hours. Through this combination, we have found amazing customer loyalty. In fact, over the years, we have not lost a single customer!

Gt: I see you're a founding member of the GGF scheduling working group. How active are you in the GGF right now?

JACKSON: I was fortunate to be involved with the GGF and its precursor organizations way back in the very early days. In fact, so early that we could fit all sites around one small table! It was definitely enjoyable talking about those grand ideas and world-changing technologies. I have a lot of good memories from those days. Over the years, we've continued to be involved with GGF in many different ways, working on protocols, directions and standards, though we are less involved in the formal meetings.

Gt: Finally, I'm wondering if you could give our readers a little insight into your life outside of the office. What are your personal hobbies and interests? What are your plans for when your working days are done?

JACKSON: In terms of hobbies, I am an avid hiker with a particular love of high mountains and narrow slot canyons.

There is no question what I'm doing when my working days are done — I'm farming! I spent most of my growing up years on a farm in Idaho and absolutely loved it. There's just something about fresh air and hard work that makes the soul feel good. We learned to work very hard and do it right and that experience is something I very much want to share with my kids.
