Cluster Resources All About Empowerment

By Derrick Harris, Editor

March 6, 2006

GRIDtoday spoke with Cluster Resources CTO David Jackson about the unique capabilities of the company's Moab family of solutions, which includes cluster, Grid and utility computing suites. Said Jackson: “We do what we do well, which is empower [companies] to deliver their skills seamlessly, efficiently and reliably.”



GRIDtoday:
First, I'd like to ask how things are going at Cluster Resources. Is everything running smoothly and going according to plan?

DAVID JACKSON: Thank you for this opportunity; it is an honor to be here. Cluster Resources continues to experience rapid growth and we look forward to the opportunities that continue to come our way. Over the years, we've enjoyed working with industry visionaries and many of the world's largest HPC organizations, helping them realize their objectives. In the process, we gained a lot of expertise that has helped propel us into a leadership position in this rapidly evolving industry. Now, many of the technologies we pioneered years ago are moving into the mainstream and, from a business perspective, this transition has been excellent for us.

Gt: Can you tell me about the Moab Grid Suite? What unique benefits does it offer over other grid management products?

JACKSON: Moab Grid Suite is designed to bring together resources from diverse HPC cluster environments. It is currently used across grids that span single machine rooms and others that span nations. Moab's approach to managing these resources helps overcome some long-standing hurdles to Grid adoption by providing simplicity, sovereignty, efficiency and flexibility.

Moab provides an integrated cluster and grid management solution within a single tool, eliminating an entire layer of the standard grid software stack. With Moab, if you know how to manage a cluster, then you are ready to manage a grid. In fact, with some customers it has taken less than a minute to expand a working cluster into a full-featured grid. Moab's resource transparency allows users to take advantage of the new Grid resources with next to no change in the end-user experience. For them, the grid is seamlessly connected to the local cluster: they submit the same jobs and run the same commands, and under the covers Moab manages, translates and migrates workload and data as needed to utilize both local and remote resources.
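
To make that idea concrete, here is a rough Python sketch of the transparent-overflow concept: the user's submit call never changes, while placement on local or remote resources happens behind the scenes. It is illustrative only and does not reflect Moab's actual interfaces; all names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Cluster:
        # A toy cluster with a free-node count and a local queue.
        name: str
        free_nodes: int
        queue: list = field(default_factory=list)

        def can_run(self, job_nodes: int) -> bool:
            return self.free_nodes >= job_nodes

        def run(self, job_id: str, job_nodes: int) -> None:
            self.free_nodes -= job_nodes
            self.queue.append(job_id)

    @dataclass
    class GridScheduler:
        local: Cluster
        peers: list  # remote clusters reachable through the grid

        def submit(self, job_id: str, job_nodes: int) -> str:
            # Same submit call the user always made; placement is transparent.
            if self.local.can_run(job_nodes):
                self.local.run(job_id, job_nodes)
                return f"{job_id} -> {self.local.name} (local)"
            for peer in self.peers:  # overflow to the grid
                if peer.can_run(job_nodes):
                    peer.run(job_id, job_nodes)  # job/data migration would happen here
                    return f"{job_id} -> {peer.name} (migrated)"
            self.local.queue.append(job_id)  # otherwise wait locally
            return f"{job_id} queued on {self.local.name}"

    grid = GridScheduler(Cluster("local", 4), [Cluster("remote-a", 64)])
    print(grid.submit("job.1", 2))   # fits locally
    print(grid.submit("job.2", 16))  # transparently overflows to the peer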

Another hurdle to Grid adoption has always been the protection of cluster-level sovereignty. People are hesitant to lose control over their resources in spite of the benefits grids offer. With Moab, each participant is able to fully control its involvement in the grid, managing both job and information flow. Sites can specify ownership and QoS policies and control exactly when, where and how resources will be made available to external requestors.
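
As a rough illustration of that sovereignty model (hypothetical names, not Moab configuration syntax), a site-level sharing policy might look something like this:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SharingPolicy:
        # Each site keeps full control over when, to whom and how much it shares.
        trusted_peers: set           # remote sites allowed to send work at all
        share_hours: tuple           # (start_hour, end_hour) when sharing is allowed
        max_external_nodes: int      # cap on nodes lent to outside workload

        def admits(self, peer: str, nodes: int, now: datetime) -> bool:
            in_window = self.share_hours[0] <= now.hour < self.share_hours[1]
            return peer in self.trusted_peers and in_window and nodes <= self.max_external_nodes

    policy = SharingPolicy(trusted_peers={"site-b"}, share_hours=(18, 24), max_external_nodes=32)
    print(policy.admits("site-b", 16, datetime(2006, 3, 6, 20)))  # True: trusted peer, evening, under cap
    print(policy.admits("site-c", 16, datetime(2006, 3, 6, 20)))  # False: untrusted peer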

As you probably already know, Moab is widely recognized for its industry-leading levels of optimization, resulting in outstanding cluster performance in terms of both utilization and targeted response time. We have extended these same technologies to Grid, allowing very effective Grid solutions even in environments with complicated political constraints, heterogeneous resources and legacy infrastructure.

A further major hurdle to Grid adoption is managing widely diverse resources. Moab is unique in that it already runs on virtually every OS and architecture and with most major commercial and open source batch systems, including TORQUE, LSF, PBSPro, LoadLeveler, SLURM and others. It can operate with or without Globus, and supports multiple security paradigms as well as multiple job and data migration protocols. When a customer approaches us, we do not mandate a replacement of their existing infrastructure, but rather help them use Moab's flexibility to orchestrate the environment they already have.
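
The general pattern behind that kind of interoperability can be pictured as an adapter layer that hides batch-system differences behind one interface. The sketch below is illustrative only; qsub, sbatch and bsub are the real submit commands for TORQUE, SLURM and LSF, but everything else is simplified and hypothetical.

    from abc import ABC, abstractmethod

    class ResourceManager(ABC):
        @abstractmethod
        def submit_command(self, script: str) -> list:
            """Return the command line that submits `script` on this batch system."""

    class Torque(ResourceManager):
        def submit_command(self, script):
            return ["qsub", script]

    class Slurm(ResourceManager):
        def submit_command(self, script):
            return ["sbatch", script]

    class Lsf(ResourceManager):
        def submit_command(self, script):
            return ["bsub", "<", script]  # LSF reads the job script from stdin

    def show_submissions(script, managers):
        # A grid-level orchestrator only ever sees the common interface.
        for rm in managers:
            print(f"{type(rm).__name__}: {' '.join(rm.submit_command(script))}")

    show_submissions("job.sh", [Torque(), Slurm(), Lsf()])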

Brought together, these concepts offer a flexible solution that requires surprisingly little training, is very intuitive for the end user, and effectively delivers on each of the major benefits of Grid computing.

Gt: How many customers do you have for the Grid suite? In what industries are most of the customers involved?

JACKSON: Use of Moab Grid technology is widespread and continues to grow rapidly, but giving exact figures is difficult because our products intentionally blur the line between clusters and grids. Moab offers a full spectrum of Grid technologies providing multi-cluster scheduling, enterprise-level monitoring and management, information services, Grid portals, job translation, centralized identity and allocation management, job staging, data staging, credential mapping, etc. Consequently, many sites are using Moab's Grid tools and technologies as a natural extension of their clusters and, without even knowing it, have enabled a grid across their systems. I think this is the way it should be. In the beginning of Grid, there were many sites afraid to take the “big leap” into Grid because they feared breaking what they had; they feared the unknown. With Moab, there really isn't a leap. You flip a bit and you are sharing jobs; flip a bit and you are coordinating Grid accounting. It's just a natural extension of the familiar cluster.

In fact, as part of this blurring of lines, our Cluster Suite includes the ability to connect clusters into a local-area grid. Only when you begin to need more complex data staging and credential mapping is the Moab Grid Suite even required.

Regarding industries, I recently looked at a report showing our customer breakdown and it was all over the place. We are in financial services, oil and gas, research, manufacturing, academia and everything in between. Because of our roots in interoperating with all major batch systems, we've had to develop a superset of capabilities. We have found that this has opened many doors for us, and our customers are drawn by cost-effectiveness, simplicity, scalability and flexibility, not by industry.

Gt: Cluster Resources' Moab Utility/Hosting Suite offers an interesting approach to utility computing by letting users host their own resources, much like several of the large IT vendors (e.g., Sun Grid, IBM Deep Computing On Demand, etc.). Has there been a lot of interest in this service thus far?

JACKSON: We are very excited about utility computing, as we see this being the next natural step in the evolution of grids. The technology adoption time frame is long, but interest continues to grow and the benefits we've provided to clients have been both significant and pervasive. For example, one Fortune 500 customer increased the amount of services they were able to provide by 300 percent in the first year, and a different Fortune 500 customer was able to increase their customer base by over 50 times with Moab effectively exposing their services to customers via utility computing.

In a nutshell, what we offer with the Moab Utility/Hosting Suite is the ability to intelligently provision, customize, allocate and tightly integrate remote resources. This technology applies to both batch and non-batch environments, and many, many usage scenarios. Imagine a cluster where a user submits jobs and eventually the cluster fills up and responsiveness slows. Suddenly, the cluster gets bigger, all the jobs run to completion, and then the cluster shrinks back down again. Imagine a cluster where you submit a job requesting a compute architecture that does not exist. Moments later, that resource exists and your job runs. Imagine losing 16 nodes due to a hard drive failure and by the time you get back from lunch, Moab has notified you of the failure, created a reservation over the failed nodes, sent a replacement request off to your hardware provider, and replaced every failed node with an equivalent hosted computing node. Your boss says, “Nice job, perfect uptime again this month!”
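
A toy model of that grow-on-demand and self-healing behavior might look like the following Python sketch; the node counts, thresholds and events are hypothetical and only stand in for the real provisioning and notification steps.

    from dataclasses import dataclass, field

    @dataclass
    class ElasticCluster:
        nodes: int
        queued_jobs: int = 0
        events: list = field(default_factory=list)

        def check_backlog(self, burst: int = 16) -> None:
            # Grow when the queue backs up; shrink again when it drains.
            if self.queued_jobs > self.nodes:
                self.nodes += burst
                self.events.append(f"grew by {burst} hosted nodes")
            elif self.queued_jobs == 0 and self.nodes > burst:
                self.nodes -= burst
                self.events.append(f"released {burst} hosted nodes")

        def handle_failure(self, failed: int) -> None:
            # Fence the bad nodes, notify, and backfill with hosted replacements.
            self.nodes -= failed
            self.events.append(f"reserved {failed} failed nodes; admin notified; replacement requested")
            self.nodes += failed  # hosted nodes stand in until the hardware is swapped
            self.events.append(f"added {failed} replacement hosted nodes")

    cluster = ElasticCluster(nodes=32, queued_jobs=48)
    cluster.check_backlog()
    cluster.handle_failure(16)
    print("\n".join(cluster.events))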

Imagine setting up a business relationship with a utility computing hosting center that absolutely guarantees resource availability on fixed days and times, guarantees a fixed number of cycles per week, or guarantees a one-hour response time for unplanned resource consumption. Imagine being able to host not just compute resources, but a full customized service on demand. Offer data mining of a massive data set, offer regression testing services across a wide array of architectures and environments, offer not just software, but the full environment required to use that software.
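
One way to picture those kinds of agreements is as a small data structure capturing the guarantees; this is a hypothetical sketch, not anything tied to Moab's actual configuration.

    from dataclasses import dataclass

    @dataclass
    class HostingAgreement:
        fixed_windows: list              # e.g. [("Mon", 9, 17)] -- guaranteed day/hour blocks
        cycles_per_week: int             # guaranteed node-hours delivered each week
        unplanned_response_hours: float  # max wait for an unplanned, on-demand request

        def describe(self) -> str:
            windows = ", ".join(f"{day} {start}:00-{end}:00" for day, start, end in self.fixed_windows)
            return (f"Guaranteed windows: {windows}; "
                    f"{self.cycles_per_week} node-hours/week; "
                    f"unplanned requests served within {self.unplanned_response_hours}h")

    sla = HostingAgreement(fixed_windows=[("Mon", 9, 17), ("Thu", 0, 24)],
                           cycles_per_week=5000,
                           unplanned_response_hours=1.0)
    print(sla.describe())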

Moab can provide this right now and, when you think about it, it seems quite natural that this is the way things should have been done all along. How do you say no to this type of solution? Organizations can use Moab to tap into IT vendor resources or can set up their own hosting solution for internal and external customers.

It is important to understand that utility computing is not just about making raw compute resources available on demand. It is about making them custom, secure, guaranteed, tightly integrated and seamless. Our patented software allows IT vendors to ship a product that enables fully automated, “touch of a button” connectivity to the hosting center or service, with dynamic security, service-level guarantees and automated billing, all with one button.

Gt: What led to this approach versus the company trying to sell its resources to users?

JACKSON: We are an enablement company. We create technology and software that allows other organizations to really capitalize on their offerings. Google made a smart move when it chose not to create content. Google is exceptional at what it does, but it does not compete with “subject experts.” Remember that utility computing is more than delivering raw cycles; it is about delivering a full compute environment ready to accomplish a specific task. An organization that works with oil and gas companies will already have relationships with them; it will know what network, storage and compute solutions work best, and it will know what security constraints must be satisfied. Our software allows such an organization to automatically customize and deliver this environment in minutes — on demand, on-the-fly. This company probably knows more about its customer than we will ever know, and it makes sense that they offer this service.

We worked with Amazon to enable their recently announced online Internet data mining service. We didn't know much about mining the entire Internet, and we did not need to. Amazon knew their data, their services and their customers. We helped them set up a system where a user presses a button and, on-the-fly, a new cluster is built from scratch with secure network, compute and storage facilities. The source data is automatically pre-processed, the compute nodes are customized, the needed applications are automatically started, and an entire data mining environment is created in minutes. With our system, Amazon was able to take their expertise and scale it, allowing them to focus on what they do best while delivering the benefits to a far larger customer base.

Another space we are currently working in is providing large security-focused government organizations with instant access to vast quantities of additional HPC resources in the event of a national disaster. Moab Utility/Hosting Suite is being used by these government organizations to instantly overflow national emergency workload onto participating government, academic and corporate sites. At first this sounds like a grid but, in reality, each of these sites is a separate environment prepared only for its local workload until Moab adapts it, making the many changes needed to create a cohesive environment that can respond to the national disaster.

Tier 1 and tier 2 hardware vendors already have a relationship with their customers. It would make sense for them to provide cluster overflow and emergency failover-based utility computing services. They know the customers and the technology being shipped. Our job is to empower these vendors to provide this service more effectively and efficiently than they could ever do on their own.

“Boutique” utility computing allows any software or IT service company to deliver complete custom “solutions” to its customers using insight and relationships we can never hope to have. We do what we do well, which is empower them to deliver their skills seamlessly, efficiently and reliably.

I think this is a pure win-win situation. We win, the customers win and the vendors win.

Gt: Do you know whether more users of the utility/hosting suite are using the solution for internal or external purposes?

JACKSON: It's a mix. Right now, we see more “soft” utility computing for internal purposes and more “hard” utility computing for external purposes. Soft utility computing is being used to enable condominium clusters, dynamically reconfigurable grids, automated failure recovery and other services. Hard utility computing is driving the big allocations of raw resources, with provisioning of fully customized service environments.

Gt: There is sometimes confusion about cluster computing vs. Grid computing vs. utility computing. Seeing as how your company sells solutions for all three, can you do your best to clarify these terms?

JACKSON: This is an industry with fluid terms, so any definition we give will be subject to debate. And, again, we are a workload and resource management company, so our focus is based on what tasks are required to fully optimize these systems. With these caveats, we see cluster computing as focusing on maximizing the delivered science of one or more clusters under a single administrative domain. Grid computing focuses on bringing together resources that sit under administrative domains with diverse mission objectives but that share a common goal of extracting maximal performance across all systems. Proper Grid computing allows each organization complete independence and creates a consistent, easy-to-use global view for management and, optionally, end users. Grids and Grid relationships are generally worked out ahead of time and are generally static.

Utility computing is the next frontier. It takes everything that is good about clusters and grids and adds the ability to, first, dynamically establish relationships and, second, build complete compute environments. These relationships are completely flexible, but encompass new service guarantees, charging and workload management protocols. The compute environments can be built on-the-fly and are holistic, incorporating network, storage, compute and software resources together with supporting services. The key to utility computing is perfect transparency and tight integration. When the customer needs it, his cluster just gets bigger or changes to become what is needed for the workload. When a node or a network goes down, it gets replaced. Yes, there are a lot of things going on behind the scenes to create this magic, but, to the end user, it's all magic.
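
The "built on the fly, holistic" part of that definition can be pictured as an ordered provisioning pipeline; the step names in this Python sketch are hypothetical placeholders for the real network, storage, compute and software actions.

    def build_environment(request: dict) -> list:
        # Return the ordered provisioning steps for an on-demand environment.
        return [
            f"allocate {request['nodes']} nodes",
            f"configure a private network for {request['tenant']}",
            f"attach {request['storage_tb']} TB of scratch storage",
            f"image nodes with {request['os']}",
            f"install {', '.join(request['applications'])}",
            "register the new resources with the workload manager",
        ]

    request = {"tenant": "acme", "nodes": 64, "storage_tb": 10,
               "os": "linux-image-x", "applications": ["solver", "postproc"]}
    for step in build_environment(request):
        print("->", step)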

Building on the shoulders of Grid, utility computing allows the next generation of high performance computing and data centers, the true on demand vision.

Gt: Can you tell me a little about your background in HPC? It looks like you've done quite a bit before coming to Cluster Resources.

JACKSON: I've had the good fortune of working with many leaders in the Grid effort, as my career has taken me to IBM, NCSA, SDSC, LLNL, MHPCC, PNNL and a few other locations before starting with Cluster Resources. Those early days also involved consulting and volunteer work directly helping over 1,000 sites manage their clusters. These experiences were invaluable and helped shape not only our cluster, Grid and utility computing products, but our whole approach to delivering them. I found that organizations that were highly competent and highly agile were also a joy to work with, and that we could jointly enable new technologies to overcome any obstacle in amazingly short amounts of time. Other organizations did not seem to get this paradigm and, though very big, were unable to detect the pulse of the industry.

We have tried very hard to keep that agility alive at Cluster Resources, with dozens of joint research projects throughout the world, solid relationships with many of the industry visionaries and a support team that generally resolves all issues in under two hours. Through this combination, we have found amazing customer loyalty. In fact, over the years, we have not lost a single customer!

Gt: I see you're a founding member of the GGF scheduling working group. How active are you in the GGF right now?

JACKSON: I was fortunate to be involved with the GGF and its precursor organizations way back in the very early days. In fact, it was so early that we could fit all the sites around one small table! It was definitely enjoyable talking about those grand ideas and world-changing technologies, and I have a lot of good memories from that time. Over the years, we've continued to be involved with GGF in many different ways, working on protocols, directions and standards, though we are less involved in the formal meetings.

Gt: Finally, I'm wondering if you could give our readers a little insight into your life outside of the office. What are your personal hobbies and interests? What are your plans for when your working days are done?

JACKSON: In terms of hobbies, I am an avid hiker with a particular love of high mountains and narrow slot canyons.

There is no question what I'm doing when my working days are done — I'm farming! I spent most of my growing up years on a farm in Idaho and absolutely loved it. There's just something about fresh air and hard work that makes the soul feel good. We learned to work very hard and do it right and that experience is something I very much want to share with my kids.
