Cluster Resources Takes Aim at Commercial Datacenter

By Michael Feldman

August 5, 2009

Cluster Resources, the middleware vendor that brought its Moab cluster management technology to market, is making a major move to broaden its footprint beyond the company’s high performance computing base. As of this week, the company will be known as Adaptive Computing to reflect its expansion into commercial datacenters and private cloud environments. The new organization will encompass two business units: Cluster Resources, for its traditional HPC customers, and Adaptive Computing for the commercial enterprise side.

In fact, the company had been heading in this direction for some time. Founded in 2001, Cluster Resources developed its Moab cluster workload management product line around the open source Maui scheduler technology. At the time, cluster computing was generally confined to high performance computing. But over the next several years, clusters thoroughly infiltrated the enterprise, and by 2007, Cluster Resources’ business was split 50-50 between the HPC and commercial datacenter markets. Commercial customers today include Yahoo and a number of financial institutions, among others.

As clusters became the platform of choice throughout the industry, systems grew larger and were employed to support a much wider variety of workloads — database machines, real-time transactional systems, Web application platforms, and so on. Today the trend is to consolidate datacenter infrastructure everywhere, and that means these large multi-faceted facilities now resemble supercomputers to a great degree. In both environments, datacenter-level virtualization and workload automation are often the norm.

Datacenter-level virtualization and automation also happen to be the foundation for cloud architectures. In a cloud, all of the infrastructure (compute, storage, and networks), as well as software licenses, is treated as a shared pool of resources. So the same approach of abstracting software from individual hardware components is now being applied across the entire datacenter landscape.
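As a rough illustration of that pooling idea, consider the minimal sketch below. It is not Moab's actual interface, and every class, field, and number in it is invented for the example; it only shows what it means to treat compute, storage, and licenses as one inventory that requests are carved out of, regardless of which physical boxes supply them.

    # Hypothetical sketch of a datacenter-wide shared resource pool.
    # Names and capacities are invented; this is not Moab code.
    from dataclasses import dataclass

    @dataclass
    class Pool:
        cores: int
        storage_tb: int
        licenses: dict  # e.g. {"solver": 64}

        def can_fit(self, req):
            return (self.cores >= req["cores"]
                    and self.storage_tb >= req["storage_tb"]
                    and all(self.licenses.get(k, 0) >= v
                            for k, v in req.get("licenses", {}).items()))

        def allocate(self, req):
            # Carve a request out of the pool if everything it needs is available.
            if not self.can_fit(req):
                return False
            self.cores -= req["cores"]
            self.storage_tb -= req["storage_tb"]
            for k, v in req.get("licenses", {}).items():
                self.licenses[k] -= v
            return True

    # One pool spanning the whole datacenter: compute, storage, and software licenses.
    datacenter = Pool(cores=4096, storage_tb=500, licenses={"solver": 64})
    print(datacenter.allocate({"cores": 256, "storage_tb": 10, "licenses": {"solver": 8}}))  # True

The point of the abstraction is that a requester asks for capacity, not for particular machines; the same request could be satisfied by any hardware underneath the pool.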

Since Moab’s main strength is managing disparate architectures at the level of the datacenter, Cluster Resources saw this convergence as an opportunity to leverage its core technology. With that in mind, the company has introduced the Moab Adaptive Computing Suite, the flagship product of the datacenter business unit. In addition to workload management, it includes features that support the kind of computing more commonly associated with typical enterprise applications, namely system-level virtualization. Unlike in HPC, in other large-scale computing environments it is common to have multiple applications running concurrently on a cluster node, or even on a single CPU.
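The scheduling consequence is that placement becomes a packing problem rather than a whole-node reservation, as in traditional batch HPC. The sketch below is hypothetical (the node names, core counts, and job names are invented, and it is not Moab code); it shows a simple first-fit packing in which several virtualized workloads end up sharing one node.

    # Hypothetical first-fit packing of several workloads onto shared nodes.
    # Node sizes and job names are invented for illustration only.
    nodes = {"node01": 16, "node02": 16}          # free cores per node
    jobs = [("web-tier", 4), ("db-cache", 6), ("reporting", 4)]

    def place_packed(free_cores, job_list):
        # Several jobs may share a node (e.g. as virtual machines).
        placement = {}
        for name, cores in job_list:
            for node, free in free_cores.items():
                if free >= cores:
                    free_cores[node] = free - cores
                    placement[name] = node
                    break
        return placement

    print(place_packed(dict(nodes), jobs))
    # e.g. {'web-tier': 'node01', 'db-cache': 'node01', 'reporting': 'node01'}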

“The main difference that we’re seeing between supercomputers and commercial enterprise sites is that the workload types are slightly different,” explains Peter ffoulkes, Adaptive Computing’s vice president of marketing. Specifically, there tend to be a lot of transactional workloads in the enterprise. But ffoulkes also notes there is a blurring of workload characteristics between the two areas. For example, enterprise applications such as business intelligence (BI) are both data and resource intensive. In fact, almost all informatics applications fit this profile, and they span both HPC and the more traditional enterprise space.

The big OEMs are also reflecting this convergence in their latest servers aimed at scaled-out datacenters and in their general approach to next-generation computing. IBM (Dynamic Infrastructure), HP (Adaptive Infrastructure), and Cisco (Unified Computing) are all pushing their system architectures into this model, in one variation or another. Since IBM and HP are also close partners with Cluster Resources, we can expect to see the new Adaptive Computing technology show up on future deployments.

According to ffoulkes, the company’s relationships with both IBM and HP have been extended. For IBM, this means that Moab-based products are now being sold with IBM part numbers alongside Big Blue servers (for example, iDataPlex systems) and the xCAT and Tivoli provisioning products. HP has also expanded the partnership to integrate the Moab technology into its HP iLO (Integrated Lights-Out) management and HP SA (Server Automation) software to create a “Dynamic Workload Utility” for scaled-out environments.

Although the HPC side of the company is now under its own business unit, ffoulkes said the company will maintain its commitment to the supercomputing market. Today 60 percent of the top systems are powered by Moab technology, including the top two machines: Roadrunner at Los Alamos National Lab and the Jaguar system at Oak Ridge. More recently, the University of Southampton announced it had ordered a 1,000-node IBM iDataPlex supercomputer that will include the Adaptive HPC Suite for workload management. That system is intended to run both Linux and Windows applications in a wide range of research areas, including climate, pharmaceuticals, bioscience, nanoscience, medical and chemical systems, transport, the environment, and engineering.

One fortuitous side effect of the company’s dual focus is that organizations that run mixed HPC-enterprise workloads can use Adaptive Computing as a one-stop shop for workload management. For example, a bank may run transactional workloads during the day and risk management, portfolio pricing, and Sarbanes-Oxley compliance reporting overnight. If possible, the firm would like to use the same infrastructure to do all of this. While not that common, ffoulkes says some larger organizations are motivated to build these scaled-out generic datacenters that can be repurposed for heterogeneous applications on demand.
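A crude way to picture that repurposing is a clock-driven policy that decides which class of work owns the shared cluster at a given hour. The sketch below is hedged: the 07:00/18:00 cutover times and the workload classes are invented for illustration, and this is not how any particular bank or the Moab products actually express the policy.

    # Hypothetical day/night repurposing policy for a shared cluster.
    # Cutover hours and workload classes are assumptions for this example.
    from datetime import datetime

    DAY_START, NIGHT_START = 7, 18   # assumed business hours: 07:00-18:00

    def active_workload_class(now=None):
        hour = (now or datetime.now()).hour
        if DAY_START <= hour < NIGHT_START:
            return "transactional"        # customer-facing, latency-sensitive work
        return "batch"                    # risk, portfolio pricing, compliance reports

    def admit(job_class, now=None):
        # Admit a job only if the cluster is currently dedicated to its class.
        return job_class == active_workload_class(now)

    print(admit("batch", datetime(2009, 8, 5, 23, 0)))          # True: overnight window
    print(admit("transactional", datetime(2009, 8, 5, 23, 0)))  # False: outside business hours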

“We’re very much seeing this type of convergence, but it’s happening in rather a spotty sort of fashion,” says ffoulkes. “It really depends upon where companies are starting from and what they’re able to do. But we’re seeing it on both sides — in the HPC world and the commercial datacenter world.”
