Middleware Is Cool

By Tiffany Trader

April 16, 2013

A quote made the rounds during Adaptive Computing’s annual user conference, MoabCon, held last week in Park City, Utah. Jeff Hammerbacher, founder of the data analysis team at social media behemoth Facebook, famously stated: “The best minds of my generation are thinking about how to make people click ads…[and] that sucks.”

During his keynote address, Adaptive Computing CEO Rob Clyde shared these words and then addressed the roomful of HPCers.

“Well, I can tell you that’s not what we do in this business,” he stated. “We are trying to cure cancer and perform rocket science, and do amazing things with predicting the weather, and ocean currents, and seismic research – some of the most relevant things that are happening in the world, and our industry is involved in that.”

Adaptive’s Cool Cred

Despite the long list of impressive accomplishments that middleware enables, it generally fails to elicit the same excitement as, say, brand-new leadership-class hardware. But middleware is cool, and you don’t have to take Adaptive’s word for it. The newfound status comes from none other than technology analyst firm Gartner, Inc., which included Adaptive in its “Cool Vendors in Cloud Management, 2013” report.

The report, which covers five vendors providing cloud management platform and/or cloud migration capabilities, is aimed at “CIOs, vice presidents (VPs) and directors of IT, as well as enterprise and infrastructure architects looking to deliver cloud-based, on-demand services that require infrastructure optimization (workload balancing).” Gartner notes that “service providers may also be interested in this solution, due to its ability to optimize the infrastructure, thus dropping service delivery costs.”

The Adaptive CEO welcomed the recognition. As he shared with HPCwire, the company’s cloud management product, Moab Cloud Suite, enables IT architects and the enterprises they work for to realize cloud’s promise of maximum return on investment by optimizing resource utilization.

At its core, Adaptive’s cloud solution relies on the same Moab intelligence engine as the vendor’s HPC suite, which supports ground-breaking science and technology by delivering policy-based governance to the largest systems in the world, the ones engaged in hero problems, like curing deadly diseases and protecting our nuclear arsenal.

On the analyst firm’s website, Gartner Vice President Michele Cantara describes the qualities of a Cool Vendor. “A cool vendor is a smaller, lesser-known vendor – someone who provides innovative technology or services,” she says. “And they’re lesser known because they’re less mature and they haven’t gotten attention from the media or Gartner.”

The Adaptive CEO agrees with the assessment, noting that Adaptive’s commitment to innovation is reflected in the company’s extensive patent portfolio, one of the largest related to private cloud computing. “We work hard to push the envelope of what is possible and have invented many of the core concepts behind HPC scheduling and private cloud optimization,” adds Marketing VP Chad Harrington.

Gartner observes that cool vendors are a good source of leading indicators about what’s to come. On that note, Clyde says that private cloud will continue to grow. He observes that many of the problems of private cloud, for example scalability and efficient use of resources, have already been solved on the HPC side. The CEO referred to a recent Uptime Institute survey on server utilization that showed a global average efficiency rating of less than 10 percent. As energy continues to be a constraint on systems large and small, efficient system usage will become essential, and this is a major focus for the company.

Getting the Cool Vendor stamp of approval is also a good indicator of a recipient’s future success. The analyst firm has profiled more than 1,400 Cool Vendors since 2004; 70 percent are still in business, and 21 percent have been part of a merger or acquisition.

Adaptive @Scale

The past 12 months have been particularly fruitful for the company. The Titan supercomputer at customer site Oak Ridge National Laboratory is the reigning TOP500 champ, and the University of Tennessee’s Beacon machine, another Moab system, is number one on the Green500 list.

“We love big, complex systems,” the CEO shared during his MoabCon keynote. “We certainly can handle others, but we want to make sure that we can run on the largest of the large. Our theory is if we can run on the largest systems, then we can run on everything else.”

He observes that Adaptive’s partners share a similar strategy: “Cut your teeth on the big complex tasks, and the rest falls into place.”

If a prospective customer asks, “How do we know your product will scale?” Adaptive can respond: “Well, we already run on the largest systems in the world.”

For a small company of just over 100 employees, Adaptive has a big presence as the largest provider of HPC and private cloud workload management software. I ask Clyde how they do it, and he doesn’t miss a beat: “It’s our partners and customers,” he responds. Adaptive has strong ties to nearly all the major labs and solid relationships with HPC rock stars such as Cray, HP, IBM and Intel.

Inaugural Adaptie Awards

The conference also set the stage for the first annual Adaptie Awards, which recognize organizations and individuals that have pushed the envelope on technological progress. There were three awards in all.

Best Use of Moab in a Private Cloud went to Bank of America. The financial institution was honored for using Adaptive Computing’s Moab Cloud Suite for its high-density, service-oriented virtualized compute platform. An early innovator in private cloud, the bank runs one of the most advanced, large-scale privately managed IT setups of its kind.

Best Use of Moab in HPC went to the National Oceanic and Atmospheric Administration (NOAA). The federal agency was chosen for its pioneering use of Adaptive Computing’s Moab HPC Suite to develop better models for predicting climate variability and change.

The Lifetime Achievement award was presented to Don Maxwell, HPC systems team lead at Oak Ridge National Laboratory (ORNL), home of the Titan supercomputer. Maxwell has made many significant contributions to the HPC industry. He was instrumental in providing both requirements and testing for the initial port of Moab to the Cray X-series platform. In 2008, he was awarded the distinguished ACM Gordon Bell Prize for helping the ORNL Jaguar supercomputer achieve 400+ teraflops sustained performance. Currently, Maxwell is helping Titan achieve its performance goals. Maxwell is held in high esteem by his peers, as was clear from the audience’s reaction when he won the award.
