Univa CEO Gary Tyreman on the Evolution of HPC, Big Data and Cloud

By Tiffany Trader

May 7, 2012

It’s been a little over a year since Univa took over stewardship of the open source workload manager and acquired the founding Sun Grid Engine team from Oracle, and in that time, they’ve stabilized the product and implemented over 200 bug fixes. Last week, the company announced its third production release, Univa Grid Engine 8.1, which is scheduled for general availability in the first half of 2012.

HPC in the Cloud spoke with Univa CEO Gary Tyreman to learn more about the offering and discuss the company’s strategy around cloud computing.

The latest release is targeted at decreasing the TCO of Grid Engine at scale, notes Tyreman. Univa has sought to increase availability by improving the stability of the product, a core focus over the last year. They’ve added features that target very high-volume clusters with large numbers of jobs, small jobs in particular, and they’ve made changes to improve the performance of the cluster overall, which is also good news for those using it in large environments.

They’ve also focused on streamlining tasks from the administrator’s perspective: helping admins find information faster, diagnose issues sooner, and on-board and manage new applications in the workflow more seamlessly.

The team is happy with the progress they’ve made. “Univa Grid Engine is an evolution of a product and a path forward,” Tyreman posits. “But more importantly it’s a drop-in replacement, so it’s really an upgrade, as opposed to a rip-and-replace. That’s the first thing we’re most proud of.”

By focusing on product stability as well as the performance and availability of the cluster, the company has experienced record sales and substantial customer growth. In Q1 of this year, Univa added more customers than in all of 2011. Tyreman points to additional proof points: Univa counts four of the top five sites as measured by core count among its customers, and four of the top five enterprise or commercial customers have upgraded to Univa Grid Engine rather than sticking with the open source version or Oracle’s.

Not surprisingly, the majority of customer sites are still in the traditional science and engineering space, but Univa is seeing a significant uptick in big data and business applications, so-called non-traditional HPC applications. In addition to classic HPC verticals like semiconductors, EDA, life sciences, bio-genomics, oil and gas, and digital media, Univa Grid Engine customers are using the product (or asking about using it) in Hadoop environments. Rather than dealing with the headaches and costs of setting up both a Hadoop cluster and a compute cluster, they are bringing the two together. The other new trend comes from ISVs that are using Univa’s Grid Engine software to run business applications.

Perhaps what makes this data point all the more telling is that it’s not the result of a concerted effort. Tyreman’s take is that the market is pulling Univa in this direction. Like everyone else in the industry and ecosystem, he feels that Univa is benefiting from the so-called mainstreaming of HPC. The fact that they’ve seen several of their customers running Univa Grid Engine in a production Hadoop environment speaks to this point. “I think it’s good for us and the overall industry,” says Tyreman.

Almost every customer outside of EDA is asking Univa about Hadoop and big data, the CEO tells me. From this he infers that executives are asking how these technologies can help them solve their problems. Yes, they could go with Hadoop and buy a new cluster, but doing so would require a significant capital outlay. Bringing Hadoop into an existing cluster allows them to test the waters without a huge investment.
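As a rough illustration of what bringing Hadoop onto an existing cluster can look like, here is a minimal sketch that submits a Hadoop job to a Grid Engine cluster through the open source DRMAA Python binding, the standard job-submission API that Grid Engine supports. It is not Univa’s own Hadoop integration; the jar path, data directories, and parallel-environment name are hypothetical.

```python
# Minimal sketch: submit a Hadoop MapReduce job to an existing Grid Engine cluster
# through the DRMAA Python binding (pip install drmaa), rather than to a separate
# Hadoop-only cluster. Jar path, data directories and PE name are hypothetical.
import drmaa

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "hadoop"              # assumes the Hadoop client is installed on exec hosts
    jt.args = ["jar", "/opt/examples/wordcount.jar",
               "wordcount", "/data/input", "/data/output"]
    jt.nativeSpecification = "-pe smp 8"     # hypothetical parallel environment request

    job_id = session.runJob(jt)
    print("Submitted Hadoop job as Grid Engine job", job_id)

    # The scheduler treats it like any other cluster job; wait for completion.
    info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print("Exit status:", info.exitStatus)

    session.deleteJobTemplate(jt)
```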

Addressing the needs of these new markets involves a change to the Hadoop environment at an API level, notes Tyreman. It involves broadening and simplifying the API so that your Web 2.0 developer can interface with the product. “The Hadoop integration that we have has a lot of opportunity for improvement as the demand expands,” he adds.

Tyreman explains that for much of 2011 and 2012 the company has been highly focused on its Grid Engine offering; however, cloud, and specifically HPC cloud, remains a key tenet of Univa’s strategy. Univa built the first Grid Engine cloud and the first Grid Engine hybrid cloud, the CEO points out. “Both of which were used by enterprises in production. Both of which were used to solve real-world problems. And these were completed more than two years ago,” he emphasizes.

Univa has Grid Engine customers that are trying to figure out how to pull in resources from a Eucalyptus cloud, use those systems, and then release them back to Eucalyptus. Other customers are looking into building out hybrid infrastructures using public cloud providers like Amazon. The company has also integrated with Puppet to let customers plug into the cloud ecosystem.
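To make the hybrid picture concrete, the sketch below shows the kind of bursting logic such a setup implies: watch the Grid Engine backlog and provision cloud execution hosts when it grows. This is an illustration, not UniCloud code; the threshold, AMI, and instance type are hypothetical, and it assumes boto3 with AWS credentials configured and an image that joins the cluster automatically on boot.

```python
# Hedged sketch of hybrid-cloud bursting logic, not UniCloud itself: watch the
# Grid Engine backlog with qstat and add cloud execution hosts when it grows.
# Threshold, AMI and instance type are hypothetical.
import subprocess
import boto3

PENDING_THRESHOLD = 100              # hypothetical backlog that triggers bursting
AMI_ID = "ami-0123456789abcdef0"     # hypothetical pre-built execution-host image

def pending_jobs() -> int:
    """Count Grid Engine jobs in the 'qw' (queued, waiting) state."""
    out = subprocess.run(["qstat", "-u", "*"],
                         capture_output=True, text=True, check=True).stdout
    return sum(1 for line in out.splitlines() if " qw " in line)

def burst(count: int) -> None:
    """Launch additional execution hosts in EC2."""
    ec2 = boto3.client("ec2")
    ec2.run_instances(ImageId=AMI_ID, InstanceType="c5.large",
                      MinCount=count, MaxCount=count)

if __name__ == "__main__":
    backlog = pending_jobs()
    if backlog > PENDING_THRESHOLD:
        burst(min(backlog // 50, 10))   # roughly one host per 50 queued jobs, capped at 10
```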

“It’s all about adding value to Univa Grid Engine to ensure that other IT assets that have been deployed are being fully leveraged to make the system easier to use and easier to manage on a day-to-day basis,” says Tyreman.

Univa’s cloud product, UniCloud, is available through RightScale and via the Amazon Marketplace that was launched last week. They have several customers who have already implemented UniCloud and others who are looking into adopting similar solutions, which Univa is working to provide.

That said, the Univa CEO does not view HPC, big data and cloud as distinct categories unto themselves. “We don’t see them as three things that require three hammers; they are fundamentally similar problems that need to be solved.”

Taking that one step further, Tyreman says that Hadoop environments today are basically clusters, and clusters require scheduling. As a proof point of big data and HPC coming together, Tyreman points to IBM’s acquisition of Platform Computing. The Hadoop community has kicked off a project to build a scheduler. OpenStack, the poster child for cloud, is currently building its own scheduler. You have all these industry examples, all these separate parties reinventing the wheel, and “I’m selling rubber,” says Tyreman. “We see those things as being very tied together.”

“When you take a single backplane that can run compute, big data and other types of workloads, which is what the IBM acquisition was directed at,” says Tyreman, “you need something to build and manage the applications that you provision into that environment and that’s what UniCloud is being designed and tailored to do.”

“The fact that we took a step back and focused on Grid Engine is really a tactical step, but it’s also a recognition that the industry is not exactly where we are. So by the end of this year you will see us deliver the next version of UniCloud, which will be specifically targeted at managing those applications within those broader contexts that we have been talking about.”

When asked about the challenge of licensing in the cloud, Tyreman replies that Univa is working on a new offering directed toward companies who are using very expensive licenses and want to share them. “The goal with that product,” he says, “is to enable very complex environments to share licenses and therefore you don’t need to buy as many, specifically for EDA, for example.”

“Licensing within the cloud encompasses the same problem,” notes the CEO. “It will continue to take time for people to work around it. A lot of ISVs keep talking about it, but people fear the different models without understanding it. There is concern about cannibalizing an existing revenue stream.

“If you go and look at the Amazon Marketplace, the Univa Grid Engine that is available there, pricing is posted. Take the number of hours in a day, multiply by days in a year, and divide by price per core and I’m pricing it exactly the same. There are no premiums. And if you choose to go with that cloud model, we have a second price structure, which is all you can eat for a fixed price. We did that on purpose so we wouldn’t become part of that licensing fear discussion. We can move past it pretty quick.”
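Worked through with hypothetical numbers (the actual marketplace figures are not reproduced here), the parity Tyreman describes is simple arithmetic: the per-core-hour marketplace price is the annual per-core price spread over the hours in a year, so a core running around the clock costs the same either way.

```python
# Hedged arithmetic sketch with hypothetical figures; it illustrates the pricing
# parity Tyreman describes, not Univa's actual price list.
HOURS_PER_YEAR = 24 * 365              # 8,760 hours
annual_price_per_core = 100.00         # hypothetical annual per-core price (USD)

hourly_price_per_core = annual_price_per_core / HOURS_PER_YEAR
print(f"Marketplace rate:    ${hourly_price_per_core:.5f} per core-hour")

# A core running around the clock for a year lands back on the annual price -- no cloud premium.
print(f"One core for a year: ${hourly_price_per_core * HOURS_PER_YEAR:.2f}")
```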

The next version of UniCloud is scheduled to arrive in Q4 and will add a graphical interface. Univa is also preparing a fourth Grid Engine release, scheduled to roll out in early 2013, which will add features “to drive the value at scale.”

“For a small company,” says Tyreman, “we have a pretty aggressive engineering roadmap and delivery mechanisms,” adding “we spend a lot of time with large core-count users that have very specific problems that can trickle down and add value for the smaller sites.”
